Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict)
1303.0166
2101921379
We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real world experiment shows that we can super-resolve beyond the Rayleigh limit.
Several papers consider a bounded support constraint to overcome the diffraction limit. Another possible constraint is sparsity: Donoho @cite_7 studied the problem of recovering a sparse signal from only the low frequencies of its Fourier transform. Recently, Candes and Fernandez-Granda @cite_19 also studied conditions under which sparse signals can be recovered; their results apply to signals that have a sparse representation. Sparsity has also been used in practice to break the diffraction limit with hardware, e.g. in stimulated emission depletion (STED) microscopy @cite_13 .
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_7" ], "mid": [ "2078524554", "2115737236", "2040882982" ], "abstract": [ "This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object---the high end of its spectrum---from coarse scale information only---from samples at the low end of the spectrum. Suppose we have many point sources at unknown locations in @math and with unknown complex-valued amplitudes. We only observe Fourier samples of this object up until a frequency cut-off @math . We show that one can super-resolve these point sources with infinite precision---i.e. recover the exact locations and amplitudes---by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least @math . This result extends to higher dimensions and other models. In one dimension for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop some theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary.", "We propose a new type of scanning fluorescence microscope capable of resolving 35 nm in the far field. We overcome the diffraction resolution limit by employing stimulated emission to inhibit the fluorescence process in the outer regions of the excitation point-spread function. In contrast to near-field scanning optical microscopy, this method can produce three-dimensional images of translucent specimens.", "Preface to the Second Edition. Preface to the First Edition. 1 Ray Optics. 2 Wave Optics. 3 Beam Optics. 4 Fourier Optics. 5 Electromagnetic Optics. 6 Ploarization Optics. 
7 Photonic-Crystal Optics. 8 Guided-Wave Optics. 9 Fiber Optics. 10 Resonator Optics. 11 Statistical Optics. 12 Photon Optics. 13 Photon and Atoms. 14 Laser Amplifiers. 15 Lasers. 16 Semiconductor Optics. 17 Semiconductor Photon Sources. 18 Semiconductor Photon Detectors. 19 Acousto-Optics. 20 Electro-Optics. 21 Nonlinear Optics. 22 Ultrafast Optics. 23 Optical Interconnects and Switches. 24 Optical Fiber Communications. A Fourier Transform. B Linear Systems. C Modes of Linear Systems. Symbols and Units. Authors. Index." ] }
1303.0166
2101921379
We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real world experiment shows that we can super-resolve beyond the Rayleigh limit.
Finally, one should mention that the works above consider superresolution as the problem of breaking the diffraction limit, as opposed to trying to "only" increase the resolution of low-resolution sensors (e.g. @cite_20 ). This type of superresolution is not the topic of this paper, so we refer the reader to the review of Park, Park and Kang @cite_25 .
{ "cite_N": [ "@cite_25", "@cite_20" ], "mid": [ "2067042811", "2170057902" ], "abstract": [ "A new approach toward increasing spatial resolution is required to overcome the limitations of the sensors and optics manufacturing technology. One promising approach is to use signal processing techniques to obtain an high-resolution (HR) image (or sequence) from observed multiple low-resolution (LR) images. Such a resolution enhancement approach has been one of the most active research areas, and it is called super resolution (SR) (or HR) image reconstruction or simply resolution enhancement. In this article, we use the term \"SR image reconstruction\" to refer to a signal processing approach toward resolution enhancement because the term \"super\" in \"super resolution\" represents very well the characteristics of the technique overcoming the inherent resolution limitation of LR imaging systems. The major advantage of the signal processing approach is that it may cost less and the existing LR imaging systems can be still utilized. The SR image reconstruction is proved to be useful in many practical cases where multiple frames of the same scene can be obtained, including medical imaging, satellite imaging, and video applications. The goal of this article is to introduce the concept of SR algorithms to readers who are unfamiliar with this area and to provide a review for experts. To this purpose, we present the technical review of various existing SR methodologies which are often employed. Before presenting the review of existing SR algorithms, we first model the LR image acquisition process.", "An iterative algorithm to increase image resolution is described. Examples are shown for low-resolution gray-level pictures, with an increase of resolution clearly observed after only a few iterations. The same method can also be used for deblurring a single blurred image. 
The approach is based on the resemblance of the presented problem to the reconstruction of a 2-D object from its 1-D projections in computer-aided tomography. The algorithm performed well for both computer-simulated and real images and is shown, theoretically and practically, to converge quickly. The algorithm can be executed in parallel for faster hardware implementation." ] }
1303.0422
1605385825
Analyzing networks requires complex algorithms to extract meaningful information. Centrality metrics have shown to be correlated with the importance and loads of the nodes in network traffic. Here, we are interested in the problem of centrality-based network management. The problem has many applications such as verifying the robustness of the networks and controlling or improving the entity dissemination. It can be defined as finding a small set of topological network modifications which yield a desired closeness centrality configuration. As a fundamental building block to tackle that problem, we propose incremental algorithms which efficiently update the closeness centrality values upon changes in network topology, i.e., edge insertions and deletions. Our algorithms are proven to be efficient on many real-life networks, especially on small-world networks, which have a small diameter and a spike-shaped shortest distance distribution. In addition to closeness centrality, they can also be a great arsenal for the shortest-path-based management and analysis of the networks. We experimentally validate the efficiency of our algorithms on large networks and show that they update the closeness centrality values of the temporal DBLP-coauthorship network of 1.2 million users 460 times faster than it would take to compute them from scratch. To the best of our knowledge, this is the first work which can yield practical large-scale network management based on closeness centrality values.
To the best of our knowledge, there are only two works that deal with maintaining centrality in dynamic networks, and both are interested in betweenness centrality. Lee et al. proposed the QUBE framework, which updates betweenness centrality upon edge insertions and deletions within the network @cite_5 . QUBE relies on the biconnected component decomposition of the graph. Upon an edge insertion or deletion, assuming that the decomposition does not change, only the centrality values within the updated biconnected component are recomputed from scratch. If the edge insertion or deletion affects the decomposition, the modified graph is decomposed into its biconnected components and the centrality values in the affected part are recomputed. The distribution of the vertices among the biconnected components is an important criterion for the performance of QUBE: if a large component exists, which is the case for many real-life networks, one should not expect a significant reduction in update time. Unfortunately, the performance of QUBE is reported only on small graphs (fewer than 100K edges) with very low edge density. In other words, it performs well only on small graphs with a tree-like structure having many small biconnected components.
{ "cite_N": [ "@cite_5" ], "mid": [ "2036977244" ], "abstract": [ "The betweenness centrality of a vertex in a graph is a measure for the participation of the vertex in the shortest paths in the graph. The Betweenness centrality is widely used in network analyses. Especially in a social network, the recursive computation of the betweenness centralities of vertices is performed for the community detection and finding the influential user in the network. Since a social network graph is frequently updated, it is necessary to update the betweenness centrality efficiently. When a graph is changed, the betweenness centralities of all the vertices should be recomputed from scratch using all the vertices in the graph. To the best of our knowledge, this is the first work that proposes an efficient algorithm which handles the update of the betweenness centralities of vertices in a graph. In this paper, we propose a method that efficiently reduces the search space by finding a candidate set of vertices whose betweenness centralities can be updated and computes their betweenness centeralities using candidate vertices only. As the cost of calculating the betweenness centrality mainly depends on the number of vertices to be considered, the proposed algorithm significantly reduces the cost of calculation. The proposed algorithm allows the transformation of an existing algorithm which does not consider the graph update. Experimental results on large real datasets show that the proposed algorithm speeds up the existing algorithm 2 to 2418 times depending on the dataset." ] }
1303.0422
1605385825
Analyzing networks requires complex algorithms to extract meaningful information. Centrality metrics have shown to be correlated with the importance and loads of the nodes in network traffic. Here, we are interested in the problem of centrality-based network management. The problem has many applications such as verifying the robustness of the networks and controlling or improving the entity dissemination. It can be defined as finding a small set of topological network modifications which yield a desired closeness centrality configuration. As a fundamental building block to tackle that problem, we propose incremental algorithms which efficiently update the closeness centrality values upon changes in network topology, i.e., edge insertions and deletions. Our algorithms are proven to be efficient on many real-life networks, especially on small-world networks, which have a small diameter and a spike-shaped shortest distance distribution. In addition to closeness centrality, they can also be a great arsenal for the shortest-path-based management and analysis of the networks. We experimentally validate the efficiency of our algorithms on large networks and show that they update the closeness centrality values of the temporal DBLP-coauthorship network of 1.2 million users 460 times faster than it would take to compute them from scratch. To the best of our knowledge, this is the first work which can yield practical large-scale network management based on closeness centrality values.
Green et al. proposed a technique to update betweenness centrality scores rather than recomputing them from scratch upon edge insertions (which can be extended to edge deletions) @cite_6 . The idea is to store the whole data structure used by the previous betweenness centrality computation. This storage is useful for two main reasons: first, it avoids a significant amount of recomputation, since some of the centrality values will stay the same; second, it enables a partial traversal of the graph even when an update is necessary. However, as the authors state, @math values must be kept on disk. For the Wikipedia user communication and DBLP coauthorship networks, which contain thousands of vertices and millions of edges, the technique of Green et al. requires terabytes of memory. The largest graph used in @cite_6 has approximately @math vertices and @math edges; the quadratic storage cost prevents their storage-based techniques from scaling any higher. The memory footprint of our algorithms, on the other hand, is linear, and hence they are much more practical.
{ "cite_N": [ "@cite_6" ], "mid": [ "2088961699" ], "abstract": [ "Analysis of social networks is challenging due to the rapid changes of its members and their relationships. For many cases it is impractical to recompute the metric of interest, therefore, streaming algorithms are used to reduce the total runtime following modifications to the graph. Centrality is often used for determining the relative importance of a vertex or edge in a graph. The vertex Betweenness Centrality is the fraction of shortest paths going through a vertex among all shortest paths in the graph. Vertices with a high betweenness centrality are usually key players in a social network or a bottleneck in a communication network. Evaluating the betweenness centrality for a graph G=(V, E) is computationally demanding and the best known algorithm for unweighted graphs has an upper bound time complexity of O(V^2+VE). Consequently, it is desirable to find a way to avoid a full re-computation of betweenness centrality when a new edge is inserted into the graph. In this work, we give a novel algorithm that reduces computation for the insertion of an edge into the graph. This is the first algorithm for the computation of betweenness centrality in a streaming graph. While the upper bound time complexity of the new algorithm is the same as the upper bound for the static graph algorithm, we show significant speedups for both synthetic and real graphs. For synthetic graphs the speedup varies depending on the type of graph and the graph size. For synthetic graphs with 16384 vertices the average speedup is between 100X-400X. For five different real world collaboration networks the average speedup per graph is in range of 36X-148X." ] }
1302.7028
2949525853
We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. Another extreme is the fixed-demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest-path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities into the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose that it is possible to define a routing indicator that accounts for the strengths of the marginals and peak demands and use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of different capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands.
Oblivious routing approaches to network optimization have been used in many different contexts, from switching ( @cite_23 @cite_28 @cite_4 ) to overlay networks ( @cite_27 @cite_26 ) to fundamental tradeoffs in distributed computing ( @cite_24 @cite_17 ), to name a few. In each of these cases, the primary performance measure is network congestion (or its dual, throughput). In fact, one could summarize the early work of Valiant, Borodin and Hopcroft as saying that randomization is necessary and sufficient for oblivious routing to give @math congestion in many packet network topologies, @math being the number of nodes.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_28", "@cite_24", "@cite_27", "@cite_23", "@cite_17" ], "mid": [ "", "2117358494", "2119016372", "2107997203", "", "2085399064", "2145818650" ], "abstract": [ "", "Routers built around a single-stage crossbar and a centralized scheduler do not scale, and (in practice) do not provide the throughput guarantees that network operators need to make efficient use of their expensive long-haul links. In this paper we consider how optics can be used to scale capacity and reduce power in a router. We start with the promising load-balanced switch architecture proposed by C-S. Chang. This approach eliminates the scheduler, is scalable, and guarantees 100 throughput for a broad class of traffic. But several problems need to be solved to make this architecture practical: (1) Packets can be mis-sequenced, (2) Pathological periodic traffic patterns can make throughput arbitrarily small, (3) The architecture requires a rapidly configuring switch fabric, and (4) It does not work when linecards are missing or have failed. In this paper we solve each problem in turn, and describe new architectures that include our solutions. We motivate our work by designing a 100Tb s packet-switched router arranged as 640 linecards, each operating at 160Gb s. We describe two different implementations based on technology available within the next three years.", "Link bundling is a way to increase routing scalability whenever a pair of label switching routers in MPLS are connected by multiple parallel links. However, link bundling can be inefficient as a label switched path (LSP) has to be associated with a particular link. In this paper, we show that the efficiency of link bundling can be significantly improved if traffic can be effectively distributed across the parallel links. 
We propose an IP switch architecture that is capable of distributing flows both inside the switch and among the parallel links based on operations that are relatively simple to implement. The switch requires no speedup, guarantees in-sequence packet delivery for a given flow, avoids complex coordination algorithms, and can achieve LSP throughput higher than the line rate. By means of simulation using IP traces, we investigate the performance of the proposed switch, and show that the switch achieves good load-balancing performance. We describe extensions to the basic architecture which allows for very large bundle size, handles incremental upgrade strategy, improves reliability, and accommodates nonIP traffic.", "Consider @math nodes connected by wires to make an n-dimensional binary cube. Suppose that initially the nodes contain one packet each addressed to distinct nodes of the cube. We show that there is a distributed randomized algorithm that can route every packet to its destination without two packets passing down the same wire at any one time, and finishes within time @math with overwhelming probability for all such routing requests. Each packet carries with it @math bits of bookkeeping information. No other communication among the nodes takes place.The algorithm offers the only scheme known for realizing arbitrary permutations in a sparse N node network in @math time and has evident applications in the design of general purpose parallel computers.", "", "Parallel communication algorithms and networks are central to large-scale parallel computing and, also, data communications. This paper identifies adverse source-destination traffic patterns and proposes a scheme for obtaining relief by means of randomized routing of packets on simple extensions of the well-known omega networks. 
Valiant and Aleliunas have demonstrated randomized algorithms, for a certain context which we call nonrenewal, that complete the communication task in time O (log N ) with overwhelming probability, where N is the number of sources and destinations. Our scheme has advantages because it uses switches of fixed degree, requires no scheduling, and, for the nonrenewal context, is as good in proven performance. The main advantage of our scheme comes when we consider the renewal context in which packets are generated at the sources continually and asynchronously. Our algorithm extends naturally from the nonrenewal context. In the analysis in the renewal context we, first, explicitly identify the maximum traffic intensities in the internal links of the extended omega networks over all source-destination traffic specifications that satisfy loose bounds. Second, the benefits of randomization on the stability of the network are identified. Third, exact results, for certain restricted models for sources and transmission, and approximate analytic results, for quite general models, are derived for the mean delays. These results show that, in the stable regime, the maximum mean time from source to destination is asymptotically proportional to log N . Numerical results are presented.", "A principle task in parallel and distributed systems is to reduce the communication load in the interconnection network, as this is usually the major bottleneck for the performance of distributed applications. We introduce a framework for solving online problems that aim to minimize the congestion (i.e. the maximum load of a network link) in general topology networks. We apply this framework to the problem of online routing of virtual circuits and to a dynamic data management problem. For both scenarios we achieve a competitive ratio of O(log sup 3 n) with respect to the congestion of the network links. 
Our online algorithm for the routing problem has the remarkable property that it is oblivious, i.e., the path chosen for a virtual circuit is independent of the current network load. Oblivious routing strategies can easily be implemented in distributed environments and have therefore been intensively studied for certain network topologies as e.g. meshes, tori and hypercubic networks. This is the first oblivious path selection algorithm that achieves a polylogarithmic competitive ratio in general networks." ] }
1302.7028
2949525853
We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. Another extreme is the fixed-demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest-path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities into the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose that it is possible to define a routing indicator that accounts for the strengths of the marginals and peak demands and use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of different capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands.
The present work focuses not on congestion but on total link capacity cost. This falls in the general space of @math problems, which includes the Steiner tree and VPN problems as special cases. Exact or constant-factor-approximation polynomial-time algorithms have also been designed for several other important traffic models, e.g. the so-called symmetric and asymmetric hose models @cite_22 @cite_12 . The latter, however, is @math -hard to approximate within polylogarithmic factors in undirected graphs for general polytopes @cite_12 .
{ "cite_N": [ "@cite_22", "@cite_12" ], "mid": [ "2096881311", "2132064078" ], "abstract": [ "We present constant-factor approximation algorithms for several widely-studied NP-hard optimization problems in network design, including the multicommodity rent-or-buy, virtual private network design, and single-sink buy-at-bulk problems. Our algorithms are simple and their approximation ratios improve over those previously known, in some cases by orders of magnitude. We develop a general analysis framework to bound the approximation ratios of our algorithms. This framework is based on a novel connection between random sampling and game-theoretic cost sharing.", "We consider robust (undirected) network design (RND) problems where the set of feasible demands may be given by an arbitrary convex body. This model, introduced by Ben-Ameur and Kerivin [Ben-Ameur W, Kerivin H (2003) New economical virtual private networks. Comm. ACM 46(6):69–73], generalizes the well-studied virtual private network (VPN) problem. Most research in this area has focused on constant factor approximations for specific polytope of demands, such as the class of hose matrices used in the definition of VPN. As pointed out in Chekuri [Chekuri C (2007) Routing and network design with robustness to changing or uncertain traffic demands. SIGACT News 38(3):106–128], however, the general problem was only known to be APX-hard (based on a reduction from the Steiner tree problem). We show that the general robust design is hard to approximate to within polylogarithmic factors. We establish this by showing a general reduction of buy-at-bulk network design to the robust network design problem. Gupta pointed..." ] }
1302.7028
2949525853
We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. Another extreme is the fixed-demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest-path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities into the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose that it is possible to define a routing indicator that accounts for the strengths of the marginals and peak demands and use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of different capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands.
Our work was partly motivated by @cite_1 , where Selective Randomized Load Balancing ( @math ) is used to design minimum-cost networks. The authors showed that networks whose design is based on oblivious routing techniques (and specifically @math ) can be ideal for capturing cost savings in IP networks: optimal designs reserve capacity on long paths, where one may employ high-capacity optical circuits, and partially avoid expensive IP equipment costs at internal nodes (their empirical study incorporated IP router, optical switching, and fiber costs). In this paper, instead of a design based on hub routing to a small number of hubs, we consider the more general @math , as discussed in @cite_12 .
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "2076704288", "2132064078" ], "abstract": [ "We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol multiprotocol label switching (IP MPLS) networks using VPN-tree routing.", "We consider robust (undirected) network design (RND) problems where the set of feasible demands may be given by an arbitrary convex body. This model, introduced by Ben-Ameur and Kerivin [Ben-Ameur W, Kerivin H (2003) New economical virtual private networks. Comm. ACM 46(6):69–73], generalizes the well-studied virtual private network (VPN) problem. Most research in this area has focused on constant factor approximations for specific polytope of demands, such as the class of hose matrices used in the definition of VPN. 
As pointed out in Chekuri [Chekuri C (2007) Routing and network design with robustness to changing or uncertain traffic demands. SIGACT News 38(3):106–128], however, the general problem was only known to be APX-hard (based on a reduction from the Steiner tree problem). We show that the general robust design is hard to approximate to within polylogarithmic factors. We establish this by showing a general reduction of buy-at-bulk network design to the robust network design problem. Gupta pointed..." ] }
1302.7028
2949525853
We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. Another extreme is the fixed demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities into the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose that it is possible to define a routing indicator that accounts for the strengths of the marginals and peak demands and use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of different capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands.
Our work also extends the long stream of work on designing @math s @cite_11 @cite_21 @cite_7 @cite_15 @cite_6 . Previous work has focused on provisioning a @math based on either the hose or the fixed-demand model (cf. @cite_2 ). Designing for more general traffic models in this context has not received much attention. We take an intermediate step by examining the capped hose model, which, to the best of our knowledge, has not been studied before.
{ "cite_N": [ "@cite_7", "@cite_21", "@cite_6", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "2107526127", "2094410510", "2076701885", "2111308047", "2064289893", "" ], "abstract": [ "We consider the following network design problem. We are given an undirected graph G=(V,E) with edges costs c(e) and a set of terminal nodes W. A hose demand matrix for W is any symmetric matrix [Dij] such that for each i, ∑ j ≠ i Dij ≤ 1. We must compute the minimum cost edge capacities that are able to support the oblivious routing of every hose matrix in the network. An oblivious routing template, in this context, is a simple path Pij for each pair i,j ∈ W. Given such a template, if we are to route a demand matrix D, then for each i,j we send Dij units of flow along each Pij. and obtained a 2-approximation for this problem, using a solution template in the form of a tree. It has been widely asked and subsequently conjectured [Italiano 2006] that this solution actually results in the optimal capacity for the single path VPN design problem; this has become known as the VPN conjecture. The conjecture has previously been proven for some restricted classes of graphs [Hurkens 2005, Grandoni 2007, Fiorini 2007]. Our main theorem establishes that this conjecture is true in general graphs. This also gives the first polynomial time algorithm for the single path VPN problem. We also show that the multipath version of the conjecture is false.", "Consider a setting in which a group of nodes, situated in a large underlying network, wishes to reserve bandwidth on which to support communication. Virtual private networks (VPNs) are services that support such a construct; rather than building a new physical network on the group of nodes that must be connected, bandwidth in the underlying network is reserved for communication within the group, forming a virtual “sub-network.” Provisioning a virtual private network over a set off terminals gives rise to the following general network design problem. 
We have bounds on the cumulative amount of traffic each terminal can send and receive; we must choose a path for each pair of terminals, and a bandwidth allocation for each edge of the network, so that any traffic matrix consistent with the given upper bounds can be feasibly routed. Thus, we are seeking to design a network that can support a continuum of possible traffic scenarios. We provide optimal and approximate algorithms for several variants of this problem, depending on whether the traffic matrix is required to be symmetric, and on whether the designed network is required to be a tree (a natural constraint in a number of basic applications). We also establish a relation between this collection of network design problems and a variant of the facility location problem introduced by Karger and Minkoff; we extend their results by providing a stronger approximation algorithm for this latter problem.", "Virtual private networks (VPNs) provide a secure and reliable communication between customer sites over a shared network. With increase in number and size of VPNs, service providers need efficient provisioning techniques that adapt to customer demands. The recently proposed hose model for VPN alleviates the scalability problem of the pipe model by reserving for aggregate ingress and egress bandwidths instead of between every pair of VPN endpoints. Existing studies on quality of service guarantees in the hose model either deal only with bandwidth requirements or regard the delay limit as the main objective ignoring the bandwidth cost. In this work we propose a new approach to enhance the hose model to guarantee delay limits between endpoints while optimizing the provisioning cost. We connect VPN endpoints using a tree structure and our algorithm attempts to optimize the total bandwidth reserved on edges of the VPN tree. 
Further, we introduce a fast and efficient algorithm in finding the shared VPN tree to reduce the total provisioning cost compared to the results proposed in previous works. Our proposed approach takes into account the user preferences in meeting the delay limits and provisioning cost to find the optimal solution of resource allocation problem. Our simulation results indicate that the VPN trees constructed by our proposed algorithm meet maximum end-to-end delay limits while reducing the bandwidth requirements as compared to previously proposed algorithms.", "Virtual Private Networks (VPNs) provide customers with predictable and secure network connections over a shared network. The recently proposed hose model for VPNs allows for greater flexibility since it permits traffic to and from a hose endpoint to be arbitrarily distributed to other endpoints. In this paper, we develop novel algorithms for provisioning VPNs in the hose model. We connect VPN endpoints using a tree structure and our algorithms attempt to optimize the total bandwidth reserved on edges of the VPN tree. We show that even for the simple scenario in which network links are assumed to have infinite capacity, the general problem of computing the optimal VPN tree is NP hard. Fortunately, for the special case when the ingress and egress bandwidths for each VPN endpoint are equal, we can devise an algorithm for computing the optimal tree whose time complexity is O (mn), where m and n are the number of links and nodes in the network, respectively. We present a novel integer programming formulation for the general VPN tree computation problem (that is, when ingress and egress bandwidths of VPN endpoints are arbitrary) and develop an algorithm that is based on the primal-dual method. 
Our experimental results with synthetic network graphs indicate that the VPN trees constructed by our proposed algorithms dramatically reduce bandwidth requirements (in many instances, by more than a factor of 2) compared to scenarios in which Steiner trees are employed to connect VPN endpoints.", "Only recently, Hurkens, Keijsper, and Stougie proved the VPN Tree Routing Conjecture for the special case of ring networks. We present a short proof of a slightly stronger result which might also turn out to be useful for proving the VPN Tree Routing Conjecture for general networks.", "" ] }
1302.7180
1617910803
In this paper, we propose a method to apply the popular cascade classifier into face recognition to improve the computational efficiency while keeping high recognition rate. In large scale face recognition systems, because the probability of feature templates coming from different subjects is very high, most of the matching pairs will be rejected by the early stages of the cascade. Therefore, the cascade can improve the matching speed significantly. On the other hand, using the nested structure of the cascade, we could drop some stages at the end of feature to reduce the memory and bandwidth usage in some resources intensive system while not sacrificing the performance too much. The cascade is learned by two steps. Firstly, some kind of prepared features are grouped into several nested stages. And then, the threshold of each stage is learned to achieve user defined verification rate (VR). In the paper, we take a landmark based Gabor+LDA face recognition system as baseline to illustrate the process and advantages of the proposed method. However, the use of this method is very generic and not limited in face recognition, which can be easily generalized to other biometrics as a post-processing module. Experiments on the FERET database show the good performance of our baseline and an experiment on a self-collected large scale database illustrates that the cascade can improve the matching speed significantly.
In @cite_16 , Wu et al. proposed a multi-reference re-ranking approach for large scale face recognition. The main idea originates from query expansion techniques in text information retrieval. Firstly, many local features of face components are used to obtain a small candidate set from the large gallery. Then a binary global feature is used to re-rank the candidate set and produce the final result. Experiments show that the performance of their algorithm is comparable with that of a linear-scan system using the state-of-the-art face feature. On a database containing one million face images, the speed-up ratio is about 8x compared to the linear-scan system. @cite_24 also used a similar approach to improve the speed of face recognition by sifting the gallery according to rank. Our method is closely related to these two methods: the idea is to use partial or simple features to reject irrelevant samples as quickly as possible.
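The filter-then-re-rank idea shared by these two methods can be sketched as follows. This is a minimal illustration, not the cited implementations; the function name, the Euclidean/Hamming distance choices, and the shortlist size of 10 are all hypothetical:

```python
import numpy as np

def two_stage_search(probe_local, probe_global, gallery_local, gallery_global,
                     candidate_size=10):
    """Filter-then-re-rank retrieval: a cheap local feature shortlists
    candidates, then a binary global feature re-ranks them."""
    # Stage 1: shortlist by Euclidean distance on the cheap local feature.
    d_local = np.linalg.norm(gallery_local - probe_local, axis=1)
    candidates = np.argsort(d_local, kind="stable")[:candidate_size]
    # Stage 2: re-rank the shortlist by Hamming distance on the binary
    # global feature (fewer differing bits = better match).
    d_hamming = (gallery_global[candidates] != probe_global).sum(axis=1)
    return candidates[np.argsort(d_hamming, kind="stable")]
```

Only the shortlist is ever scored with the expensive global feature, which is where the speed-up over a full linear scan comes from.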
{ "cite_N": [ "@cite_24", "@cite_16" ], "mid": [ "177021162", "2088009615" ], "abstract": [ "Motivated by image perturbation and the geometry of manifolds, we present a novel method combining these two elements. First, we form a tangent space from a set of perturbed images and observe that the tangent space admits a vector space structure. Second, we embed the approximated tangent spaces on a Grassmann manifold and employ a chordal distance as the means for comparing subspaces. The matching process is accelerated using a coarse to fine strategy. Experiments on the FERET database suggest that the proposed method yields excellent results using both holistic and local features. Specifically, on the FERET Dup2 data set, our proposed method achieves 83.8 rank 1 recognition: to our knowledge the currently the best result among all non-trained methods. Evidence is also presented that peak recognition performance is achieved using roughly 100 distinct perturbed images.", "State-of-the-art image retrieval systems achieve scalability by using a bag-of-words representation and textual retrieval methods, but their performance degrades quickly in the face image domain, mainly because they produce visual words with low discriminative power for face images and ignore the special properties of faces. The leading features for face recognition can achieve good retrieval performance, but these features are not suitable for inverted indexing as they are high-dimensional and global and thus not scalable in either computational or storage cost. In this paper, we aim to build a scalable face image retrieval system. For this purpose, we develop a new scalable face representation using both local and global features. In the indexing stage, we exploit special properties of faces to design new component-based local features, which are subsequently quantized into visual words using a novel identity-based quantization scheme. 
We also use a very small Hamming signature (40 bytes) to encode the discriminative global feature for each face. In the retrieval stage, candidate images are first retrieved from the inverted index of visual words. We then use a new multireference distance to rerank the candidate images using the Hamming signature. On a one millon face database, we show that our local features and global Hamming signatures are complementary-the inverted index based on local features provides candidate images with good recall, while the multireference reranking with global Hamming signature leads to good precision. As a result, our system is not only scalable but also outperforms the linear scan retrieval system using the state-of-the-art face recognition feature in term of the quality." ] }
1302.7180
1617910803
In this paper, we propose a method to apply the popular cascade classifier into face recognition to improve the computational efficiency while keeping high recognition rate. In large scale face recognition systems, because the probability of feature templates coming from different subjects is very high, most of the matching pairs will be rejected by the early stages of the cascade. Therefore, the cascade can improve the matching speed significantly. On the other hand, using the nested structure of the cascade, we could drop some stages at the end of feature to reduce the memory and bandwidth usage in some resources intensive system while not sacrificing the performance too much. The cascade is learned by two steps. Firstly, some kind of prepared features are grouped into several nested stages. And then, the threshold of each stage is learned to achieve user defined verification rate (VR). In the paper, we take a landmark based Gabor+LDA face recognition system as baseline to illustrate the process and advantages of the proposed method. However, the use of this method is very generic and not limited in face recognition, which can be easily generalized to other biometrics as a post-processing module. Experiments on the FERET database show the good performance of our baseline and an experiment on a self-collected large scale database illustrates that the cascade can improve the matching speed significantly.
Inspired by the ideas of popular hashing based methods @cite_15 @cite_29 , Yan et al. @cite_28 proposed a Similarity Hashing (SH) scheme for large scale face recognition and obtained good performance on a database containing 100,000 face images. Although SH achieved a 30x speed-up ratio in the experiments, it is very memory-consuming and needs tens of extra gigabytes to store the hash index for every sample in the gallery. Because our method exploits the asymmetric structure of the data in large scale face recognition, compared to the above methods our cascade structure not only obtains a high speed-up ratio but also has the lowest performance loss.
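For reference, the p-stable LSH scheme of @cite_15 that inspired these hashing methods reduces, in its simplest Euclidean form, to the hash h(x) = floor((a·x + b) / w) with a Gaussian projection vector a and a uniform offset b. A minimal sketch (class name and parameter values are illustrative only):

```python
import numpy as np

class L2LSH:
    """Minimal p-stable LSH for Euclidean distance: h(x) = floor((a.x + b) / w).
    Nearby points collide in these hash tuples with higher probability than
    distant ones, so buckets can serve as a sub-linear candidate filter."""
    def __init__(self, dim, n_hashes=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(n_hashes, dim))    # p-stable (Gaussian) projections
        self.b = rng.uniform(0.0, w, size=n_hashes)  # random offsets in [0, w)
        self.w = w                                   # bucket width

    def hash(self, x):
        return tuple(np.floor((self.a @ np.asarray(x) + self.b) / self.w).astype(int))
```

Storing one such tuple per gallery sample, per hash table, is exactly the kind of index whose memory footprint grows with the gallery size.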
{ "cite_N": [ "@cite_28", "@cite_15", "@cite_29" ], "mid": [ "1983622948", "2162006472", "" ], "abstract": [ "Linear discriminant analysis with nearest neighborhood classifier (LDA + NN) has been commonly used in face recognition, but it often confronts with two problems in real applications: (1) it cannot incrementally deal with the information of training instances; (2) it cannot achieve fast search against large scale gallery set. In this paper, we use incremental LDA (ILDA) and hashing based search method to deal with these two problems. Firstly two incremental LDA algorithms are proposed under spectral regression framework, namely exact incremental spectral regression discriminant analysis (EI-SRDA) and approximate incremental spectral regression discriminant analysis (AI-SRDA). Secondly we propose a similarity hashing algorithm of sub-linear complexity to achieve quick recognition from large gallery set. Experiments on FRGC and self-collected 100,000 faces database show the effective of our methods.", "We present a novel Locality-Sensitive Hashing scheme for the Approximate Nearest Neighbor Problem under lp norm, based on p-stable distributions.Our scheme improves the running time of the earlier algorithm for the case of the lp norm. It also yields the first known provably efficient approximate NN algorithm for the case p<1. We also show that the algorithm finds the exact near neigbhor in O(log n) time for data satisfying certain \"bounded growth\" condition.Unlike earlier schemes, our LSH scheme works directly on points in the Euclidean space without embeddings. Consequently, the resulting query time bound is free of large factors and is simple and easy to implement. Our experiments (on synthetic data sets) show that the our data structure is up to 40 times faster than kd-tree.", "" ] }
1302.7180
1617910803
In this paper, we propose a method to apply the popular cascade classifier into face recognition to improve the computational efficiency while keeping high recognition rate. In large scale face recognition systems, because the probability of feature templates coming from different subjects is very high, most of the matching pairs will be rejected by the early stages of the cascade. Therefore, the cascade can improve the matching speed significantly. On the other hand, using the nested structure of the cascade, we could drop some stages at the end of feature to reduce the memory and bandwidth usage in some resources intensive system while not sacrificing the performance too much. The cascade is learned by two steps. Firstly, some kind of prepared features are grouped into several nested stages. And then, the threshold of each stage is learned to achieve user defined verification rate (VR). In the paper, we take a landmark based Gabor+LDA face recognition system as baseline to illustrate the process and advantages of the proposed method. However, the use of this method is very generic and not limited in face recognition, which can be easily generalized to other biometrics as a post-processing module. Experiments on the FERET database show the good performance of our baseline and an experiment on a self-collected large scale database illustrates that the cascade can improve the matching speed significantly.
A cascade is an extremely unbalanced tree designed to deal with asymmetric two-class problems. The most successful application of the cascade is face detection @cite_22 , in which it was used to reject non-face samples at each node (or stage) while letting nearly all face samples pass. In @cite_25 the cascade was used for face recognition, but its advantage was not noticed or analyzed in the context of large scale face recognition. In this paper, we treat face matching as a two-class problem; large scale face recognition is then exactly an asymmetric classification problem, since, given a probe, the majority of samples in the gallery have a different identity from the probe. We therefore carry the cascade structure over from face detection to large scale face recognition, and we will see that it also works well here. The process of cascade learning in face recognition is simpler than in face detection, because we only need to learn the threshold of each stage, not the strong classifier.
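The two-step learning described above (features are pre-grouped into stages; only a per-stage threshold is then learned to hit a user-defined verification rate) can be sketched as follows. The function names and score layout are hypothetical, not the paper's implementation:

```python
import numpy as np

def learn_thresholds(genuine_stage_scores, target_vr=0.99):
    """Learn a per-stage threshold as the quantile of genuine (same-identity)
    partial match scores, so that a fraction target_vr of genuine pairs
    survives each stage. Only thresholds are learned, not the features."""
    # genuine_stage_scores: array of shape (n_pairs, n_stages).
    return np.quantile(genuine_stage_scores, 1.0 - target_vr, axis=0)

def cascade_match(stage_scores, thresholds):
    """Evaluate stages in order; reject at the first stage whose partial
    score falls below its threshold. Returns (accepted, stages_evaluated)."""
    for i, (score, thr) in enumerate(zip(stage_scores, thresholds), start=1):
        if score < thr:
            return False, i   # early rejection: most impostor pairs stop here
    return True, len(thresholds)
```

Since most gallery samples are impostors for any given probe, the average number of stages evaluated per match stays small, which is the source of the speed-up.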
{ "cite_N": [ "@cite_25", "@cite_22" ], "mid": [ "600097623", "2137401668" ], "abstract": [ "Face recognition localization and recognition of facial features and actions facial expression dtection and interpretation applications of face and hand gesture recognition video compression of facial and body movements. (part contents)", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second." ] }
1302.6937
173606385
The framework of online learning with memory naturally captures learning problems with temporal constraints, and was previously studied for the experts setting. In this work we extend the notion of learning with memory to the general Online Convex Optimization (OCO) framework, and present two algorithms that attain low regret. The first algorithm applies to Lipschitz continuous loss functions, obtaining optimal regret bounds for both convex and strongly convex losses. The second algorithm attains the optimal regret bounds and applies more broadly to convex losses without requiring Lipschitz continuity, yet is more complicated to implement. We complement our theoretic results with an application to statistical arbitrage in finance: we devise algorithms for constructing mean-reverting portfolios.
Statistical arbitrage, and in particular pairs trading strategies, first appeared in the mid-1980s @cite_5 . Since then, a great deal of work has been done on the problem of assembling mean-reverting portfolios, mostly using cointegration techniques (see @cite_3 for more comprehensive information). In order to quantify the amount of mean reversion in various portfolios, different proxies are often suggested, such as zero-crossings and predictability @cite_7 @cite_10 . In this work, we consider a new proxy for mean reversion aimed at maximizing fluctuation while keeping the mean close to zero. Furthermore, whereas classical cointegration techniques require a training period before a trading strategy can be applied (see for instance @cite_11 @cite_1 ), the online approach does not, and in addition it provides a performance guarantee against the best mean-reverting portfolio in hindsight.
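As a concrete illustration of the zero-crossing proxy mentioned above (a simplified sketch of the general idea, not the cited authors' exact definition):

```python
import numpy as np

def zero_crossing_rate(prices, weights):
    """Zero-crossing proxy for mean reversion: form the demeaned portfolio
    series x_t = w . p_t and measure the fraction of steps at which the
    series changes sign; more crossings suggest stronger mean reversion."""
    x = prices @ np.asarray(weights)
    x = x - x.mean()
    signs = np.sign(x)
    signs = signs[signs != 0]          # ignore exact zeros
    return np.mean(signs[1:] != signs[:-1])
```

A strongly mean-reverting portfolio repeatedly crosses its mean (rate near 1), while a trending one crosses it rarely.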
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_3", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "639519915", "2092306038", "630559790", "2180060173", "", "2163285066" ], "abstract": [ "An informative guide to market microstructure and trading strategiesOver the last decade, the financial landscape has undergone a significant transformation, shaped by the forces of technology, globalization, and market innovations to name a few. In order to operate effectively in today's markets, you need more than just the motivation to succeed, you need a firm understanding of how modern financial markets work and what professional trading is really about. Dr. Anatoly Schmidt, who has worked in the financial industry since 1997, and teaches in the Financial Engineering program of Stevens Institute of Technology, puts these topics in perspective with his new book.Divided into three comprehensive parts, this reliable resource offers a balance between the theoretical aspects of market microstructure and trading strategies that may be more relevant for practitioners. Along the way, it skillfully provides an informative overview of modern financial markets as well as an engaging assessment of the methods used in deriving and back-testing trading strategies. Details the modern financial markets for equities, foreign exchange, and fixed income Addresses the basics of market dynamics, including statistical distributions and volatility of returns Offers a summary of approaches used in technical analysis and statistical arbitrage as well as a more detailed description of trading performance criteria and back-testing strategies Includes two appendices that support the main material in the bookIf you're unprepared to enter today's markets you will underperform. 
But with Financial Markets and Trading as your guide, you'll quickly discover what it takes to make it in this competitive field.", "This paper analytically solves the portfolio optimization problem of an investor faced with a risky arbitrage opportunity (e.g. relative mispricing in equity pairs). Unlike the extant literature, which typically models mispricings through the Ornstein--Uhlenbeck (OU) process, we introduce a nonlinear generalization of OU which jointly captures several important risk factors inherent in arbitrage trading. While these factors are absent from the standard OU, we show that considering them yields several new insights into the behavior of rational arbitrageurs: Firstly, arbitrageurs recognizing these risk factors exhibit a diminishing propensity to exploit large mispricings. Secondly, optimal investment behavior in light of these risk factors precipitates the gradual unwinding of losing trades far sooner than is entailed in existing approaches including OU. Finally, an empirical application to daily FTSE100 pairs data shows that incorporating these risks renders our model's risk-management capabilities superior to both OU and a simple threshold strategy popular in the literature. These observations are useful in understanding the role of arbitrageurs in enforcing price efficiency.", "Preface. Acknowledgments. PART ONE: BACKGROUND MATERIAL. Chapter 1. Introduction. The CAPM Model. Market Neutral Strategy. Pairs Trading. Outline. Audience. Chapter 2. Time Series. Overview. Autocorrelation. Time Series Models. Forecasting. Goodness of Fit versus Bias. Model Choice. Modeling Stock Prices. Chapter 3. Factor Models. Introduction. Arbitrage Pricing Theory. The Covariance Matrix. Application: Calculating the Risk on a Portfolio. Application: Calculation of Portfolio Beta. Application: Tracking Basket Design. Sensitivity Analysis. Chapter 4. Kalman Filtering. Introduction. The Kalman Filter. The Scalar Kalman Filter. Filtering the Random Walk. 
Application: Example with the Standard & Poor Index. PART TWO: STATISTICAL ARBITRAGE. Chapter 5. Overview. History. Motivation. Cointegration. Applying the Model. A Trading Strategy. Road Map for Strategy Design. Chapter 6. Pairs Selection in Equity Markets. Introduction. Common Trends Cointegration Model. Common Trends Model and APT. The Distance Measure. Interpreting the Distance Measure. Reconciling Theory and Practice. Chapter 7. Testing for Tradability. Introduction. The Linear Relationship. Estimating the Linear Relationship: The Multifactor Approach. Estimating the Linear Relationship: The Regression Approach. Testing Residual for Tradability. Chapter 8. Trading Design. Introduction. Band Design for White Noise. Spread Dynamics. Nonparametric Approach. Regularization. Tying Up Loose Ends. PART THREE: RISK ARBITRAGE PAIRS. Chapter 9. Risk Arbitrage Mechanics. Introduction. History. The Deal Process. Transaction Terms. The Deal Spread. Trading Strategy. Quantitative Aspects. Chapter 10. Trade Execution. Introduction. Specifying the Order. Verifying the Execution. Execution During the Pricing Period. Short Selling. Chapter 11. The Market Implied Merger Probability. Introduction. Implied Probabilities and Arrow-Debreu Theory. The Single-Step Model. The Multistep Model. Reconciling Theory and Practice. Risk Management. Chapter 12. Spread Inversion. Introduction. The Prediction Equation. The Observation Equation. Applying the Kalman Filter. Model Selection. Applications to Trading. Index.", "The relationship between co-integration and error correction models, first suggested in Granger (1981), is here extended and used to develop estimation procedures, tests, and empirical examples.If each element of a vector of time series xt first achieves stationarity after differencing, but a linear combination α'xt, is already stationary, the time series xt are said to be co-integrated with co-integrating vector α. 
There may be several such co-integrating vectors so that α becomes a matrix. Interpreting α'xt, = 0 as a long run equilibrium, co-integration implies that deviations from equilibrium are stationary, with finite variance, even though the series themselves are nonstationary and have infinite variance.The paper presents a representation theorem based on Granger (1983), which connects the moving average, autoregressive, and error correction representations for co-integrated systems. A vector autoregression in differenced variables is incompatible with these representations. Estimation of these models is discussed and a simple but asymptotically efficient two-step estimator is proposed. Testing for co-integration combines the problems of unit root tests and tests with parameters unidentified under the null. Seven statistics are formulated and analyzed. The critical values of these statistics are calculated based on a Monte Carlo simulation. Using these critical values, the power properties of the tests are examined and one test procedure is recommended for application.In a series of examples it is found that consumption and income are co-integrated, wages and prices are not, short and long interest rates are, and nominal GNP is co-integrated with M2, but not M1, M3, or aggregate liquid assets.", "", "We study model-driven statistical arbitrage in US equities. Trading signals are generated in two ways: using Principal Component Analysis (PCA) or regressing stock returns on sector Exchange Traded Funds (ETFs). In both cases, the idiosyncratic returns are modelled as mean-reverting processes, which leads naturally to 'contrarian' strategies. We construct, back-test and compare market-neutral PCA- and ETF-based strategies applied to the broad universe of US equities. After accounting for transaction costs, PCA-based strategies have an average annual Sharpe ratio of 1.44 over the period 1997 to 2007, with stronger performances prior to 2003. 
During 2003-2007, the average Sharpe ratio of PCA-based strategies was only 0.9. ETF-based strategies had a Sharpe ratio of 1.1 from 1997 to 2007, experiencing a similar degradation since 2002. We also propose signals that account for trading volume, observing significant improvement in performance in the case of ETF-based signals. ETF-strategies with volume information achieved a Sharpe ratio of 1.51 from 2003 to 2007. The paper also relates the performance of mean-reversion statistical arbitrage strategies with the stock market cycle. In particular, we study in detail the performance of the strategies during the liquidity crisis of the summer of 2007, following Khandani and Lo [Social Science Research Network (SSRN) working paper, 2007]." ] }
1302.5936
2952590379
A compressed sensing method consists of a rectangular measurement matrix, @math with @math , together with an associated recovery algorithm, @math . Compressed sensing methods aim to construct a high quality approximation to any given input vector @math using only @math as input. In particular, we focus herein on instance optimal nonlinear approximation error bounds for @math and @math of the form @math for @math , where @math is the best possible @math -term approximation to @math . In this paper we develop a compressed sensing method whose associated recovery algorithm, @math , runs in @math -time, matching a lower bound up to a @math factor. This runtime is obtained by using a new class of sparse binary compressed sensing matrices of near optimal size in combination with sublinear-time recovery techniques motivated by sketching algorithms for high-volume data streams. The new class of matrices is constructed by randomly subsampling rows from well-chosen incoherent matrix constructions which already have a sub-linear number of rows. As a consequence, fewer random bits than previously required are needed in order to select the rows utilized by the fast reconstruction algorithms considered herein.
Previous work involving the development of compressed sensing methods having both sub-linear time reconstruction algorithms and the type of @math error guarantees considered herein began with @cite_17 . In @cite_17 , the authors built on streaming algorithm techniques with weaker error guarantees (e.g., see @cite_3 @cite_11 @cite_6 ) in order to develop @math -time recovery algorithms, @math , with associated @math error guarantees in the "for each" model. Similar techniques were later utilized by Gilbert et al. in @cite_22 to create sub-linear time algorithms with the same error guarantees, but whose associated measurement matrices, @math , have a near-optimal number of rows up to constant factors (i.e., @math ). Other related compressed sensing methods with fast runtimes and @math error guarantees in the "for all" model were also considered in @cite_10 . Unlike these previous methods, the compressed sensing methods developed herein utilize the combinatorial properties of a new class of sparse binary measurement matrices formed by randomly selecting sub-matrices from larger incoherent matrices.
{ "cite_N": [ "@cite_22", "@cite_17", "@cite_6", "@cite_3", "@cite_10", "@cite_11" ], "mid": [ "2020301027", "2973707709", "2080234606", "", "1974466705", "2167973519" ], "abstract": [ "A Euclidean approximate sparse recovery system consists of parameters k,N, an m-by-N measurement matrix, Φ, and a decoding algorithm, D. Given a vector, x, the system approximates x by ^x=D(Φ x), which must satisfy ||^x - x||2 ≤ C ||x - xk||2, where xk denotes the optimal k-term approximation to x. (The output ^x may have more than k terms). For each vector x, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm, D. In this paper, we give a system with m=O(k log(N/k)) measurements--matching a lower bound, up to a constant factor--and decoding time k log^O(1) N, matching a lower bound up to log(N) factors. We also consider the encode time (i.e., the time to multiply Φ by x), the time to update measurements (i.e., the time to multiply Φ by a 1-sparse x), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to log(k) factors. The columns of Φ have at most O(log^2(k) log(N/k)) non-zeros, each of which can be found in constant time. Our full result, an FPRAS, is as follows. If x=xk+ν1, where ν1 and ν2 (below) are arbitrary vectors (regarded as noise), then, setting ^x = D(Φ x + ν2), and for properly normalized ν, we get ||^x - x||2^2 ≤ (1+e)||ν1||2^2 + e||ν2||2^2, using O((k/e) log(N/k)) measurements and (k/e) log^O(1)(N) time for decoding.", "In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ ℝ^n from linear measurements 〈Aψi〉 with respect to a dictionary of ψi's. 
Recently, there has been focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few—O(k log n)—linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying Mathematics and because of its potential applications. In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/e, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + e and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case. Our approach is combinatorial. 
In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement.", "We introduce a new sublinear space data structure--the count-min sketch--for summarizing data streams. Our sketch allows fundamental queries in data stream summarization such as point, range, and inner product queries to be approximately answered very quickly; in addition, it can be applied to solve several important problems in data streams such as finding quantiles, frequent items, etc. The time and space bounds we show for using the CM sketch to solve these problems significantly improve those previously known--typically from 1/e^2 to 1/e in factor.", "", "Compressed Sensing is a new paradigm for acquiring the compressible signals that arise in many applications. These signals can be approximated using an amount of information much smaller than the nominal dimension of the signal. Traditional approaches acquire the entire signal and process it to extract the information. The new approach acquires a small number of nonadaptive linear measurements of the signal and uses sophisticated algorithms to determine its information content. Emerging technologies can compute these general linear measurements of a signal at unit cost per measurement. This paper exhibits a randomized measurement ensemble and a signal reconstruction algorithm that satisfy four requirements: 1. The measurement ensemble succeeds for all signals, with high probability over the random choices in its construction. 2. The number of measurements of the signal is optimal, except for a factor polylogarithmic in the signal length. 3. The running time of the algorithm is polynomial in the amount of information in the signal and polylogarithmic in the signal length. 4. The recovery algorithm offers the strongest possible type of error guarantee. Moreover, it is a fully polynomial approximation scheme with respect to this type of error bound. 
Emerging applications demand this level of performance. Yet no other algorithm in the literature simultaneously achieves all four of these desiderata.", "Most database management systems maintain statistics on the underlying relation. One of the important statistics is that of the \"hot items\" in the relation: those that appear many times (most frequently, or more than some threshold). For example, end-biased histograms keep the hot items as part of the histogram and are used in selectivity estimation. Hot items are used as simple outliers in data mining, and in anomaly detection in networking applications. We present a new algorithm for dynamically determining the hot items at any time in the relation that is undergoing deletion operations as well as inserts. Our algorithm maintains a small space data structure that monitors the transactions on the relation, and when required, quickly outputs all hot items, without rescanning the relation in the database. With user-specified probability, it is able to report all hot items. Our algorithm relies on the idea of \"group testing\", is simple to implement, and has provable quality, space and time guarantees. Previously known algorithms for this problem that make similar quality and performance guarantees can not handle deletions, and those that handle deletions can not make similar guarantees without rescanning the database. Our experiments with real and synthetic data show that our algorithm is remarkably accurate in dynamically tracking the hot items independent of the rate of insertions and deletions." ] }
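The sketching data structures cited in this record's related-work paragraph (notably the count-min sketch abstract above) are compact enough to illustrate directly. The following is a minimal, hypothetical Python version for intuition only; the class name, parameter choices, and salted-hash scheme are illustrative and not taken from the cited implementations:

```python
import random

class CountMinSketch:
    # Minimal count-min sketch: d rows of w counters, one hash per row.
    # Point queries return an overestimate of an item's true count;
    # roughly, w ~ e/eps and d ~ ln(1/delta) give eps*N error with
    # probability 1 - delta (N = total count inserted).
    def __init__(self, w, d, seed=0):
        rng = random.Random(seed)
        self.w, self.d = w, d
        self.table = [[0] * w for _ in range(d)]
        # per-row salts so each row behaves like an independent hash
        self.salts = [rng.getrandbits(64) for _ in range(d)]

    def _index(self, row, item):
        return hash((self.salts[row], item)) % self.w

    def update(self, item, count=1):
        for r in range(self.d):
            self.table[r][self._index(r, item)] += count

    def query(self, item):
        # minimum over rows: each counter only ever over-counts
        return min(self.table[r][self._index(r, item)] for r in range(self.d))
```

Taking the minimum over rows is what keeps the estimate one-sided: collisions can only inflate a counter, never deflate it.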
1302.5936
2952590379
A compressed sensing method consists of a rectangular measurement matrix, @math with @math , together with an associated recovery algorithm, @math . Compressed sensing methods aim to construct a high quality approximation to any given input vector @math using only @math as input. In particular, we focus herein on instance optimal nonlinear approximation error bounds for @math and @math of the form @math for @math , where @math is the best possible @math -term approximation to @math . In this paper we develop a compressed sensing method whose associated recovery algorithm, @math , runs in @math -time, matching a lower bound up to a @math factor. This runtime is obtained by using a new class of sparse binary compressed sensing matrices of near optimal size in combination with sublinear-time recovery techniques motivated by sketching algorithms for high-volume data streams. The new class of matrices is constructed by randomly subsampling rows from well-chosen incoherent matrix constructions which already have a sub-linear number of rows. As a consequence, fewer random bits than previously required are needed in order to select the rows utilized by the fast reconstruction algorithms considered herein.
Perhaps the measurement matrices considered herein are most similar to previous compressed sensing matrices based on unbalanced expander graphs (see, e.g., @cite_20 @cite_23 @cite_9 ). Indeed, the measurement matrices used in this paper are created by randomly sampling rows from larger binary matrices that are, in fact, the adjacency matrices of a subclass of unbalanced expander graphs. However, unlike previous approaches which use the properties of general unbalanced expanders, we use different combinatorial techniques which allow us to develop @math -time recovery algorithms. To the best of our knowledge, the runtimes we obtain by doing so are the best known for any such method having @math error guarantees.
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_20" ], "mid": [ "2140466267", "2130979540", "" ], "abstract": [ "Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper, we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally, we will show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.", "We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional sketch Ax ∈ R^m. 
Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an approximation x̂ of x such that the L1 approximation error ||x - x̂||1 is close to min_x' ||x - x'||1, where x' ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different trade-offs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes (see Figure 1). In particular, this is the first recovery scheme that guarantees O(k log(n/k)) sketch length, and near-linear O(n log(n/k)) recovery time simultaneously. It also features low encoding and update times, and is noise-resilient.", "" ] }
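As a concrete toy instance of the nonadaptive binary measurements this record discusses, a 1-sparse nonnegative vector can be recovered from O(log n) "bit-test" rows plus one all-ones row, the simplest case of the group-testing idea behind these schemes. This construction is purely illustrative and far simpler than the expander-based matrices cited above:

```python
import numpy as np

# Hypothetical toy: recover a 1-sparse nonnegative vector from
# log2(n) binary bit-test measurements plus the total sum.
n = 16
x = np.zeros(n)
x[11] = 3.5                      # unknown 1-sparse signal
bits = int(np.log2(n))

# Row b tests all coordinates whose index has bit b set.
Phi = np.array([[(j >> b) & 1 for j in range(n)] for b in range(bits)],
               dtype=float)
y = Phi @ x
total = x.sum()                  # one extra all-ones measurement

# Measurement b equals `total` iff bit b of the support index is set.
idx = sum((1 << b) for b in range(bits) if y[b] > total / 2)
print(idx, y.max())              # 11 3.5
```

With noise or k > 1 this naive decoder breaks, which is precisely why the cited works need randomized/expander constructions and more careful estimation.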
1302.5990
2062319484
Computing the viability kernel is key in providing guarantees of safety and proving existence of safety-preserving controllers for constrained dynamical systems. Current numerical techniques that approximate this construct suffer from a complexity that is exponential in the dimension of the state. We study conditions under which a linear time-invariant (LTI) system can be suitably decomposed into lower dimensional subsystems so as to admit a conservative computation of the viability kernel in a decentralized fashion in subspaces. We then present an isomorphism that imposes these desired conditions, most suitably on two-time-scale systems. Decentralized computations are performed in the transformed coordinates, yielding a conservative approximation of the viability kernel in the original state space. Significant reduction of complexity can be achieved, allowing the previously inapplicable tools to be employed for treatment of higher dimensional systems. We show the results on two examples including a 6-D system.
Complexity reduction for viability and minimal reachability has been addressed by many researchers. A projection scheme in @cite_5 based on Hamilton-Jacobi (HJ) partial differential equations (PDEs) over-approximates the projection of the true minimal reachable tube in lower dimensional subspaces, with the unmodeled dimensions treated as a disturbance. Similarly, @cite_38 decomposes a full-order nonlinear system into either disjoint or overlapping subsystems and solves multiple HJ PDEs in lower dimensions. More recently, a mixed implicit-explicit HJ formulation is presented in @cite_25 for nonlinear systems whose state vector contains states that are integrators of other states. The complexity of this new formulation is linear in the number of integrator states, while still exponential in the dimension of the rest of the states. These techniques assume that the system itself presents a certain structure that can be exploited.
{ "cite_N": [ "@cite_5", "@cite_25", "@cite_38" ], "mid": [ "315739085", "2067202372", "1542232783" ], "abstract": [ "In earlier work, we showed that the set of states which can reach a target set of a continuous dynamic game is the zero sublevel set of the viscosity solution of a time dependent Hamilton-Jacobi-Isaacs (HJI) partial differential equation (PDE). We have developed a numerical tool—based on the level set methods of Osher and Sethian—for computing these sets, and we can accurately calculate them for a range of continuous and hybrid systems in which control inputs are pitted against disturbance inputs. The cost of our algorithm, like that of all convergent numerical schemes, increases exponentially with the dimension of the state space. In this paper, we devise and implement a method that projects the true reachable set of a high dimensional system into a collection of lower dimensional subspaces where computation is less expensive. We formulate a method to evolve the lower dimensional reachable sets such that they are each an overapproximation of the full reachable set, and thus their intersection will also be an overapproximation of the reachable set. The method uses a lower dimensional HJI PDE for each projection with a set of disturbance inputs augmented with the unmodeled dimensions of that projection's subspace. We illustrate our method on two examples in three dimensions using two dimensional projections, and we discuss issues related to the selection of appropriate projection subspaces.", "The solution of a particular Hamilton-Jacobi (HJ) partial differential equation (PDE) provides an implicit representation of reach sets and tubes for continuous systems with nonlinear dynamics and can treat inputs in either worst-case or best-case fashion; however, it can rarely be determined analytically and its numerical approximation typically requires computational resources that grow exponentially with the state space dimension. 
In this paper we describe a new formulation - also based on HJ PDEs - for reach sets and tubes of systems where some states are terminal integrators: states whose evolution can be written as an integration over time of the other states. The key contribution of this new mixed implicit explicit (MIE) scheme is that its computational cost is linear in the number of terminal integrators, although still exponential in the dimension of the rest of the state space. Application of the new scheme to four examples of varying dimension provides empirical evidence of its considerable improvement in computational speed.", "In this paper, we present a method to decompose the problem of computing the backward reachable set for a dynamic system in a space of a given dimension, into a set of computational problems involving level set functions, each defined in a lower dimensional (subsystem) space. This allows the potential for great reduction in computation time. The overall system is considered as an interconnection of either disjoint or overlapping subsystems. The projection of the backward reachable set into the subsystem spaces is over-approximated by a level set of the corresponding subsystem level set function. It is shown how this method can be applied to two-player differential games. Finally, results of the computation of polytopic over-approximations of the unsafe set for the two aircraft conflict resolution problem are presented." ] }
1302.5990
2062319484
Computing the viability kernel is key in providing guarantees of safety and proving existence of safety-preserving controllers for constrained dynamical systems. Current numerical techniques that approximate this construct suffer from a complexity that is exponential in the dimension of the state. We study conditions under which a linear time-invariant (LTI) system can be suitably decomposed into lower dimensional subsystems so as to admit a conservative computation of the viability kernel in a decentralized fashion in subspaces. We then present an isomorphism that imposes these desired conditions, most suitably on two-time-scale systems. Decentralized computations are performed in the transformed coordinates, yielding a conservative approximation of the viability kernel in the original state space. Significant reduction of complexity can be achieved, allowing the previously inapplicable tools to be employed for treatment of higher dimensional systems. We show the results on two examples including a 6-D system.
In @cite_22 , an approximate dynamic programming technique is presented that, although still grid-based, enables a more efficient computation of the viability kernel. The viability kernel (similarly to @cite_4 ) is expressed as the zero sublevel set of the value function of the corresponding optimal control problem. It is assumed that the value function, which is a viscosity solution of a HJB PDE, is differentiable everywhere on the constraint set. The PDE is then discretized and the resulting value function is numerically computed on a grid using a function approximator such as the @math -nearest neighbor algorithm. The error-bounded approximation is not conservative (it is an over-approximation) but converges to the true viability kernel in the limit as the number of grid points goes to infinity.
{ "cite_N": [ "@cite_4", "@cite_22" ], "mid": [ "2090501324", "2130715863" ], "abstract": [ "Questions of reachability for continuous and hybrid systems can be formulated as optimal control or game theory problems, whose solution can be characterized using variants of the Hamilton-Jacobi-Bellman or Isaacs partial differential equations. The formal link between the solution to the partial differential equation and the reachability problem is usually established in the framework of viscosity solutions. This paper establishes such a link between reachability, viability and invariance problems and viscosity solutions of a special form of the Hamilton-Jacobi equation. This equation is developed to address optimal control problems where the cost function is the minimum of a function of the state over a specified horizon. The main advantage of the proposed approach is that the properties of the value function (uniform continuity) and the form of the partial differential equation (standard Hamilton-Jacobi form, continuity of the Hamiltonian and simple boundary conditions) make the numerical solution of the problem much simpler than other approaches proposed in the literature. This fact is demonstrated by applying our approach to a reachability problem that arises in flight control and using numerical tools to compute the solution.", "Viability theory considers the problem of maintaining a system under a set of viability constraints. The main tool for solving viability problems lies in the construction of the viability kernel, defined as the set of initial states from which there exists a trajectory that remains in the set of constraints indefinitely. The theory is very elegant and appears naturally in many applications. Unfortunately, the current numerical approaches suffer from low computational efficiency, which limits the potential range of applications of this domain. 
In this paper we show that the viability kernel is the zero-level set of a related dynamic programming problem, which opens promising research directions for numerical approximation of the viability kernel using tools from approximate dynamic programming. We illustrate the approach using k-nearest neighbors on a toy problem in two dimensions and on a complex dynamical model for an anaerobic digestion process in four dimensions." ] }
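The grid-based, dynamic-programming flavor of these Eulerian approaches can be made concrete on a toy scalar system. Everything below is a made-up sketch (the system, constants, and nearest-neighbor interpolation are my choices, not from the cited works); note that, echoing the caveat in the record above, gridding makes the result an approximation rather than a guaranteed under-approximation:

```python
import numpy as np

# Toy Eulerian-style viability kernel iteration for the scalar
# discrete-time system x+ = a*x + u, |u| <= umax, constraint |x| <= 1.
# A grid point is pruned when no sampled input keeps its successor
# inside the current viable set; iterate to a fixed point.
a, umax = 1.1, 0.05
xs = np.linspace(-1.0, 1.0, 401)      # grid over the constraint set
us = np.linspace(-umax, umax, 21)     # sampled inputs
viable = np.ones_like(xs, dtype=bool)

changed = True
while changed:
    changed = False
    for i, x in enumerate(xs):
        if not viable[i]:
            continue
        ok = False
        for u in us:
            xn = a * x + u
            if -1.0 <= xn <= 1.0:
                # nearest-neighbor interpolation back onto the grid
                j = int(round((xn + 1.0) / 2.0 * (len(xs) - 1)))
                if viable[j]:
                    ok = True
                    break
        if not ok:
            viable[i] = False
            changed = True

kernel = xs[viable]
print(kernel.min(), kernel.max())   # close to the analytic kernel [-0.5, 0.5]
```

For this system the true kernel is |x| <= umax/(a-1) = 0.5; the grid result over-approximates it slightly, and the exponential cost of such grids in higher dimensions is exactly the motivation of the surrounding paper.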
1302.5990
2062319484
Computing the viability kernel is key in providing guarantees of safety and proving existence of safety-preserving controllers for constrained dynamical systems. Current numerical techniques that approximate this construct suffer from a complexity that is exponential in the dimension of the state. We study conditions under which a linear time-invariant (LTI) system can be suitably decomposed into lower dimensional subsystems so as to admit a conservative computation of the viability kernel in a decentralized fashion in subspaces. We then present an isomorphism that imposes these desired conditions, most suitably on two-time-scale systems. Decentralized computations are performed in the transformed coordinates, yielding a conservative approximation of the viability kernel in the original state space. Significant reduction of complexity can be achieved, allowing the previously inapplicable tools to be employed for treatment of higher dimensional systems. We show the results on two examples including a 6-D system.
Recently, we presented a connection between the viability kernel and efficiently-computable classes of reachability constructs known as maximal reachable sets. Owing to this connection, scalable numerical algorithms (collectively referred to as @cite_48 ) such as @cite_6 @cite_27 @cite_1 @cite_13 @cite_2 @cite_36 @cite_17 , originally developed for maximal reachability, can now be used to approximate the viability kernel. We presented two algorithms for LTI systems with convex constraints based on piecewise ellipsoidal representations @cite_20 and support vectors @cite_0 that have polynomial complexity. In contrast to these results, the technique presented here reduces the complexity indirectly by decentralizing computations. The benefit of this approach is that it allows useful features of Eulerian methods such as gradient-based control synthesis and handling of arbitrarily shaped nonconvex constraints to be taken advantage of.
{ "cite_N": [ "@cite_36", "@cite_48", "@cite_1", "@cite_6", "@cite_0", "@cite_27", "@cite_2", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "", "1574960890", "1726255445", "2042835403", "2085744412", "2176215692", "", "", "2109569369", "" ], "abstract": [ "", "Using only the existence and uniqueness of trajectories for a generic dynamic system with inputs, we define and examine eight types of forward and backward reachability constructs. If the input is treated in a worst-case fashion, any forward or backward reach set or tube can be used for safety analysis, but if the input is treated in a best-case fashion only the backward reach tube always provides the correct results. Fortunately, forward and backward algorithms can be exchanged if well-posed reverse time trajectories can be defined. Unfortunately, backward reachability constructs are more likely to suffer from numerical stability issues, especially in systems with significant contraction--the very systems where forward simulation and reachability are most effective.", "This report describes the calculation of the reach sets and tubes for linear control systems with time-varying coefficients and hard bounds on the controls through tight external and internal ellipsoidal approximations. These approximating tubes touch the reach tubes from outside and inside respectively at every point of their boundary so that the surface of the reach tube is totally covered by curves that belong to the approximating tubes. The proposed approximation scheme induces a very small computational burden compared with other methods of reach set calculation. In particular such approximations may be expressed through ordinary differential equations with coefficients given in explicit analytical form. This yields exact parametric representation of reach tubes through families of external and internal ellipsoidal tubes. 
The proposed techniques, combined with calculation of external and internal approximations for intersections of ellipsoids, provide an approach to reachability problems for hybrid systems.", "This work is concerned with the algorithmic reachability analysis of continuous-time linear systems with constrained initial states and inputs. We propose an approach for computing an over-approximation of the set of states reachable on a bounded time interval. The main contribution over previous works is that it allows us to consider systems whose sets of initial states and inputs are given by arbitrary compact convex sets represented by their support functions. We actually compute two over-approximations of the reachable set. The first one is given by the union of convex sets with computable support functions. As the representation of convex sets by their support function is not suitable for some tasks, we derive from this first over-approximation a second one given by the union of polyhedrons. The overall computational complexity of our approach is comparable to the complexity of the most competitive available specialized algorithms for reachability analysis of linear systems using zonotopes or ellipsoids. The effectiveness of our approach is demonstrated on several examples.", "Abstract While a number of Lagrangian algorithms to approximate reachability in dozens or even hundreds of dimensions for systems with linear dynamics have recently appeared in the literature, no similarly scalable algorithms for approximating viable sets have been developed. In this paper we describe a connection between reachability and viability that enables us to compute the viability kernel using reach sets. This connection applies to any type of system, such as those with nonlinear dynamics and or non-convex state constraints; however, here we take advantage of it to construct three viability kernel approximation algorithms for linear systems with convex input and state constraint sets. 
We compare the performance of the three algorithms and demonstrate that the two based on highly scalable Lagrangian reachability–those using ellipsoidal and support vector set representations–are able to compute the viability kernel for linear systems of larger state dimension than was previously feasible using traditional Eulerian methods. Our results are illustrated on a 6-dimensional pharmacokinetic model and a 20-dimensional model of heat conduction on a lattice.", "We present a scalable reachability algorithm for hybrid systems with piecewise affine, non-deterministic dynamics. It combines polyhedra and support function representations of continuous sets to compute an over-approximation of the reachable states. The algorithm improves over previous work by using variable time steps to guarantee a given local error bound. In addition, we propose an improved approximation model, which drastically improves the accuracy of the algorithm. The algorithm is implemented as part of SpaceEx, a new verification platform for hybrid systems, available at spaceex.imag.fr. Experimental results of full fixed-point computations with hybrid systems with more than 100 variables illustrate the scalability of the approach.", "", "", "We present a connection between the viability kernel and maximal reachable sets. Current numerical schemes that compute the viability kernel suffer from a complexity that is exponential in the dimension of the state space. In contrast, extremely efficient and scalable techniques are available that compute maximal reachable sets. We show that under certain conditions these techniques can be used to conservatively approximate the viability kernel for possibly high-dimensional systems. We demonstrate the results on two practical examples, one of which is a seven-dimensional problem of safety in anesthesia.", "" ] }
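For the simplest possible set representation, the backward set recursion underlying these reachability-based kernel approximations can be written out for a scalar system, with intervals standing in for the ellipsoidal or support-function representations cited in this record. The system and numbers are hypothetical, chosen so the answer is known in closed form:

```python
# Toy interval-based viability kernel recursion for x+ = a*x + u,
# |u| <= umax, constraint K = [-1, 1]. Each step keeps the states in
# the current set from which SOME input reaches the current set:
#   K_{i+1} = {x in K_i : exists u, a*x + u in K_i}.
# For a > 0 this is [(lo - umax)/a, (hi + umax)/a] intersected with K_i.
a, umax = 1.1, 0.05
lo, hi = -1.0, 1.0
for _ in range(200):
    lo_next = max(lo, (lo - umax) / a)
    hi_next = min(hi, (hi + umax) / a)
    lo, hi = lo_next, hi_next
print(lo, hi)   # converges toward the true kernel [-0.5, 0.5]
```

Any finite truncation shrinks toward the kernel from outside, so it is an over-approximation; the ellipsoidal and support-vector algorithms referenced above are designed to instead give conservative (under-) approximations, which is what safety guarantees require.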
1302.6454
1488813780
This paper presents a new feedback shift register-based method for embedding deterministic test patterns on-chip suitable for complementing conventional BIST techniques for in-field testing. Our experimental results on 8 real designs show that the presented approach outperforms the bit-flipping approach by 24.7% on average. We also show that it is possible to exploit the uneven distribution of don't care bits in test patterns in order to reduce the area required for storing deterministic test patterns more than 3 times with less than 2% fault coverage drop.
The first algorithm for constructing a binary machine with the minimum number of stages for a given binary sequence was presented in @cite_19 . This algorithm exploits the unique property of binary machines that any binary @math -tuple can be the next state of a given current state. The algorithm assigns every 0 of a sequence a unique even integer and every 1 of a sequence a unique odd integer. Integers are assigned in an increasing order starting from 0. For example, if an 8-bit sequence 00101101 is given, the sequence of integers 0,2,1,4,3,5,6,7 can be used. This sequence of integers is interpreted as a sequence of states of a binary machine. The largest integer in the sequence of states determines the number of stages. In the example above, @math , thus the resulting binary machine has 3 stages. The feedback functions @math implementing the resulting current-to-next state mapping are derived using the traditional logic synthesis techniques @cite_26 .
{ "cite_N": [ "@cite_19", "@cite_26" ], "mid": [ "2128253600", "2105761964" ], "abstract": [ "The problem of constructing a binary machine with the minimum number of stages generating a given binary sequence is addressed. Binary machines are a generalization of nonlinear feedback shift registers (NLFSRs) in which both connections, feedback and feedforward, are allowed and no chain connection between the register stages is required. An algorithm for constructing a shortest binary machine generating a given periodic binary sequence is presented.", "1. Introduction.- 1.1 Design Styles for VLSI Systems.- 1.2 Automatic Logic Synthesis.- 1.3 PLA Implementation.- 1.4 History of Logic Minimization.- 1.5 ESPRESSO-II.- 1.6 Organization of the Book.- 2. Basic Definitions.- 2.1 Operations on Logic Functions.- 2.2 Algebraic Representation of a Logic Function.- 2.3 Cubes and Covers.- 3. Decomposition and Unate Functions.- 3.1 Cofactors and the Shannon Expansion.- 3.2 Merging.- 3.3 Unate Functions.- 3.4 The Choice of the Splitting Variable.- 3.5 Unate Complementation.- 3.6 SIMPLIFY.- 4. The ESPRESSO Minimization Loop and Algorithms.- 4.0 Introduction.- 4.1 Complementation.- 4.2 Tautology.- 4.2.1 Vanilla Recursive Tautology.- 4.2.2 Efficiency Results for Tautology.- 4.2.3 Improving the Efficiency of Tautology.- 4.2.4 Tautology for Multiple-Output Functions.- 4.3 Expand.- 4.3.1 The Blocking Matrix.- 4.3.2 The Covering Matrix.- 4.3.3 Multiple-Output Functions.- 4.3.4 Reduction of the Blocking and Covering Matrices.- 4.3.5 The Raising Set and Maximal Feasible Covering Set.- 4.3.6 The Endgame.- 4.3.7 The Primality of c+.- 4.4 Essential Primes.- 4.5 Irredundant Cover.- 4.6 Reduction.- 4.6.1 The Unate Recursive Paradigm for Reduction.- 4.6.2 Establishing the Recursive Paradigm.- 4.6.3 The Unate Case.- 4.7 Lastgasp.- 4.8 Makesparse.- 4.9 Output Splitting.- 5. Multiple-Valued Minimization.- 6. 
Experimental Results.- 6.1 Analysis of Raw Data for ESPRESSO-IIAPL.- 6.2 Analysis of Algorithms.- 6.3 Optimality of ESPRESSO-II Results.- 7. Comparisons and Conclusions.- 7.1 Qualitative Evaluation of Algorithms of ESPRESSO-II.- 7.2 Comparison with ESPRESSO-IIC.- 7.3 Comparison of ESPRESSO-II with Other Programs.- 7.4 Other Applications of Logic Minimization.- 7.5 Directions for Future Research.- References." ] }
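The even/odd state-assignment step of @cite_19 described above can be sketched directly from the worked example (an illustrative reconstruction, not the cited implementation; the final logic-synthesis step for the feedback functions is omitted):

```python
def binary_machine_states(seq):
    """Assign each 0 of the sequence a unique even integer and each 1 a
    unique odd integer, in increasing order starting from 0, giving the
    state sequence of the minimum-stage binary machine."""
    next_even, next_odd = 0, 1
    states = []
    for bit in seq:
        if bit == '0':
            states.append(next_even)
            next_even += 2
        else:
            states.append(next_odd)
            next_odd += 2
    return states

def num_stages(states):
    # The largest assigned state determines the register length in bits.
    return max(states).bit_length()

states = binary_machine_states("00101101")
print(states)              # [0, 2, 1, 4, 3, 5, 6, 7]
print(num_stages(states))  # 3
```

This reproduces the paper's example: the 8-bit sequence 00101101 maps to states 0, 2, 1, 4, 3, 5, 6, 7, and the largest state (7) requires a 3-stage machine.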
1302.6289
2952735140
We consider discrete-time plants that interact with their controllers via fixed discrete alphabets. For this class of systems, and in the absence of exogenous inputs, we propose a general, conceptual procedure for constructing a sequence of finite state approximate models starting from finite length sequences of input and output signal pairs. We explicitly derive conditions under which the proposed construct, used in conjunction with a particular generalized structure, satisfies desirable properties of @math approximations, thereby leading to nominal deterministic finite state machine models that can be used in certified-by-design controller synthesis. We also show that the cardinality of the minimal disturbance alphabet that can be used in this setting equals that of the sensor output alphabet. Finally, we show that the proposed construct satisfies a relevant semi-completeness property.
Other related research directions make use of symbolic models @cite_32 @cite_22 , approximating automata @cite_34 @cite_21 , and finite quotients of the system @cite_28 @cite_6 .
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_21", "@cite_32", "@cite_6", "@cite_34" ], "mid": [ "", "2570828688", "1968401573", "2150335178", "", "2163458574" ], "abstract": [ "", "Abstract The reduction of dynamic systems has a rich history, with many important applications related to stability, control and verification. Reduction of nonlinear systems is typically performed in an “exact” manner–as is the case with mechanical systems with symmetry–which, unfortunately, limits the type of systems to which it can be applied. The goal of this paper is to consider a more general form of reduction, termed approximate reduction , in order to extend the class of systems that can be reduced. Using notions related to incremental stability, we give conditions on when a dynamic system can be projected to a lower dimensional space while providing hard bounds on the induced errors, i.e. when it is behaviourally similar to a dynamic system on a lower dimensional space. These concepts are illustrated on a series of examples.", "This paper presents a new algorithm for generating finite-state automata models of hybrid systems in which the continuous-state dynamics can be switched when the continuous-state trajectory generates threshold events. The automata state transitions correspond to threshold events and the automata states correspond to portions of the threshold surfaces in the continuous state space. The hybrid system dynamics are approximated by the automata models in the sense that the languages of threshold event sequences generated by the automata contain the threshold event language for the hybrid system. Properties of the algorithm for constructing and refining the approximating automata are demonstrated and the application of approximating automata for system verification is illustrated for a switching controller for an inverted pendulum. 
Relationships to other approaches to hybrid system synthesis and verification are also discussed.", "We consider the following problem: given a linear system and a linear temporal logic (LTL) formula over a set of linear predicates in its state variables, find a feedback control law with polyhedral bounds and a set of initial states so that all trajectories of the closed loop system satisfy the formula. Our solution to this problem consists of three main steps. First, we partition the state space in accordance with the predicates in the formula, and construct a transition system over the partition quotient, which captures our capability of designing controllers. Second, using a procedure resembling model checking, we determine runs of the transition system satisfying the formula. Third, we generate the control strategy. Illustrative examples are included.", "", "The paper concerns the synthesis of supervisory controllers for a class of continuous-time hybrid systems with discrete-valued input signals that select differential inclusions for continuous-valued state trajectories and event-valued output signals generated by threshold crossings in the continuous state space, the supervisor is allowed to switch the input signal value when threshold events are observed. The objective is to synthesize a nonblocking supervisor such that the set of possible sequences of control and threshold event pairs for the closed-loop system lies between given upper and lower bounds in the sense of set containment. We show how this problem can be converted into a supervisor synthesis problem for a standard controlled discrete-event system (DES). A finite representation may not exist for the exact DES model of the hybrid system, however. 
To circumvent this difficulty, we present an algorithm for constructing finite-state Muller automata that accept outer approximations to the exact controlled threshold-event language, and we demonstrate that supervisors that solve the synthesis problem for the approximating automata achieve the control specifications when applied to the original hybrid system." ] }
1302.5914
1602547744
RSS-based device-free localization (DFL) monitors changes in the received signal strength (RSS) measured by a network of static wireless nodes to locate people without requiring them to carry or wear any electronic device. Current models assume that the spatial impact area, i.e., the area in which a person affects a link's RSS, has constant size. This paper shows that the spatial impact area varies considerably for each link. Data from extensive experiments are used to derive a multi-scale spatial weight model that is a function of the fade level, i.e., the difference between the predicted and measured RSS, and of the direction of RSS change. In addition, a measurement model is proposed which gives a probability of a person locating inside the derived spatial model for each given RSS measurement. A real-time radio tomographic imaging system is described which uses channel diversity and the presented models. Experiments in an open indoor environment, in a typical one-bedroom apartment and in a through-wall scenario are conducted to determine the accuracy of the system. We demonstrate that the new system is capable of localizing and tracking a person with high accuracy (<0.30 m) in all the environments, without the need to change the model parameters.
One approach to DFL is to estimate the changes in the RF propagation field of the monitored area and then to image this change field. This image can then be used to infer the locations of people within the deployed network. Estimating the changes in the propagation field is referred to as radio tomographic imaging (RTI), a term coined in @cite_0 . Several measurement modalities have been proposed for the purpose of RTI. In @cite_15 , the attenuation of every voxel in the monitored area is estimated using the RSS measurements of many links of the network. Attenuation-based RTI is capable of achieving high accuracy in small and unobstructed deployments; in cluttered environments, however, the system loses its capability to locate people.
{ "cite_N": [ "@cite_0", "@cite_15" ], "mid": [ "2161280016", "2151034334" ], "abstract": [ "Unlike current models for radio channel shadowing indicate, real-world shadowing losses on different links in a network are not independent. The correlations have both detrimental and beneficial impacts on sensor, ad hoc, and mesh networks. First, the probability of network connectivity reduces when link shadowing correlations are considered. Next, the variance bounds for sensor self-localization change, and provide the insight that algorithms must infer localization information from link correlations in order to avoid significant degradation from correlated shadowing. Finally, a major benefit is that shadowing correlations between links enable the tomographic imaging of an environment from pairwise RSS measurements. This paper applies measurement-based models, and measurements themselves, to analyze and to verify both the benefits and drawbacks of correlated link shadowing.", "Radio Tomographic Imaging (RTI) is an emerging technology for imaging the attenuation caused by physical objects in wireless networks. This paper presents a linear model for using received signal strength (RSS) measurements to obtain images of moving objects. Noise models are investigated based on real measurements of a deployed RTI system. Mean-squared error (MSE) bounds on image accuracy are derived, which are used to calculate the accuracy of an RTI system for a given node geometry. The ill-posedness of RTI is discussed, and Tikhonov regularization is used to derive an image estimator. Experimental results of an RTI experiment with 28 nodes deployed around a 441 square foot area are presented." ] }
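The attenuation-based estimator of @cite_15 casts RTI as a linear inverse problem, solved with Tikhonov regularization. A minimal sketch follows; the dimensions, random weight matrix, and plain-identity regularizer are illustrative assumptions (the cited work uses a spatially structured weight model and regularizer):

```python
import numpy as np

rng = np.random.default_rng(0)
n_links, n_voxels = 200, 20
W = rng.random((n_links, n_voxels))           # link-to-voxel spatial weights (assumed)
x_true = np.zeros(n_voxels)
x_true[7] = 1.0                               # a person attenuates one voxel
y = W @ x_true + 0.01 * rng.standard_normal(n_links)  # RSS changes on the links

# Tikhonov-regularized image estimate: x_hat = (W^T W + alpha I)^{-1} W^T y
alpha = 1.0
x_hat = np.linalg.solve(W.T @ W + alpha * np.eye(n_voxels), W.T @ y)
print(int(np.argmax(x_hat)))                  # voxel with the strongest estimated attenuation
```

The voxel with the largest estimated attenuation then indicates the person's position; in a real deployment W encodes each link's spatial impact area rather than random values.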
1302.5914
1602547744
RSS-based device-free localization (DFL) monitors changes in the received signal strength (RSS) measured by a network of static wireless nodes to locate people without requiring them to carry or wear any electronic device. Current models assume that the spatial impact area, i.e., the area in which a person affects a link's RSS, has constant size. This paper shows that the spatial impact area varies considerably for each link. Data from extensive experiments are used to derive a multi-scale spatial weight model that is a function of the fade level, i.e., the difference between the predicted and measured RSS, and of the direction of RSS change. In addition, a measurement model is proposed which gives a probability of a person locating inside the derived spatial model for each given RSS measurement. A real-time radio tomographic imaging system is described which uses channel diversity and the presented models. Experiments in an open indoor environment, in a typical one-bedroom apartment and in a through-wall scenario are conducted to determine the accuracy of the system. We demonstrate that the new system is capable of localizing and tracking a person with high accuracy (<0.30 m) in all the environments, without the need to change the model parameters.
In @cite_17 , variance-based RTI (VRTI) is presented, which is capable of localizing and tracking people even through walls of a building. A drawback of VRTI is that it is not capable of localizing stationary people, since the measurements are based on a windowed variance of RSS. In addition, VRTI is prone to intrinsic motion, i.e., variance of the RSS that is not caused by people. The problem is confronted in @cite_28 , where methods to reduce the noise of VRTI images are presented. Results indicate that the tracking accuracy can be improved significantly by removing the intrinsic noise from the estimated images.
{ "cite_N": [ "@cite_28", "@cite_17" ], "mid": [ "2152178374", "2147124620" ], "abstract": [ "Human motion in the vicinity of a wireless link causes variations in the link received signal strength (RSS). Device-free localization (DFL) systems, such as variance-based radio tomographic imaging (VRTI) use these RSS variations in a wireless network to detect, locate and track people in the area of the network, even through walls. However, intrinsic motion, such as branches moving in the wind, rotating or vibrating machinery, also causes RSS variations which degrade the performance of a DFL system. In this paper, we propose and evaluate a subspace decomposition method subspace variance-based radio tomography (SubVRT) to reduce the impact of the variations caused by intrinsic motion. Experimental results show that the SubVRT algorithm reduces localization root mean squared error (RMSE) by 41 . In addition, the Kalman filter tracking results from SubVRT have 97 of errors less than 1.4 m, a 65 improvement compared to tracking results from VRTI.", "This paper presents a new method for imaging, localizing, and tracking motion behind walls in real time. The method takes advantage of the motion-induced variance of received signal strength measurements made in a wireless peer-to-peer network. Using a multipath channel model, we show that the signal strength on a wireless link is largely dependent on the power contained in multipath components that travel through space containing moving objects. A statistical model relating variance to spatial locations of movement is presented and used as a framework for the estimation of a motion image. From the motion image, the Kalman filter is applied to recursively track the coordinates of a moving target. Experimental results for a 34-node through-wall imaging and tracking system over a 780 square foot area are presented." ] }
1302.5914
1602547744
RSS-based device-free localization (DFL) monitors changes in the received signal strength (RSS) measured by a network of static wireless nodes to locate people without requiring them to carry or wear any electronic device. Current models assume that the spatial impact area, i.e., the area in which a person affects a link's RSS, has constant size. This paper shows that the spatial impact area varies considerably for each link. Data from extensive experiments are used to derive a multi-scale spatial weight model that is a function of the fade level, i.e., the difference between the predicted and measured RSS, and of the direction of RSS change. In addition, a measurement model is proposed which gives a probability of a person locating inside the derived spatial model for each given RSS measurement. A real-time radio tomographic imaging system is described which uses channel diversity and the presented models. Experiments in an open indoor environment, in a typical one-bedroom apartment and in a through-wall scenario are conducted to determine the accuracy of the system. We demonstrate that the new system is capable of localizing and tracking a person with high accuracy (<0.30 m) in all the environments, without the need to change the model parameters.
A measurement modality capable of locating both stationary and moving people is presented in @cite_29 . Further, the proposed system can achieve high accuracy in open, cluttered, and even in through-wall environments. The system is based on calculating the kernel distance @cite_1 of two RSS histograms, a long-term histogram representing the RSS measurements when the link line is not obstructed, and a short-term histogram that is capable of capturing the temporal RSS variations when the person is in close proximity to the wireless link.
{ "cite_N": [ "@cite_29", "@cite_1" ], "mid": [ "2151364214", "1818863960" ], "abstract": [ "We present an interactive demonstration of histogram distance-based radio tomographic imaging (HD-RTI), a device-free localization (DFL) system that uses measurements of received signal strength (RSS) on static links in a wireless network to estimate the locations of people who do not participate in the system by wearing any radio device in the deployment area. Compared to prior methods of RSS-based DFL, using a histogram difference metric is a very accurate method to quantify the change in RSS on the link compared to historical metrics. The new method is remarkably accurate, and works with lower node densities than prior methods.", "This document reviews the definition of the kernel distance, providing a gentle introduction tailored to a reader with background in theoretical computer science, but limited exposure to technology more common to machine learning, functional analysis and geometric measure theory. The key aspect of the kernel distance developed here is its interpretation as an L2 distance between probability measures or various shapes (e.g. point sets, curves, surfaces) embedded in a vector space (specifically an RKHS). This structure enables several elegant and efficient solutions to data analysis problems. We conclude with a glimpse into the mathematical underpinnings of this measure, highlighting its recent independent evolution in two separate fields." ] }
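The kernel distance of @cite_1 used in @cite_29 is the L2 distance between kernel embeddings of the two RSS histograms. A minimal sketch is below; the Gaussian kernel, its bandwidth, and the binning are assumptions for illustration, not the cited system's exact choices:

```python
import numpy as np

def kernel_distance(p, q, centers, sigma=2.0):
    """D = sqrt((p - q)^T K (p - q)) between two normalized histograms
    p and q, with a Gaussian kernel matrix K over the bin centers."""
    d = centers[:, None] - centers[None, :]
    K = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    diff = p - q
    return float(np.sqrt(diff @ K @ diff))

bins = np.arange(-90.0, -60.0)                         # RSS bin centers in dBm (illustrative)
longterm = np.exp(-0.5 * ((bins + 80.0) / 1.5) ** 2)   # unobstructed-link histogram
longterm /= longterm.sum()
shortterm = np.exp(-0.5 * ((bins + 74.0) / 3.0) ** 2)  # recent, person-affected histogram
shortterm /= shortterm.sum()
print(kernel_distance(longterm, longterm, bins))       # 0.0 for identical histograms
print(kernel_distance(longterm, shortterm, bins))      # > 0 when the RSS distribution shifts
```

A large distance between the long-term and short-term histograms indicates that the person is in close proximity to the link, which is what the DFL system thresholds on.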
1302.5914
1602547744
RSS-based device-free localization (DFL) monitors changes in the received signal strength (RSS) measured by a network of static wireless nodes to locate people without requiring them to carry or wear any electronic device. Current models assume that the spatial impact area, i.e., the area in which a person affects a link's RSS, has constant size. This paper shows that the spatial impact area varies considerably for each link. Data from extensive experiments are used to derive a multi-scale spatial weight model that is a function of the fade level, i.e., the difference between the predicted and measured RSS, and of the direction of RSS change. In addition, a measurement model is proposed which gives a probability of a person locating inside the derived spatial model for each given RSS measurement. A real-time radio tomographic imaging system is described which uses channel diversity and the presented models. Experiments in an open indoor environment, in a typical one-bedroom apartment and in a through-wall scenario are conducted to determine the accuracy of the system. We demonstrate that the new system is capable of localizing and tracking a person with high accuracy (<0.30 m) in all the environments, without the need to change the model parameters.
The use of channel diversity to enhance DFL accuracy is presented in @cite_9 . The work ranks the different frequency channels used for communication based on two parameters: packet delivery ratio and channel fade level. The results demonstrate that fade level is a more important factor than communication performance when accurate DFL is required. Through channel diversity, the localization accuracy is shown to improve by an order of magnitude compared to a system communicating on a single channel. The multi-channel system in @cite_9 is used in a long-term residential monitoring application in @cite_26 . The work demonstrates the applicability of RSS-based DFL in real domestic environments and the ability of this technology to achieve high localization accuracy over extended periods of time.
{ "cite_N": [ "@cite_9", "@cite_26" ], "mid": [ "2006327170", "2008025584" ], "abstract": [ "Radio tomographic imaging (RTI) is an emerging device-free localization (DFL) technology enabling the localization of people and other objects without requiring them to carry any electronic device. Instead, the RF attenuation field of the deployment area of a wireless network is estimated using the changes in received signal strength (RSS) measured on links of the network. This paper presents the use of channel diversity to improve the localization accuracy of RTI. Two channel selection methods, based on channel packet reception rates (PRRs) and fade levels, are proposed. Experimental evaluations are performed in two different types of environments, and the results show that channel diversity improves localization accuracy by an order of magnitude. People can be located with average error as low as 0.10 m, the lowest DFL location error reported to date. We find that channel fade level is a more important statistic than PRR for RTI channel selection. Using channel diversity, this paper, for the first time, demonstrates that attenuation-based through-wall RTI is possible.", "Device-free localization (DFL) enables localizing people by monitoring the changes in the radio frequency (RF) attenuation field of an area where a wireless network is deployed. Notably, this technology does not require people to participate in the localization effort by carrying any electronic device. This paper presents a DFL system for long-term residential monitoring. Due to the daily activities carried out by the people being monitored, the radio signals' propagation patterns change continuously. This would make a system relying only on an initial calibration of the radio environment highly inaccurate in the long run. 
In this paper, we use an on-line recalibration method that allows the system to adapt to the changes in the radio environment, and then provide accurate position estimates in the long run. A finite-state machine (FSM) defines when the person is located at specific areas-of-interest (AoI) inside the house (e.g. kitchen, bathroom, bed, etc.). Moreover, each time a state transition is triggered, the system tweets the new AoI in a Twitter account. The FSM allows extracting higher level information about the daily routine of the person being monitored, enabling interested parties (e.g. caretakers, relatives) to check that everything is proceeding normally in his life. In the long-term experiment carried out in a real domestic environment, the system was able to accurately and reliably localize the person during his daily activities." ] }
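The fade level statistic that drives the channel ranking in @cite_9 can be sketched with a log-distance path-loss prediction. All model parameters below, and the sign convention (measured minus predicted, so positive values indicate an anti-fade link), are illustrative assumptions:

```python
import numpy as np

def fade_level(rss_dbm, dist_m, tx_dbm=0.0, pl0_db=40.0, n_exp=2.0, d0_m=1.0):
    """Fade level: mean measured RSS minus the RSS predicted by a
    log-distance path-loss model with reference loss pl0_db at d0_m
    and path-loss exponent n_exp (hypothetical parameter values)."""
    predicted = tx_dbm - (pl0_db + 10.0 * n_exp * np.log10(dist_m / d0_m))
    return float(np.mean(rss_dbm) - predicted)

# Link at 10 m: predicted RSS = -(40 + 20) = -60 dBm; measured mean = -51 dBm
print(fade_level(np.array([-50.0, -52.0]), dist_m=10.0))  # 9.0
```

Ranking each link's channels by this quantity is what lets the system prefer anti-fade channels, which the cited work found more important than packet delivery ratio for localization accuracy.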
1302.5914
1602547744
RSS-based device-free localization (DFL) monitors changes in the received signal strength (RSS) measured by a network of static wireless nodes to locate people without requiring them to carry or wear any electronic device. Current models assume that the spatial impact area, i.e., the area in which a person affects a link's RSS, has constant size. This paper shows that the spatial impact area varies considerably for each link. Data from extensive experiments are used to derive a multi-scale spatial weight model that is a function of the fade level, i.e., the difference between the predicted and measured RSS, and of the direction of RSS change. In addition, a measurement model is proposed which gives a probability of a person locating inside the derived spatial model for each given RSS measurement. A real-time radio tomographic imaging system is described which uses channel diversity and the presented models. Experiments in an open indoor environment, in a typical one-bedroom apartment and in a through-wall scenario are conducted to determine the accuracy of the system. We demonstrate that the new system is capable of localizing and tracking a person with high accuracy (<0.30 m) in all the environments, without the need to change the model parameters.
A drawback of imaging-based DFL systems is that they first estimate the changed RF-propagation field and then the coordinates of the person. In this two-step process, information can be lost and additional measurement noise can be introduced. Hence, methods to estimate the person's location directly from the RSS measurements are provided in @cite_5 @cite_25 @cite_21 . In @cite_5 , a particle filter is applied to simultaneously estimate the location of the sensors and the coordinates of a person moving inside the monitored area. In @cite_25 , a fade level skew-Laplace signal strength model and a statistical inversion method are introduced to estimate the location of people. In @cite_21 , an online learning algorithm is used to determine whether a link is obstructed by a person or not, and a particle filter is used to locate the person. These works rely on sequential Monte Carlo methods to estimate the position of the person. Due to the computational complexity of particle filters, especially when the number of particles is high, these systems cannot operate in real time.
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_25" ], "mid": [ "2162814504", "1972766851", "2118097499" ], "abstract": [ "This paper presents and evaluates a method for simultaneously tracking a target while localizing the sensor nodes of a passive device-free tracking system. The system uses received signal strength (RSS) measurements taken on the links connecting many nodes in a wireless sensor network, with nodes deployed such that the links overlap across the region. A target moving through the region attenuates links intersecting or nearby its path. At the same time, RSS measurements provide information about the relative locations of sensor nodes. We utilize the Sequential Monte Carlo (particle filtering) framework for tracking, and we use an online EM algorithm to simultaneously estimate static parameters (including the sensor locations, as well as model parameters including noise variance and attenuation strength of the target). Simultaneous tracking, online calibration and parameter estimation enable rapid deployment of a RSS-based device free localization system, e.g., in emergency response scenarios. Simulation results and experiments with a wireless sensor network testbed illustrate that the proposed tracking method performs well in a variety of settings.", "This paper presents a novel method for tracking a moving person or object through walls using wireless networks. The method takes advantage of the motion-induced variation of received signal strength (RSS) measurements in a radio tomography network. Based on real measurements of a deployed network, we show that the RSS distribution on a wireless link can be modeled as a mixture of Gaussians. An online learning algorithm is then proposed to update the model and detect whether the link is affected by the motion. Using spatial locations of the affected links, we apply the sequential Monte Carlo (SMC) methods to track the coordinates of a moving target. 
Experimental results show that the proposed method achieves high tracking accuracy in time-varying environment without the need for offline training.", "Device-free localization (DFL) is the estimation of the position of a person or object that does not carry any electronic device or tag. Existing model-based methods for DFL from RSS measurements are unable to locate stationary people in heavily obstructed environments. This paper introduces measurement-based statistical models that can be used to estimate the locations of both moving and stationary people using received signal strength (RSS) measurements in wireless networks. A key observation is that the statistics of RSS during human motion are strongly dependent on the RSS \"fade level” during no motion. We define fade level and demonstrate, using extensive experimental data, that changes in signal strength measurements due to human motion can be modeled by the skew-Laplace distribution, with parameters dependent on the position of person and the fade level. Using the fade-level skew-Laplace model, we apply a particle filter to experimentally estimate the location of moving and stationary people in very different environments without changing the model parameters. We also show the ability to track more than one person with the model." ] }
1302.5914
1602547744
RSS-based device-free localization (DFL) monitors changes in the received signal strength (RSS) measured by a network of static wireless nodes to locate people without requiring them to carry or wear any electronic device. Current models assume that the spatial impact area, i.e., the area in which a person affects a link's RSS, has constant size. This paper shows that the spatial impact area varies considerably for each link. Data from extensive experiments are used to derive a multi-scale spatial weight model that is a function of the fade level, i.e., the difference between the predicted and measured RSS, and of the direction of RSS change. In addition, a measurement model is proposed which gives a probability of a person locating inside the derived spatial model for each given RSS measurement. A real-time radio tomographic imaging system is described which uses channel diversity and the presented models. Experiments in an open indoor environment, in a typical one-bedroom apartment and in a through-wall scenario are conducted to determine the accuracy of the system. We demonstrate that the new system is capable of localizing and tracking a person with high accuracy (<0.30 m) in all the environments, without the need to change the model parameters.
In the works presented above, the sensors composing the network are deployed around the monitored area. In @cite_14 , the sensors are mounted above the monitored area, i.e., hanging from the ceiling, and a tracking algorithm is applied to estimate the person's location when the RSS changes exceed a predefined threshold. The work is extended in @cite_3 , where a clustering algorithm is used to track two people.
{ "cite_N": [ "@cite_14", "@cite_3" ], "mid": [ "2126300356", "2112130409" ], "abstract": [ "In traditional radio-based localization methods, the target object has to carry a transmitter (e.g., active RFID), a receiver (e.g., 802.11x detector), or a transceiver (e.g., sensor node). However, in some applications, such as safe guard systems, it is not possible to meet this precondition. In this paper, we propose a model of signal dynamics to allow tracking of transceiver-free objects. Based on radio signal strength indicator (RSSI), which is readily available in wireless communication, three tracking algorithms are proposed to eliminate noise behaviors and improve accuracy. The midpoint and intersection algorithms can be applied to track a single object without calibration, while the best-cover algorithm has potential to track multiple objects but requires calibration. Our experimental test-bed is a grid sensor array based on MICA2 sensor nodes. The experimental results show that the best side length between sensor nodes in the grid is 2 meters and the best-cover algorithm can reach localization accuracy to 0.99 m", "RF-based transceiver-free object tracking, originally proposed by the authors, allows real-time tracking of a moving object, where the object does not have to be equipped with an RF transceiver. Our previous algorithm, the best cover algorithm, suffers from a drawback, i.e., it does not work well when there are multiple objects in the tracking area. In this paper, we propose a localization model of distance, transmission power and the signal dynamics caused by the objects. The signal dynamics are derived from the measured Radio Signal Strength Indication (RSSI). Using this new model, we propose the “probabilistic cover algorithm” which is based on distributed dynamic clustering thus it can dramatically improve the localization accuracy when multiple objects are present. 
Moreover, the probabilistic cover algorithm can reduce the tracking latency in the system. We argue that the small overhead of the proposed algorithm makes it scalable for large deployment. Experimental results show that in addition to its ability to identify multiple objects, the tracking accuracy is improved at a rate of 10 to 20 ." ] }
1302.5914
1602547744
RSS-based device-free localization (DFL) monitors changes in the received signal strength (RSS) measured by a network of static wireless nodes to locate people without requiring them to carry or wear any electronic device. Current models assume that the spatial impact area, i.e., the area in which a person affects a link's RSS, has constant size. This paper shows that the spatial impact area varies considerably for each link. Data from extensive experiments are used to derive a multi-scale spatial weight model that is a function of the fade level, i.e., the difference between the predicted and measured RSS, and of the direction of RSS change. In addition, a measurement model is proposed which gives a probability of a person locating inside the derived spatial model for each given RSS measurement. A real-time radio tomographic imaging system is described which uses channel diversity and the presented models. Experiments in an open indoor environment, in a typical one-bedroom apartment and in a through-wall scenario are conducted to determine the accuracy of the system. We demonstrate that the new system is capable of localizing and tracking a person with high accuracy (<0.30 m) in all the environments, without the need to change the model parameters.
In this paper, we adapt an imaging-based solution @cite_0 @cite_15 @cite_26 @cite_17 @cite_28 @cite_29 @cite_9 to estimate the changes in the RF propagation field. However, these works assume identical spatial impact areas for all links, whereas our algorithm is the first imaging-based DFL approach to set each link's spatial impact area individually. Extensive experiments are conducted to derive a multi-scale spatial weight model which more accurately describes the human-induced RSS changes with respect to the spatial location of people. In addition, a new measurement model is introduced which gives the probability that the person is located inside the modeled area. The multi-scale weight and measurement models are built upon the concept of fade level. Channel diversity is exploited to enhance the accuracy of the system as in @cite_26 @cite_9 . However, due to the more accurate weight and measurement models, the system achieves its best accuracy when all the available channels are utilized to estimate the RF propagation field.
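The fade-level idea above can be sketched briefly. The function names, width values, and the elliptical shape of the impact area below are illustrative assumptions for demonstration, not the paper's exact model or parameters:

```python
import math

# Illustrative sketch of a fade-level-dependent link impact area.
# Width values and the elliptical shape are assumptions, not the
# parameters derived in the paper.

def fade_level(measured_rss, predicted_rss):
    """Fade level: difference between measured and predicted RSS (dB)."""
    return measured_rss - predicted_rss

def impact_half_width(fl, narrow=0.2, wide=1.0):
    """Anti-fade links (fl > 0) get a narrow impact area, deep-fade
    links a wide one -- the 'multi-scale' part of the weight model."""
    return narrow if fl > 0 else wide

def in_impact_area(p, tx, rx, half_width):
    """True if point p lies inside the ellipse with foci at tx and rx."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p, tx) + dist(p, rx) <= dist(tx, rx) + half_width
```

A person standing near the link line of a deep-fade link would then fall inside its (wider) ellipse and contribute weight to that link's image estimate.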
{ "cite_N": [ "@cite_26", "@cite_28", "@cite_29", "@cite_9", "@cite_0", "@cite_15", "@cite_17" ], "mid": [ "2008025584", "2152178374", "2151364214", "2006327170", "2161280016", "2151034334", "2147124620" ], "abstract": [ "Device-free localization (DFL) enables localizing people by monitoring the changes in the radio frequency (RF) attenuation field of an area where a wireless network is deployed. Notably, this technology does not require people to participate in the localization effort by carrying any electronic device. This paper presents a DFL system for long-term residential monitoring. Due to the daily activities carried out by the people being monitored, the radio signals' propagation patterns change continuously. This would make a system relying only on an initial calibration of the radio environment highly inaccurate in the long run. In this paper, we use an on-line recalibration method that allows the system to adapt to the changes in the radio environment, and then provide accurate position estimates in the long run. A finite-state machine (FSM) defines when the person is located at specific areas-of-interest (AoI) inside the house (e.g. kitchen, bathroom, bed, etc.). Moreover, each time a state transition is triggered, the system tweets the new AoI in a Twitter account. The FSM allows extracting higher level information about the daily routine of the person being monitored, enabling interested parties (e.g. caretakers, relatives) to check that everything is proceeding normally in his life. In the long-term experiment carried out in a real domestic environment, the system was able to accurately and reliably localize the person during his daily activities.", "Human motion in the vicinity of a wireless link causes variations in the link received signal strength (RSS). 
Device-free localization (DFL) systems, such as variance-based radio tomographic imaging (VRTI) use these RSS variations in a wireless network to detect, locate and track people in the area of the network, even through walls. However, intrinsic motion, such as branches moving in the wind, rotating or vibrating machinery, also causes RSS variations which degrade the performance of a DFL system. In this paper, we propose and evaluate a subspace decomposition method subspace variance-based radio tomography (SubVRT) to reduce the impact of the variations caused by intrinsic motion. Experimental results show that the SubVRT algorithm reduces localization root mean squared error (RMSE) by 41 . In addition, the Kalman filter tracking results from SubVRT have 97 of errors less than 1.4 m, a 65 improvement compared to tracking results from VRTI.", "We present an interactive demonstration of histogram distance-based radio tomographic imaging (HD-RTI), a device-free localization (DFL) system that uses measurements of received signal strength (RSS) on static links in a wireless network to estimate the locations of people who do not participate in the system by wearing any radio device in the deployment area. Compared to prior methods of RSS-based DFL, using a histogram difference metric is a very accurate method to quantify the change in RSS on the link compared to historical metrics. The new method is remarkably accurate, and works with lower node densities than prior methods.", "Radio tomographic imaging (RTI) is an emerging device-free localization (DFL) technology enabling the localization of people and other objects without requiring them to carry any electronic device. Instead, the RF attenuation field of the deployment area of a wireless network is estimated using the changes in received signal strength (RSS) measured on links of the network. This paper presents the use of channel diversity to improve the localization accuracy of RTI. 
Two channel selection methods, based on channel packet reception rates (PRRs) and fade levels, are proposed. Experimental evaluations are performed in two different types of environments, and the results show that channel diversity improves localization accuracy by an order of magnitude. People can be located with average error as low as 0.10 m, the lowest DFL location error reported to date. We find that channel fade level is a more important statistic than PRR for RTI channel selection. Using channel diversity, this paper, for the first time, demonstrates that attenuation-based through-wall RTI is possible.", "Unlike current models for radio channel shadowing indicate, real-world shadowing losses on different links in a network are not independent. The correlations have both detrimental and beneficial impacts on sensor, ad hoc, and mesh networks. First, the probability of network connectivity reduces when link shadowing correlations are considered. Next, the variance bounds for sensor self-localization change, and provide the insight that algorithms must infer localization information from link correlations in order to avoid significant degradation from correlated shadowing. Finally, a major benefit is that shadowing correlations between links enable the tomographic imaging of an environment from pairwise RSS measurements. This paper applies measurement-based models, and measurements themselves, to analyze and to verify both the benefits and drawbacks of correlated link shadowing.", "Radio Tomographic Imaging (RTI) is an emerging technology for imaging the attenuation caused by physical objects in wireless networks. This paper presents a linear model for using received signal strength (RSS) measurements to obtain images of moving objects. Noise models are investigated based on real measurements of a deployed RTI system. 
Mean-squared error (MSE) bounds on image accuracy are derived, which are used to calculate the accuracy of an RTI system for a given node geometry. The ill-posedness of RTI is discussed, and Tikhonov regularization is used to derive an image estimator. Experimental results of an RTI experiment with 28 nodes deployed around a 441 square foot area are presented.", "This paper presents a new method for imaging, localizing, and tracking motion behind walls in real time. The method takes advantage of the motion-induced variance of received signal strength measurements made in a wireless peer-to-peer network. Using a multipath channel model, we show that the signal strength on a wireless link is largely dependent on the power contained in multipath components that travel through space containing moving objects. A statistical model relating variance to spatial locations of movement is presented and used as a framework for the estimation of a motion image. From the motion image, the Kalman filter is applied to recursively track the coordinates of a moving target. Experimental results for a 34-node through-wall imaging and tracking system over a 780 square foot area are presented." ] }
1302.5611
2950844687
Transit Node Routing (TNR) is a fast and exact distance oracle for road networks. We show several new results for TNR. First, we give a surprisingly simple implementation fully based on Contraction Hierarchies that speeds up preprocessing by an order of magnitude approaching the time for just finding a CH (which alone has two orders of magnitude larger query time). We also develop a very effective purely graph theoretical locality filter without any compromise in query times. Finally, we show that a specialization to the online many-to-one (or one-to-many) shortest path further speeds up query time by an order of magnitude. This variant even has better query time than the fastest known previous methods which need much more space.
A related technique is hub labeling (HL) by Abraham et al. @cite_8 , which stores sorted CH search spaces and intersects them to obtain the distance. With sophisticated tuning, this can be made significantly faster than TNR since it incurs fewer cache faults. However, HL needs much more space than TNR.
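The intersection of two sorted label sets reduces to a linear merge over (hub, distance) pairs. This is only the core idea, without the cache-oriented tuning the authors apply; names and data layout are illustrative:

```python
def hl_distance(labels_s, labels_t):
    """Merge-intersect two hub-label lists, each sorted by hub id,
    and return the minimum combined distance (inf if no shared hub)."""
    i, j, best = 0, 0, float("inf")
    while i < len(labels_s) and j < len(labels_t):
        hub_s, dist_s = labels_s[i]
        hub_t, dist_t = labels_t[j]
        if hub_s == hub_t:
            # Shared hub: candidate s -> hub -> t distance.
            best = min(best, dist_s + dist_t)
            i += 1
            j += 1
        elif hub_s < hub_t:
            i += 1
        else:
            j += 1
    return best
```

Because both lists are scanned strictly forward, the access pattern is sequential, which is what makes the approach cache-friendly compared to graph searches.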
{ "cite_N": [ "@cite_8" ], "mid": [ "2143072141" ], "abstract": [ "We study the problem of computing batched shortest paths in road networks efficiently. Our focus is on computing paths from a single source to multiple targets (one-to-many queries). We perform a comprehensive experimental comparison of several approaches, including new ones. We conclude that a new extension of PHAST (a recent one-to-all algorithm), called RPHAST, has the best performance in most cases, often by orders of magnitude. When used to compute distance tables (many-to-many queries), RPHAST often outperforms all previous approaches." ] }
1302.5734
2949888555
Flow watermarks efficiently link packet flows in a network in order to thwart various attacks such as stepping stones. We study the problem of designing good flow watermarks. Earlier flow watermarking schemes mostly considered substitution errors, neglecting the effects of packet insertions and deletions that commonly happen within a network. More recent schemes consider packet deletions but often at the expense of the watermark visibility. We present an invisible flow watermarking scheme capable of enduring a large number of packet losses and insertions. To maintain invisibility, our scheme uses quantization index modulation (QIM) to embed the watermark into inter-packet delays, as opposed to time intervals including many packets. As the watermark is injected within individual packets, packet losses and insertions may lead to watermark desynchronization and substitution errors. To address this issue, we add a layer of error-correction coding to our scheme. Experimental results on both synthetic and real network traces demonstrate that our scheme is robust to network jitter, packet drops and splits, while remaining invisible to an attacker.
Earlier flow watermarks are of the inter-packet delay (IPD) type. In @cite_23 , the authors propose an IPD-based scheme that modulates the mean of selected IPDs using the QIM framework. Watermark synchronization is lost if enough packets are dropped or split, making the scheme unreliable. Another IPD-based scheme is presented in @cite_25 , where watermarks are added by enlarging or shrinking the IPDs. This non-blind scheme achieves some watermark resynchronization when packets are dropped or split, but it is not scalable, as the original packet flow is required during decoding.
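The QIM idea can be sketched for a single watermark bit embedded into one IPD: the delay is snapped to one of two interleaved quantization lattices, and the decoder checks which lattice the observed delay is closest to. The quantizer step and the delay values here are illustrative, not the schemes' actual parameters:

```python
def qim_embed(ipd, bit, q=0.01):
    """Move the IPD (seconds) to the nearest point of the lattice
    carrying `bit`: multiples of q for bit 0, multiples of q
    shifted by q/2 for bit 1."""
    offset = (q / 2) * bit
    return round((ipd - offset) / q) * q + offset

def qim_decode(ipd, q=0.01):
    """Recover the bit from whichever lattice the IPD is closest to."""
    r = ipd % q
    dist_to_bit0 = min(r, q - r)       # distance to a multiple of q
    dist_to_bit1 = abs(r - q / 2)      # distance to the shifted lattice
    return 0 if dist_to_bit0 < dist_to_bit1 else 1
```

Jitter smaller than q/4 leaves the bit decodable, which is why embedding per-IPD (rather than per-interval) must be paired with error-correction coding against losses and insertions, as the paper does.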
{ "cite_N": [ "@cite_25", "@cite_23" ], "mid": [ "187122915", "2139978474" ], "abstract": [ "Linking network flows is an important problem in intrusion detection as well as anonymity. Passive traffic analysis can link flows but requires long periods of observation to reduce errors. Watermarking techniques allow for better precision and blind detection, but they do so by introducing significant delays to the traffic flow, enabling attacks that detect and remove the mark, while at the same time slowing down legitimate traffic. We propose a new, non-blind watermarking scheme called RAINBOW that is able to use delays hundreds of times smaller than existing watermarks by eliminating the interference caused by the flow in the blind case. As a result, our watermark is invisible to detection, as confirmed by experiments using information-theoretic detection tools. We analyze the error rates of our scheme based on a mathematical model of network traffic and jitter. We also validate the analysis using an implementation running on PlanetLab. We find that our scheme generates orders of magnitudes lower rates of false errors than passive traffic analysis, while using only a few hundred observed packets. We also extend our scheme so that it is robust to packet drops and repacketization and show that flows can still be reliably linked, though at the cost of somewhat longer observation periods.", "Network based intruders seldom attack directly from their own hosts, but rather stage their attacks through intermediate \"stepping stones\" to conceal their identity and origin. To identify attackers behind stepping stones, it is necessary to be able to correlate connections through stepping stones, even if those connections are encrypted or perturbed by the intruder to prevent traceability.The timing-based approach is the most capable and promising current method for correlating encrypted connections. 
However, previous timing-based approaches are vulnerable to packet timing perturbations introduced by the attacker at stepping stones. In this paper, we propose a novel watermark-based correlation scheme that is designed specifically to be robust against timing perturbations. The watermark is introduced by slightly adjusting the timing of selected packets of the flow. By utilizing redundancy techniques, we have developed a robust watermark correlation framework that reveals a rather surprising result on the inherent limits of independent and identically distributed (iid) random timing perturbations over sufficiently long flows. We also identify the tradeoffs between timing perturbation characteristics and achievable correlation effectiveness. Experiments show that the new method performs significantly better than existing, passive, timing-based correlation in the presence of random packet timing perturbations." ] }
1302.5101
2951341567
A password composition policy restricts the space of allowable passwords to eliminate weak passwords that are vulnerable to statistical guessing attacks. Usability studies have demonstrated that existing password composition policies can sometimes result in weaker password distributions; hence a more principled approach is needed. We introduce the first theoretical model for optimizing password composition policies. We study the computational and sample complexity of this problem under different assumptions on the structure of policies and on users' preferences over passwords. Our main positive result is an algorithm that -- with high probability --- constructs almost optimal policies (which are specified as a union of subsets of allowed passwords), and requires only a small number of samples of users' preferred passwords. We complement our theoretical results with simulations using a real-world dataset of 32 million passwords.
It has been repeatedly demonstrated that users tend to select easily guessable passwords @cite_20 @cite_0 @cite_5 , and NIST recommends that organizations "also ensure that other trivial passwords cannot be set" to thwart potential attackers @cite_14 . Unfortunately, this task is more difficult than it might appear at first. Policies were initially developed without empirical data to support them, since such data was not available to policy designers @cite_16 . When hackers leaked the RockYou dataset to the Internet, both researchers and attackers suddenly had access to real password data, leading to many insights into actual password choices @cite_19 . However, recent research analyzing leaked datasets from non-English speakers, notably Hebrew- and Chinese-language websites, shows that trivial password choices can vary between contexts, making a simple blacklist approach ineffective @cite_2 . This means that, depending on the context, a policy based on leaked password data might provide no security guarantee, and it raises ethical issues as well.
{ "cite_N": [ "@cite_14", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "", "1548573590", "2097267243", "2119545418", "2048755632", "2618675491", "" ], "abstract": [ "", "We propose to strengthen user-selected passwords against statistical-guessing attacks by allowing users of Internet-scale systems to choose any password they want--so long as it's not already too popular with other users. We create an oracle to identify undesirably popular passwords using an existing data structure known as a count-min sketch, which we populate with existing users' passwords and update with each new user password. Unlike most applications of probabilistic data structures, which seek to achieve only a maximum acceptable rate false-positives, we set a minimum acceptable false-positive rate to confound attackers who might query the oracle or even obtain a copy of it.", "In this paper we attempt to determine the effectiveness of using entropy, as defined in NIST SP800-63, as a measurement of the security provided by various password creation policies. This is accomplished by modeling the success rate of current password cracking techniques against real user passwords. These data sets were collected from several different websites, the largest one containing over 32 million passwords. This focus on actual attack methodologies and real user passwords quite possibly makes this one of the largest studies on password security to date. In addition we examine what these results mean for standard password creation policies, such as minimum password length, and character set requirements.", "We examine the password policies of 75 different websites. Our goal is understand the enormous diversity of requirements: some will accept simple six-character passwords, while others impose rules of great complexity on their users. We compare different features of the sites to find which characteristics are correlated with stronger policies. 
Our results are surprising: greater security demands do not appear to be a factor. The size of the site, the number of users, the value of the assets protected and the frequency of attacks show no correlation with strength. In fact we find the reverse: some of the largest, most attacked sites with greatest assets allow relatively weak passwords. Instead, we find that those sites that accept advertising, purchase sponsored links and where the user has a choice show strong inverse correlation with strength. We conclude that the sites with the most restrictive password policies do not have greater security concerns, they are simply better insulated from the consequences of poor usability. Online retailers and sites that sell advertising must compete vigorously for users and traffic. In contrast to government and university sites, poor usability is a luxury they cannot afford. This in turn suggests that much of the extra strength demanded by the more restrictive policies is superfluous: it causes considerable inconvenience for negligible security improvement.", "We report on the largest corpus of user-chosen passwords ever studied, consisting of anonymized password histograms representing almost 70 million Yahoo! users, mitigating privacy concerns while enabling analysis of dozens of subpopulations based on demographic factors and site usage characteristics. This large data set motivates a thorough statistical treatment of estimating guessing difficulty by sampling from a secret distribution. In place of previously used metrics such as Shannon entropy and guessing entropy, which cannot be estimated with any realistically sized sample, we develop partial guessing metrics including a new variant of guesswork parameterized by an attacker's desired success rate. Our new metric is comparatively easy to approximate and directly relevant for security engineering. 
By comparing password distributions with a uniform distribution which would provide equivalent security against different forms of guessing attack, we estimate that passwords provide fewer than 10 bits of security against an online, trawling attack, and only about 20 bits of security against an optimal offline dictionary attack. We find surprisingly little variation in guessing difficulty; every identifiable group of users generated a comparably weak password distribution. Security motivations such as the registration of a payment card have no greater impact than demographic factors such as age and nationality. Even proactive efforts to nudge users towards better password choices with graphical feedback make little difference. More surprisingly, even seemingly distant language communities choose the same weak passwords and an attacker never gains more than a factor of 2 efficiency gain by switching from the globally optimal dictionary to a population-specific lists.", "This recommendation provides technical guidelines for Federal agencies implementing electronic authentication and is not intended to constrain the development or use of standards outside of this purpose. The recommendation covers remote authentication of users (such as employees, contractors, or private individuals) interacting with government IT systems over open networks. It defines technical requirements for each of four levels of assurance in the areas of identity proofing, registration, tokens, management processes, authentication protocols and related assertions. This publication supersedes NIST SP 800-63-1.", "" ] }
1302.5101
2951341567
A password composition policy restricts the space of allowable passwords to eliminate weak passwords that are vulnerable to statistical guessing attacks. Usability studies have demonstrated that existing password composition policies can sometimes result in weaker password distributions; hence a more principled approach is needed. We introduce the first theoretical model for optimizing password composition policies. We study the computational and sample complexity of this problem under different assumptions on the structure of policies and on users' preferences over passwords. Our main positive result is an algorithm that -- with high probability --- constructs almost optimal policies (which are specified as a union of subsets of allowed passwords), and requires only a small number of samples of users' preferred passwords. We complement our theoretical results with simulations using a real-world dataset of 32 million passwords.
Schechter et al. suggest using a popularity oracle to prevent individual passwords that have been used too frequently from being selected by new users. They also propose using the count-min sketch data structure @cite_11 to build such a popularity oracle. Malone and Maher suggest a similar system using a Metropolis-Hastings scheme to force an approximately uniform distribution over passwords. Usability results on the effectiveness of dictionary checks @cite_3 suggest that such policies would be very frustrating, since the policy is hidden from users behind an oracle. In contrast, we seek to construct optimal policies from combinations of rules that are visible to the user and can be described in natural language.
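A count-min sketch popularity oracle can be sketched in a few lines. The width, depth, threshold, and hashing below are illustrative choices, not the exact design of the cited work:

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch, as could back a password-popularity
    oracle. Parameters are illustrative."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        # One independent hash per row, derived from a row-salted digest.
        for row in range(self.depth):
            h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            yield row, int(h, 16) % self.width

    def add(self, item):
        for row, col in self._buckets(item):
            self.table[row][col] += 1

    def estimate(self, item):
        # Never undercounts; collisions can only inflate the estimate.
        return min(self.table[row][col] for row, col in self._buckets(item))

def too_popular(sketch, password, threshold):
    """Reject a candidate password once its estimated count is too high."""
    return sketch.estimate(password) >= threshold
```

Because the sketch only ever overestimates, an oracle built on it can err toward rejecting a password as "too popular," never toward accepting one that is.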
{ "cite_N": [ "@cite_3", "@cite_11" ], "mid": [ "2113266120", "2080234606" ], "abstract": [ "Text-based passwords are the most common mechanism for authenticating humans to computer systems. To prevent users from picking passwords that are too easy for an adversary to guess, system administrators adopt password-composition policies (e.g., requiring passwords to contain symbols and numbers). Unfortunately, little is known about the relationship between password-composition policies and the strength of the resulting passwords, or about the behavior of users (e.g., writing down passwords) in response to different policies. We present a large-scale study that investigates password strength, user behavior, and user sentiment across four password-composition policies. We characterize the predictability of passwords by calculating their entropy, and find that a number of commonly held beliefs about password composition and strength are inaccurate. We correlate our results with user behavior and sentiment to produce several recommendations for password-composition policies that result in strong passwords without unduly burdening users.", "We introduce a new sublinear space data structure--the count-min sketch--for summarizing data streams. Our sketch allows fundamental queries in data stream summarization such as point, range, and inner product queries to be approximately answered very quickly; in addition, it can be applied to solve several important problems in data streams such as finding quantiles, frequent items, etc. The time and space bounds we show for using the CM sketch to solve these problems significantly improve those previously known--typically from 1 e2 to 1 e in factor." ] }
1302.5192
2238041365
Erasure codes are an integral part of many distributed storage systems aimed at Big Data, since they provide high fault-tolerance for low overheads. However, traditional erasure codes are inefficient on reading stored data in degraded environments (when nodes might be unavailable), and on replenishing lost data (vital for long term resilience). Consequently, novel codes optimized to cope with distributed storage system nuances are vigorously being researched. In this paper, we take an engineering alternative, exploring the use of simple and mature techniques ‐ juxtaposing a standard erasure code with RAID4 like parity. We carry out an analytical study to determine the efficacy of this approach over traditional as well as some novel codes. We build upon this study to design CORE, a general storage primitive that we integrate into HDFS. We benchmark this implementation in a proprietary cluster and in EC2. Our experiments show that compared to traditional erasure codes, CORE uses 50 less bandwidth and is up to 75 faster while recovering a single failed node, while the gains are respectively 15 and 60 for double node failures.
Erasure codes have long been explored as a storage-efficient alternative to replication for achieving fault tolerance @cite_15 in the peer-to-peer (P2P) systems literature, and have led to numerous prototypes, e.g., OceanStore @cite_17 and TotalRecall @cite_21 . In recent years erasure codes have gained traction @cite_3 even in mainstream storage technologies such as RAID @cite_16 . The ideas from RAID systems are in turn permeating to Cloud settings @cite_29 @cite_24 , and erasure codes have become an integral part of many proprietary file systems used in data centers @cite_10 @cite_6 , as well as open-source variants @cite_0 .
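The RAID4-like parity that CORE juxtaposes with a standard erasure code reduces, in its simplest single-failure form, to byte-wise XOR. A toy sketch (block contents are illustrative):

```python
# RAID4-like XOR parity over equal-length data blocks: one parity
# block allows rebuilding any single missing block.

def parity(blocks):
    """Byte-wise XOR parity over equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def recover(surviving_blocks, parity_block):
    """Rebuild the single missing block: XOR of survivors and parity."""
    return parity(list(surviving_blocks) + [parity_block])
```

Since recovery reads only the surviving blocks plus one parity block, a single failed node can be replenished without decoding the full erasure-coded stripe, which is the bandwidth saving the paper exploits.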
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_3", "@cite_6", "@cite_24", "@cite_0", "@cite_15", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "330091898", "41204052", "2004901646", "2119528150", "", "", "1836955865", "2147504831", "2029467255", "2104210894" ], "abstract": [ "To reduce storage overhead, cloud file systems are transitioning from replication to erasure codes. This process has revealed new dimensions on which to evaluate the performance of different coding schemes: the amount of data used in recovery and when performing degraded reads. We present an algorithm that finds the optimal number of codeword symbols needed for recovery for any XOR-based erasure code and produces recovery schedules that use a minimum amount of data. We differentiate popular erasure codes based on this criterion and demonstrate that the differences improve I O performance in practice for the large block sizes used in cloud file systems. Several cloud systems [15, 10] have adopted Reed-Solomon (RS) codes, because of their generality and their ability to tolerate larger numbers of failures. We define a new class of rotated Reed-Solomon codes that perform degraded reads more efficiently than all known codes, but otherwise inherit the reliability and performance properties of Reed-Solomon codes.", "Availability is a storage system property that is both highly desired and yet minimally engineered. While many systems provide mechanisms to improve availability - such as redundancy and failure recovery - how to best configure these mechanisms is typically left to the system manager. Unfortunately, few individuals have the skills to properly manage the trade-offs involved, let alone the time to adapt these decisions to changing conditions. Instead, most systems are configured statically and with only a cursory understanding of how the configuration will impact overall performance or availability. 
While this issue can be problematic even for individual storage arrays, it becomes increasingly important as systems are distributed - and absolutely critical for the wide-area peer-to-peer storage infrastructures being explored. This paper describes the motivation, architecture and implementation for a new peer-to-peer storage system, called TotalRecall, that automates the task of availability management. In particular, the TotalRecall system automatically measures and estimates the availability of its constituent host components, predicts their future availability based on past behavior, calculates the appropriate redundancy mechanisms and repair policies, and delivers user-specified availability while maximizing efficiency.", "Large centralized and networked storage systems have grown to the point where the single fault tolerance provided by RAID-5 is no longer enough. RAID-6 storage systems protect k disks of data with two parity disks so that the system of k + 2 disks may tolerate the failure of any two disks. Coding techniques for RAID-6 systems are varied, but an important class of techniques are those with minimum density, featuring an optimal combination of encoding, decoding and modification complexity. The word size of a code has an impact on both how the code is laid out on each disk's sectors and how large k can be. Word sizes which are powers of two are especially important, since they fit precisely into file system blocks. Minimum density codes exist for many word sizes with the notable exception of eight. This paper fills that gap by describing a new code called The RAID-6 Liber8tion Code for this important word size. The description includes performance properties as well as details of the discovery process.", "Scalable analysis on large data sets has been core to the functions of a number of teams at Facebook - both engineering and non-engineering. 
Apart from ad hoc analysis of data and creation of business intelligence dashboards by analysts across the company, a number of Facebook's site features are also based on analyzing large data sets. These features range from simple reporting applications like Insights for the Facebook Advertisers, to more advanced kinds such as friend recommendations. In order to support this diversity of use cases on the ever increasing amount of data, a flexible infrastructure that scales up in a cost effective manner, is critical. We have leveraged, authored and contributed to a number of open source technologies in order to address these requirements at Facebook. These include Scribe, Hadoop and Hive which together form the cornerstones of the log collection, storage and analytics infrastructure at Facebook. In this paper we will present how these systems have come together and enabled us to implement a data warehouse that stores more than 15PB of data (2.5PB after compression) and loads more than 60TB of new data (10TB after compression) every day. We discuss the motivations behind our design choices, the capabilities of this solution, the challenges that we face in day today operations and future capabilities and improvements that we are working on.", "", "", "Peer-to-peer systems are positioned to take advantage of gains in network bandwidth, storage capacity, and computational resources to provide long-term durable storage infrastructures. In this paper, we quantitatively compare building a distributed storage infrastructure that is self-repairing and resilient to faults using either a replicated system or an erasure-resilient system. We show that systems employing erasure codes have mean time to failures many orders of magnitude higher than replicated systems with similar storage and bandwidth requirements. 
More importantly, erasure-resilient systems use an order of magnitude less bandwidth and storage to provide similar system durability as replicated systems.", "Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.", "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems.", "OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. 
A prototype implementation is currently under development." ] }
1302.5192
2238041365
Erasure codes are an integral part of many distributed storage systems aimed at Big Data, since they provide high fault-tolerance for low overheads. However, traditional erasure codes are inefficient at reading stored data in degraded environments (when nodes might be unavailable) and at replenishing lost data (vital for long-term resilience). Consequently, novel codes optimized to cope with distributed storage system nuances are vigorously being researched. In this paper, we take an engineering alternative, exploring the use of simple and mature techniques - juxtaposing a standard erasure code with RAID4-like parity. We carry out an analytical study to determine the efficacy of this approach over traditional as well as some novel codes. We build upon this study to design CORE, a general storage primitive that we integrate into HDFS. We benchmark this implementation in a proprietary cluster and in EC2. Our experiments show that, compared to traditional erasure codes, CORE uses 50% less bandwidth and is up to 75% faster while recovering a single failed node, while the gains are respectively 15% and 60% for double node failures.
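The RAID4-like parity that the abstract mentions boils down to a single XOR parity block across equal-sized data blocks, which makes single-block repair a cheap XOR of the survivors. A minimal illustrative sketch (not the paper's CORE implementation; block contents are made up for the example):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks plus one RAID4-style parity block.
data = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_blocks(data)

# Simulate losing block 1 and rebuilding it from the parity and survivors:
# XOR-ing all remaining blocks cancels everything except the lost block.
survivors = [data[0], data[2], parity]
recovered = xor_blocks(survivors)
assert recovered == data[1]
```

This is exactly why a single failure can be repaired by reading only the surviving blocks of one parity group, rather than decoding a full erasure-coded stripe.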
With the proliferation of erasure codes in storage-centric applications, there has been a corresponding rise in the exploration of novel erasure codes that cater to the nuances of distributed storage systems. Specific aspects that have been investigated in designing such new coding techniques include: (i) degraded reads @cite_18 @cite_14 , (ii) efficient repair @cite_20 @cite_27 , achieved by either combining standard codes @cite_28 @cite_22 @cite_2 , applying network coding techniques @cite_13 @cite_25 @cite_27 , or designing completely new codes with lower repair fan-in @cite_30 @cite_7 @cite_31 @cite_4 , and (iii) efficient creation of erasure coded redundancy @cite_26 @cite_8 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_4", "@cite_22", "@cite_7", "@cite_26", "@cite_28", "@cite_8", "@cite_27", "@cite_2", "@cite_31", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "2050413942", "1996042140", "154253821", "2949925098", "2030250129", "2117246793", "1920946852", "2159028410", "2086797542", "2058863419", "2071658001", "1995147907", "2951544515", "2111915261", "280113044" ], "abstract": [ "Erasure codes provide a storage efficient alternative to replication based redundancy in (networked) storage systems. They however entail high communication overhead for maintenance, when some of the encoded fragments are lost and need to be replenished. Such overheads arise from the fundamental need to recreate (or keep separately) first a copy of the whole object before any individual encoded fragment can be generated and replenished. There has recently been intense interest to explore alternatives, most prominent ones being regenerating codes (RGC) and hierarchical codes (HC). We propose as an alternative a new family of codes to improve the maintenance process, called self-repairing codes (SRC), with the following salient features: (a) encoded fragments can be repaired directly from other subsets of encoded fragments by downloading less data than the size of the complete object, ensuring that (b) a fragment is repaired from a fixed number of encoded fragments, the number depending only on how many encoded blocks are missing and independent of which specific blocks are missing. These properties allow for not only low communication overhead to recreate a missing fragment, but also independent reconstruction of different missing fragments in parallel, possibly in different parts of the network. The fundamental difference between SRCs and HCs is that different encoded fragments in HCs do not have symmetric roles (equal importance). 
Consequently the number of fragments required to replenish a specific fragment in HCs depends on which specific fragments are missing, and not solely on how many. Likewise, object reconstruction may need different number of fragments depending on which fragments are missing. RGCs apply network coding over (n, k) erasure codes, and provide network information flow based limits on the minimal maintenance overheads. RGCs need to communicate with at least k other nodes to recreate any fragment, and the minimal overhead is achieved if only one fragment is missing, and information is downloaded from all the other n−1 nodes. We analyze the static resilience of SRCs with respect to erasure codes, and observe that SRCs incur marginally larger storage overhead in order to achieve the aforementioned properties. The salient SRC properties naturally translate to low communication overheads for reconstruction of lost fragments, and allow reconstruction with lower latency by facilitating repairs in parallel. These desirable properties make SRC a practical candidate for networked distributed storage systems.", "We design flexible schemes to explore the tradeoffs between storage space and access efficiency in reliable data storage systems. Aiming at this goal, two new classes of erasure-resilient codes are introduced -- Basic Pyramid Codes (BPC) and Generalized Pyramid Codes (GPC). Both schemes require slightly more storage space than conventional schemes, but significantly improve the critical performance of read during failures and unavailability. As a by-product, we establish a necessary matching condition to characterize the limit of failure recovery, that is, unless the matching condition is satisfied, a failure case is impossible to recover. In addition, we define a maximally recoverable (MR) property. For all ERC schemes holding the MR property, the matching condition becomes sufficient, that is, all failure cases satisfying the matching condition are indeed recoverable. 
We show that GPC is the first class of non-MDS schemes holding the MR property.", "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere, at any time, and only pay for what they use and store. To provide durability for that data and to keep the cost of storage low, WAS uses erasure coding. In this paper we introduce a new set of codes for erasure coding called Local Reconstruction Codes (LRC). LRC reduces the number of erasure coding fragments that need to be read when reconstructing data fragments that are offline, while still keeping the storage overhead low. The important benefits of LRC are that it reduces the bandwidth and I Os required for repair reads over prior codes, while still allowing a significant reduction in storage overhead. We describe how LRC is used in WAS to provide low overhead durable storage with consistently low read latencies.", "Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability. This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance. We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2x on the repair disk I O and repair network traffic. 
The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.", "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide up to 15 and even higher fault tolerance; and (c) their storage efficiency can reach up to 80% and even higher. All the advantages make GRID codes more suitable for large-scale storage systems.", "Self-Repairing Codes (SRC) are codes designed to suit the need of coding for distributed networked storage: they not only allow stored data to be recovered even in the presence of node failures, they also provide a repair mechanism where as little as two live nodes can be contacted to regenerate the data of a failed node.
In this paper, we propose a new instance of self-repairing codes, based on constructions of spreads coming from projective geometry. We study some of their properties to demonstrate the suitability of these codes for distributed networked storage.", "Given the vast volume of data that needs to be stored reliably, many data-centers and large-scale file systems have started using erasure codes to achieve reliable storage while keeping the storage overhead low. This has invigorated the research on erasure codes tailor made to achieve different desirable storage system properties such as efficient redundancy replenishment mechanisms, resilience against data corruption, degraded reads, to name a few prominent ones. A problem that has mainly been overlooked until recently is that of how the storage system can be efficiently populated with erasure coded data to start with. In this paper, we will look at two distinct but related scenarios: (i) migration to archival - leveraging on existing replicated data to create an erasure encoded archive, and (ii) data insertion - new data being inserted in the system directly in erasure coded format. We will elaborate on coding techniques to achieve better throughput for data insertion and migration, and in doing so, explore the connection of these techniques with recently proposed locally repairable codes such as self-repairing codes.", "Redundancy is the basic technique to provide reliability in storage systems consisting of multiple components. A redundancy scheme defines how the redundant data are produced and maintained. The simplest redundancy scheme is replication, which however suffers from storage inefficiency. Another approach is erasure coding, which provides the same level of reliability as replication using a significantly smaller amount of storage. When redundant data are lost, they need to be replaced. 
While replacing replicated data consists in a simple copy, it becomes a complex operation with erasure codes: new data are produced performing a coding over some other available data. The amount of data to be read and coded is d times larger than the amount of data produced. This implies that coding has a larger computational and I O cost, which, for distributed storage systems, translates into increased network traffic. Participants of peer-to-peer systems have ample storage and CPU power, but their network bandwidth may be limited. For these reasons existing coding techniques are not suitable for P2P storage. This work explores the design space between replication and the existing erasure codes. We propose and evaluate a new class of erasure codes, called hierarchical codes, which aims at finding a flexible trade-off that allows the reduction of the network traffic due to maintenance without losing the benefits given by traditional codes.", "To achieve reliability in distributed storage systems, data has usually been replicated across different nodes. However the increasing volume of data to be stored has motivated the introduction of erasure codes, a storage efficient alternative to replication, particularly suited for archival in data centers, where old datasets (rarely accessed) can be erasure encoded, while replicas are maintained only for the latest data. Many recent works consider the design of new storage-centric erasure codes for improved repairability. In contrast, this paper addresses the migration from replication to encoding: traditionally erasure coding is an atomic operation in that a single node with the whole object encodes and uploads all the encoded pieces. Although large datasets can be concurrently archived by distributing individual object encodings among different nodes, the network and computing capacity of individual nodes constrain the archival process due to such atomicity. 
We propose a new pipelined coding strategy that distributes the network and computing load of single-object encodings among different nodes, which also speeds up multiple object archival. We further present RapidRAID codes, an explicit family of pipelined erasure codes which provides fast archival without compromising either data reliability or storage overheads. Finally, we provide a real implementation of RapidRAID codes and benchmark its performance using both a cluster of 50 nodes and a set of Amazon EC2 instances. Experiments show that RapidRAID codes reduce a single object's coding time by up to 90%, while when multiple objects are encoded concurrently, the reduction is up to 20%.", "Distributed storage systems often introduce redundancy to increase reliability. When coding is used, the repair problem arises: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. This amounts to a partial recovery of the code, whereas conventional erasure coding focuses on the complete recovery of the information from a subset of encoded packets. The consideration of the repair network traffic gives rise to new design challenges. Recently, network coding techniques have been instrumental in addressing these challenges, establishing that maintenance bandwidth can be reduced by orders of magnitude compared to standard erasure codes. This paper provides an overview of the research results on this topic.", "The problem of replenishing redundancy in erasure code based fault-tolerant storage has received a great deal of attention recently, leading to the design of several new coding techniques [3], aiming at a better repairability. In this paper, we adopt a different point of view, by proposing to code across different already encoded objects to alleviate the repair problem.
We show that the addition of parity pieces - the simplest form of coding - significantly boosts repairability without sacrificing fault-tolerance for equivalent storage overhead. The simplicity of our approach as well as its reliance on time-tested techniques makes it readily deployable.", "One main challenge in the design of distributed storage codes is the Exact Repair Problem: if a node storing encoded information fails, to maintain the same level of reliability, we need to exactly regenerate what was lost in a new node. A major open problem in this area has been the design of codes that i) admit exact and low cost repair of nodes and ii) have arbitrarily high data rates. In this paper, we are interested in the metric of repair locality, which corresponds to the the number of disk accesses required during a node repair. Under this metric we characterize an information theoretic trade-off that binds together locality, code distance, and storage cost per node. We introduce Locally repairable codes (LRCs) which are shown to achieve this tradeoff. The achievability proof uses a “locality aware” flow graph gadget which leads to a randomized code construction. We then present the first explicit construction of LRCs that can achieve arbitrarily high data-rates.", "Erasure correcting codes are widely used to ensure data persistence in distributed storage systems. This paper addresses the simultaneous repair of multiple failures in such codes. We go beyond existing work (i.e., regenerating codes by ) by describing (i) coordinated regenerating codes (also known as cooperative regenerating codes) which support the simultaneous repair of multiple devices, and (ii) adaptive regenerating codes which allow adapting the parameters at each repair. Similarly to regenerating codes by , these codes achieve the optimal tradeoff between storage and the repair bandwidth. 
Based on these extended regenerating codes, we study the impact of lazy repairs applied to regenerating codes and conclude that lazy repairs cannot reduce the costs in term of network bandwidth but allow reducing the disk-related costs (disk bandwidth and disk I O).", "When there are multiple storage node failures in distributed storage system, regenerating them individually is suboptimal as far as repair bandwidth minimization is concerned. The tradeoff between storage and repair bandwidth is derived in the case where data exchange among the newcomers is enabled. The tradeoff curve with cooperation is strictly better than the one without cooperation. An explicit construction of cooperative regenerating code is given.", "The most commonly deployed multi-storage device systems are RAID housed in a single computing unit. The idea of distributing data across multiple disks has been naturally extended to multiple storage nodes which are interconnected over a network and are called Networked Distributed Storage Systems (NDSS). The simplest coding techniques based on replication are often used to ensure redundancy in these systems, but given the sheer volume of data that needs to be stored and the overheads of replication, other coding techniques are being developed. Coding Techniques for Repairability in Networked Distributed Storage Systems (NDSS) surveys coding techniques for NDSS, which aim at achieving (1) fault tolerance efficiently and (2) good repairability characteristics to replenish the lost redundancy, and ensure data durability over time. This is a vibrant are of research and this book is the first overview which presents the background required to understand the problems as well as covering the most important techniques currently being developed. Coding Techniques for Repairability in Networked Distributed Storage Systems is essential reading for all researchers and engineers involved in designing and researching computer storage systems." ] }
1302.5192
2238041365
Erasure codes are an integral part of many distributed storage systems aimed at Big Data, since they provide high fault-tolerance for low overheads. However, traditional erasure codes are inefficient at reading stored data in degraded environments (when nodes might be unavailable) and at replenishing lost data (vital for long-term resilience). Consequently, novel codes optimized to cope with distributed storage system nuances are vigorously being researched. In this paper, we take an engineering alternative, exploring the use of simple and mature techniques - juxtaposing a standard erasure code with RAID4-like parity. We carry out an analytical study to determine the efficacy of this approach over traditional as well as some novel codes. We build upon this study to design CORE, a general storage primitive that we integrate into HDFS. We benchmark this implementation in a proprietary cluster and in EC2. Our experiments show that, compared to traditional erasure codes, CORE uses 50% less bandwidth and is up to 75% faster while recovering a single failed node, while the gains are respectively 15% and 60% for double node failures.
Despite the plethora of works investigating novel erasure codes, most existing distributed file systems using erasure codes do so by adapting traditional erasure codes. Microsoft's Windows Azure Storage @cite_10 is a prominent exception, which uses an optimized version of Pyramid codes @cite_18 called Local Reconstruction Code (LRC) @cite_14 . Some recent academic prototypes - NCFS @cite_5 and @cite_19 (coincidentally, @cite_19 uses the same name, CORE, for collaborative regeneration) - likewise explore the feasibility of applying network coding techniques for repairing lost data. The latter systems do not address the issue of degraded reads. In contrast to these systems, which are based on proprietary and novel erasure coding techniques with significant system design complexity, CORE composes two mature techniques (standard erasure codes and RAID-4-like parity) while achieving very good repairability and degraded-read performance. This makes CORE suitable for ready integration with many block-based storage file systems, and its simple design makes it amenable to third-party reimplementations.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_19", "@cite_5", "@cite_10" ], "mid": [ "1996042140", "154253821", "2161993886", "2112924098", "2029467255" ], "abstract": [ "We design flexible schemes to explore the tradeoffs between storage space and access efficiency in reliable data storage systems. Aiming at this goal, two new classes of erasure-resilient codes are introduced -- Basic Pyramid Codes (BPC) and Generalized Pyramid Codes (GPC). Both schemes require slightly more storage space than conventional schemes, but significantly improve the critical performance of read during failures and unavailability. As a by-product, we establish a necessary matching condition to characterize the limit of failure recovery, that is, unless the matching condition is satisfied, a failure case is impossible to recover. In addition, we define a maximally recoverable (MR) property. For all ERC schemes holding the MR property, the matching condition becomes sufficient, that is, all failure cases satisfying the matching condition are indeed recoverable. We show that GPC is the first class of non-MDS schemes holding the MR property.", "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere, at any time, and only pay for what they use and store. To provide durability for that data and to keep the cost of storage low, WAS uses erasure coding. In this paper we introduce a new set of codes for erasure coding called Local Reconstruction Codes (LRC). LRC reduces the number of erasure coding fragments that need to be read when reconstructing data fragments that are offline, while still keeping the storage overhead low. The important benefits of LRC are that it reduces the bandwidth and I Os required for repair reads over prior codes, while still allowing a significant reduction in storage overhead. 
We describe how LRC is used in WAS to provide low overhead durable storage with consistently low read latencies.", "Data availability is critical in distributed storage systems, especially when node failures are prevalent in real life. A key requirement is to minimize the amount of data transferred among nodes when recovering the lost or unavailable data of failed nodes. This paper explores recovery solutions based on regenerating codes, which are shown to provide fault-tolerant storage and minimum recovery bandwidth. Existing optimal regenerating codes are designed for single node failures. We build a system called CORE, which augments existing optimal regenerating codes to support a general number of failures including single and concurrent failures. We theoretically show that CORE achieves the minimum possible recovery bandwidth for most cases. We implement CORE and evaluate our prototype atop a Hadoop HDFS cluster testbed with up to 20 storage nodes. We demonstrate that our CORE prototype conforms to our theoretical findings and achieves recovery bandwidth saving when compared to the conventional recovery approach based on erasure codes.", "An emerging application of network coding is to improve the robustness of distributed storage. Recent theoretical work has shown that a class of regenerating codes, which are based on the concept of network coding, can improve the data repair performance over traditional storage schemes such as erasure coding. However, there remain open issues regarding the feasibility of deploying regenerating codes in practical storage systems. We present NCFS, a distributed file system that realizes regenerating codes under real network settings. NCFS transparently stripes data across multiple storage nodes, without requiring the storage nodes to coordinate among themselves. It adopts a layered design that allows extensibility, such that different storage schemes can be readily included into NCFS. 
We deploy and evaluate our NCFS prototype in different real network settings. In particular, we use NCFS to conduct an empirical study of different storage schemes, including the traditional erasure codes RAID-5 and RAID-6, and a special family of regenerating codes that are based on E-MBR [16]. Our work provides a practical and extensible platform for realizing theories of regenerating codes in distributed file systems.", "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems." ] }
1302.5328
2571224072
Let D be a set of n pairwise disjoint unit disks in the plane. We describe how to build a data structure for D so that for any point set P containing exactly one point from each disk, we can quickly find the onion decomposition (convex layers) of P. Our data structure can be built in O(n log n) time and has linear size. Given P, we can find its onion decomposition in O(n log k) time, where k is the number of layers. We also provide a matching lower bound. Our solution is based on a recursive space decomposition, combined with a fast algorithm to compute the union of two disjoint onion decompositions.
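The onion decomposition itself is simple to state: repeatedly peel off the convex hull of the remaining points. A naive sketch of that definition (running in roughly O(n^2 log n) time, nothing like the optimal algorithms discussed here; the sample points are made up for illustration):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def onion_layers(points):
    """Peel convex hulls until no points remain; returns the list of layers."""
    remaining = list(points)
    layers = []
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        remaining = [p for p in remaining if p not in hull]
    return layers

pts = [(0, 0), (4, 0), (4, 4), (0, 4),          # square corners: outer layer
       (2, 1), (1, 2), (3, 2), (2, 3),          # diamond: middle layer
       (2, 2)]                                  # centre: innermost layer
layers = onion_layers(pts)
```

On this input the peeling yields three layers, with the four corners outermost and the centre point alone on the last layer; k in the paper's O(n log k) bound is exactly this number of layers.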
The notion of onion decompositions first appears in the computational statistics literature @cite_17 , and several rather brute-force algorithms to compute it have been suggested (see @cite_9 and the references therein). In the computational geometry community, Overmars and van Leeuwen @cite_0 presented the first near-linear time algorithm, requiring @math time. Chazelle @cite_4 improved this to an optimal @math -time algorithm. Nielsen @cite_13 gave an output-sensitive algorithm to compute only the outermost @math layers in @math time, where @math is the number of vertices participating in the outermost @math layers. In @math , Chan @cite_16 described an @math expected-time algorithm.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_0", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2048493433", "2234945639", "", "2624483937", "2073239623", "184591146" ], "abstract": [ "Let S be a set of n points in the Euclidean plane. The convex layers of S are the convex polygons obtained by iterating on the following procedure: compute the convex hull of S and remove its vertices from S . This process of peeling a planar point set is central in the study of robust estimators in statistics. It also provides valuable information on the morphology of a set of sites and has proven to be an efficient preconditioning for range search problems. An optimal algorithm is described for computing the convex layers of S. The algorithm runs in 0 (n n) time and requires O(n) space. Also addressed is the problem of determining the depth of a query point within the convex layers of S , i.e., the number of layers that enclose the query point. This is essentially a planar point location problem, for which optimal solutions are therefore known. Taking advantage of structural properties of the problem, however, a much simpler optimal solution is derived.", "Given a set of n points in the plane, a method is described for constructing a nested sequence of m < n 2 convex polygons based on the points. If the points are a random sample, it is shown that the convex sets share some of the distributional properties of one-dimensional order statistics. An algorithm which requires 0(n3) time and 0(n2) space is described for constructing the sequence of convex sets.", "", "We present a fully dynamic randomized data structure that can answer queries about the convex hull of a set of n points in three dimensions, where insertions take O(log3n) expected amortized time, deletions take O(log6n) expected amortized time, and extreme-point queries take O(log2n) worst-case time. 
This is the first method that guarantees polylogarithmic update and query cost for arbitrary sequences of insertions and deletions, and improves the previous O(n^ε)-time method by Agarwal and Matoušek a decade ago. As a consequence, we obtain similar results for nearest neighbor queries in two dimensions and improved results for numerous fundamental geometric problems (such as levels in three dimensions and dynamic Euclidean minimum spanning trees in the plane).", "We give an output-sensitive algorithm to compute the first k convex or maximal layers in O(n log H_k) time, where H_k is the number of points participating in the first k layers. Computing only the first k layers is interesting in various problems that arise in computational geometry (k-sets and dually k-levels, k-hulls and dually k-belts), pattern recognition, statistics, operations research, etc.", "" ] }
1302.5328
2571224072
Let D be a set of n pairwise disjoint unit disks in the plane. We describe how to build a data structure for D so that for any point set P containing exactly one point from each disk, we can quickly find the onion decomposition (convex layers) of P . Our data structure can be built in O ( n log n ) time and has linear size. Given P , we can find its onion decomposition in O ( n log k ) time, where k is the number of layers. We also provide a matching lower bound. Our solution is based on a recursive space decomposition, combined with a fast algorithm to compute the union of two disjoint onion decompositions.
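The convex-layers (onion) peeling described above can be sketched directly: compute the hull, delete its vertices, and repeat on the remainder. The following is a minimal illustration of the naive loop only, not Chazelle's optimal O(n log n) algorithm or the disk-based data structure of the paper:

```python
# Onion decomposition by repeated convex-hull peeling.
# Naive loop: O(n^2 log n) worst case; shown for illustration only.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def onion_layers(points):
    """Peel hulls until no points remain; returns the list of layers."""
    remaining = list(set(points))
    layers = []
    while remaining:
        hull = convex_hull(remaining)
        layers.append(hull)
        hull_set = set(hull)
        remaining = [p for p in remaining if p not in hull_set]
    return layers
```

For a unit square with one interior point, the first layer is the four corners and the second is the single inner point.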
The framework for preprocessing regions that represent points was first introduced by Held and Mitchell hm-tipps-08 , who show how to store a set of disjoint unit disks in a data structure such that any point set containing one point from each disk can be triangulated in linear time. This result was later extended to arbitrary disjoint regions in the plane by van Kreveld et al. @cite_3 . Löffler and Snoeyink first showed that the Delaunay triangulation (or its dual, the Voronoi diagram) can also be computed in linear time after preprocessing a set of disjoint unit disks @cite_7 . This result was later extended by Buchin et al. @cite_8 , and Devillers gives a practical alternative @cite_10 . Ezra and Mulzer @cite_11 show how to preprocess a set of lines in the plane such that the convex hull of a set of points with one point on each line can be computed faster than @math time.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_3", "@cite_10", "@cite_11" ], "mid": [ "2041094480", "", "1972925770", "2102170762", "1589450195" ], "abstract": [ "An assumption of nearly all algorithms in computational geometry is that the input points are given precisely, so it is interesting to ask what is the value of imprecise information about points. We show how to preprocess a set of n disjoint unit disks in the plane in O(nlogn) time so that if one point per disk is specified with precise coordinates, the Delaunay triangulation can be computed in linear time. From the Delaunay, one can obtain the Gabriel graph and a Euclidean minimum spanning tree; it is interesting to note the roles that these two structures play in our algorithm to quickly compute the Delaunay.", "", "Traditional algorithms in computational geometry assume that the input points are given precisely. In practice, data is usually imprecise, but information about the imprecision is often available. In this context, we investigate what the value of this information is. We show here how to preprocess a set of disjoint regions in the plane of total complexity @math in @math time so that if one point per set is specified with precise coordinates, a triangulation of the points can be computed in linear time. In our solution, we solve another problem which we believe to be of independent interest. Given a triangulation with red and blue vertices, we show how to compute a triangulation of only the blue vertices in linear time.", "We propose a new algorithm that preprocess a set of n disjoint unit disks to be able to compute the Delaunay triangulation in O(n) expected time. 
Conversely to previous similar results, our algorithm is actually faster than a direct computation in O(n log n) time.", "Motivated by the desire to cope with data imprecision Loffler (2009) [31], we study methods for taking advantage of preliminary information about point sets in order to speed up the computation of certain structures associated with them. In particular, we study the following problem: given a set L of n lines in the plane, we wish to preprocess L such that later, upon receiving a set P of n points, each of which lies on a distinct line of L, we can construct the convex hull of P efficiently. We show that in quadratic time and space it is possible to construct a data structure on L that enables us to compute the convex hull of any such point set P in O([email protected](n)log^@?n) expected time. If we further assume that the points are ''oblivious'' with respect to the data structure, the running time improves to O([email protected](n)). The same result holds when L is a set of line segments (in general position). We present several extensions, including a trade-off between space and query time and an output-sensitive algorithm. We also study the ''dual problem'' where we show how to efficiently compute the (=" ] }
1302.4128
1967259538
Recent trends suggest that cognitive radio based wireless networks will be frequency agile and the nodes will be equipped with multiple radios capable of tuning across large swaths of spectrum. The MAC scheduling problem in such networks refers to making intelligent decisions on which communication links to activate at which time instant and over which frequency band. The challenge in designing a low-complexity distributed MAC, that achieves low delay, is posed by two additional dimensions of cognitive radio networks: interference graphs and data rates that are frequency-band dependent, and explosion in number of feasible schedules due to large number of available frequency-bands. In this paper, we propose MAXIMAL-GAIN MAC, a distributed MAC scheduler for frequency agile multi-band networks that simultaneously achieves the following: (i) optimal network-delay scaling with respect to the number of communicating pairs, (ii) low computational complexity of O(log2(maximum degree of the interference graphs)) which is independent of the number of frequency bands, number of radios per node, and overall size of the network, and (iii) robustness, i.e., it can be adapted to a scenario where nodes are not synchronized and control packets could be lost. Our proposed MAC also achieves a throughput provably within a constant fraction (under isotropic propagation) of the maximum throughput. Due to a recent impossibility result, optimal delay-scaling could only be achieved with some amount of throughput loss . Extensive simulations using OMNeT++ network simulator shows that, compared to a multi-band extension of a state-of-art CSMA algorithm (namely, Q-CSMA), our asynchronous algorithm achieves a 2.5x reduction in delay while achieving at least 85 of the maximum achievable throughput. Our MAC algorithms are derived from a novel local search based technique.
The extensive research on MAC scheduling in single-band networks (see @cite_29 for an excellent survey) can be broadly divided into two classes: max-weight computation based and Glauber dynamics based. The first class of approach is inspired by the seminal work @cite_21 , which proves that maximum-weight (MW) scheduling achieves 100% throughput by activating, at every instant, a non-interfering set of links such that the total of the weighted data-rates of the activated links is maximized, where the weight of a link is roughly defined by the number of backlogged packets. Furthermore, a recent work @cite_22 (also see @cite_2 @cite_8 ) has shown that an MW schedule achieves order-optimal delay scaling with the number of links. However, computing an MW schedule is NP-hard, leading to considerable research on approximate MW schedules @cite_16 @cite_30 @cite_31 @cite_7 . However, extending these works to multi-band wireless networks, with distributed implementation and low complexity, is not easy. As discussed in , the second class of approach @cite_14 @cite_12 @cite_17 is motivated by so-called Glauber dynamics based local search. These schemes achieve 100% throughput guarantees in general networks.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_29", "@cite_21", "@cite_2", "@cite_31", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2101641515", "", "2036236852", "", "1995862053", "2123893388", "2105177639", "", "", "1607044975", "2125141803", "2097058227" ], "abstract": [ "A joint congestion control, channel allocation and scheduling algorithm for multi-channel multi-interface multi- hop wireless networks is discussed. The goal of maximizing a utility function of the injected traffic, while guaranteeing queue stability, is defined as an optimization problem where the input traffic intensity, channel loads, interface to channel binding and transmission schedules are jointly optimized by a dynamic algorithm. Due to the inherent NP-Hardness of the scheduling problem, a simple centralized heuristic is used to define a lower bound for the performance of the whole optimization algorithm. The behavior of the algorithm for different numbers of channels, interfaces and traffic flows is shown through simulations.", "", "This paper studies delay properties of the well-known maximum weight scheduling algorithm in wireless ad hoc networks. We consider wireless networks with either one-hop or multi-hop flows. Specifically, this paper shows that the maximum weight scheduling algorithm achieves order optimal delay for wireless ad hoc networks with single-hop traffic flows if the number of activated links in one typical schedule is of the same order as the number of links in the network. This condition would be satisfied for most practical wireless networks. This result holds for both i.i.d and Markov modulated arrival processes with two states. For the multi-hop flow case, we also derive tight backlog bounds in the order sense.", "", "We consider the delay properties of max-weight opportunistic scheduling in a multi-user ON OFF wireless system, such as a multi-user downlink or uplink. 
It is well known that max-weight scheduling stabilizes the network (and hence yields maximum throughput) whenever input rates are inside the network capacity region. We show that when arrival and channel processes are independent, average delay of the max-weight policy is order-optimal, in the sense that it does not grow with the number of network links. While recent queue-grouping algorithms are known to also yield order-optimal delay, this is the first such result for the simpler class of max-weight policies. We then consider multi-rate transmission models and show that average delay in this case typically does increase with the network size due to queues containing a small number of \"residual\" packets.", "This tutorial paper overviews recent developments in optimization-based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned and the main obstacles in extending the work to general resource allocation problems for multihop wireless networks. Towards this end, we show that a clean-slate optimization-based approach to the multihop resource allocation problem naturally results in a \"loosely coupled\" cross-layer solution. That is, the algorithms obtained map to different layers [transport, network, and medium access control physical (MAC PHY)] of the protocol stack, and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex, and thus needs simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the cross-layer framework and describe recently developed distributed algorithms along these lines. 
We conclude by describing a set of open research problems", "The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >", "", "", "A resource allocation model that has within its scope a number of computer and communication network architectures was introduced by Tassiulas and Ephremides (1992) and scheduling methods that achieve maximum throughput were proposed. Those methods require the solution of a complex optimization problem at each packet transmission time and as a result they are not amenable to direct implementations. We propose a class of maximum throughput scheduling policies for the model introduced by Tassiulas and Ephremides that have linear complexity and can lead to practical implementations. They rely on a randomized, iterative algorithm for the solution of the optimization problem arising in the scheduling, in combination with an incremental updating rule. 
The proposed policy is of maximum throughput under some fairly general conditions on the randomized algorithm.", "The popularity of Aloha-like algorithms for resolution of contention between multiple entities accessing common resources is due to their extreme simplicity and distributed nature. Example applications of such algorithms include Ethernet and recently emerging wireless multi-access networks. Despite a long and exciting history of more than four decades, the question of designing an algorithm that is essentially as simple and distributed as Aloha while being efficient has remained unresolved. In this paper, we resolve this question successfully for a network of queues where contention is modeled through independent-set constraints over the network graph. The work by Tassiulas and Ephremides (1992) suggests that an algorithm that schedules queues so that the summation of weight' of scheduled queues is maximized, subject to constraints, is efficient. However, implementing such an algorithm using Aloha-like mechanism has remained a mystery. We design such an algorithm building upon a Metropolis-Hastings sampling mechanism along with selection of weight' as an appropriate function of the queue-size. The key ingredient in establishing the efficiency of the algorithm is a novel adiabatic-like theorem for the underlying queueing network, which may be of general interest in the context of dynamical systems.", "In multihop wireless networks, designing distributed scheduling algorithms to achieve the maximal throughput is a challenging problem because of the complex interference constraints among different links. Traditional maximal-weight scheduling (MWS), although throughput-optimal, is difficult to implement in distributed networks. On the other hand, a distributed greedy protocol similar to IEEE 802.11 does not guarantee the maximal throughput. 
In this paper, we introduce an adaptive carrier sense multiple access (CSMA) scheduling algorithm that can achieve the maximal throughput distributively. Some of the major advantages of the algorithm are that it applies to a very general interference model and that it is simple, distributed, and asynchronous. Furthermore, the algorithm is combined with congestion control to achieve the optimal utility and fairness of competing flows. Simulations verify the effectiveness of the algorithm. Also, the adaptive CSMA scheduling is a modular MAC-layer algorithm that can be combined with various protocols in the transport layer and network layer. Finally, the paper explores some implementation issues in the setting of 802.11 networks." ] }
1302.4128
1967259538
Recent trends suggest that cognitive radio based wireless networks will be frequency agile and the nodes will be equipped with multiple radios capable of tuning across large swaths of spectrum. The MAC scheduling problem in such networks refers to making intelligent decisions on which communication links to activate at which time instant and over which frequency band. The challenge in designing a low-complexity distributed MAC, that achieves low delay, is posed by two additional dimensions of cognitive radio networks: interference graphs and data rates that are frequency-band dependent, and explosion in number of feasible schedules due to large number of available frequency-bands. In this paper, we propose MAXIMAL-GAIN MAC, a distributed MAC scheduler for frequency agile multi-band networks that simultaneously achieves the following: (i) optimal network-delay scaling with respect to the number of communicating pairs, (ii) low computational complexity of O(log2(maximum degree of the interference graphs)) which is independent of the number of frequency bands, number of radios per node, and overall size of the network, and (iii) robustness, i.e., it can be adapted to a scenario where nodes are not synchronized and control packets could be lost. Our proposed MAC also achieves a throughput provably within a constant fraction (under isotropic propagation) of the maximum throughput. Due to a recent impossibility result, optimal delay-scaling could only be achieved with some amount of throughput loss . Extensive simulations using OMNeT++ network simulator shows that, compared to a multi-band extension of a state-of-art CSMA algorithm (namely, Q-CSMA), our asynchronous algorithm achieves a 2.5x reduction in delay while achieving at least 85 of the maximum achievable throughput. Our MAC algorithms are derived from a novel local search based technique.
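The max-weight rule at the heart of the first class of schedulers above can be made concrete on a toy conflict graph. The link names, weights, and exhaustive search below are purely illustrative (the general problem is NP-hard, which is exactly what motivates the approximate and distributed schemes surveyed):

```python
from itertools import combinations

# Max-weight scheduling sketch: activate the non-interfering set of links
# maximizing total weight (weight ~ queue length x data rate). Brute force
# over all subsets; exponential in the number of links, toy-sized only.

def is_independent(links, conflicts):
    """True iff no two links in the set interfere with each other."""
    return all((u, v) not in conflicts and (v, u) not in conflicts
               for u, v in combinations(links, 2))

def max_weight_schedule(weights, conflicts):
    """Exhaustively find the non-interfering link set of maximum weight."""
    names = list(weights)
    best, best_w = set(), 0
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            if is_independent(subset, conflicts):
                w = sum(weights[link] for link in subset)
                if w > best_w:
                    best, best_w = set(subset), w
    return best, best_w
```

On a three-link path A-B-C where B conflicts with both neighbours, the schedule {A, C} beats activating B alone whenever their combined weight is larger.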
Some examples of work that focuses on low-delay algorithms in single-band networks are @cite_0 @cite_32 . For single-band networks, low-delay CSMA-based algorithms for special classes of interference graphs have been proposed: @cite_3 proposes a delay-optimal CSMA-based MAC for polynomial-growth networks (i.e., the number of nodes @math hops away from a node is polynomial in @math ), and @cite_20 proposes a polynomial-delay Q-CSMA-based algorithm for bounded-degree networks.
{ "cite_N": [ "@cite_0", "@cite_20", "@cite_32", "@cite_3" ], "mid": [ "2076426337", "", "2102224707", "2047087115" ], "abstract": [ "In a wireless network, a sophisticated algorithm is required to schedule simultaneous wireless transmissions while satisfying interference constraint that two neighboring nodes can not transmit simultaneously. The scheduling algorithm need to be excellent in performance while being simple and distributed so as to be implementable. The result of Tassiulas and Ephremides (1992) imply that the algorithm, scheduling transmissions of nodes in the 'maximum weight independent set' (MWIS) of network graph, is throughput optimal. However, algorithmically the problem of finding MWIS is known to be NP-hard and hard to approximate. This raises the following questions: is it even possible to obtain throughput optimal simple, distributed scheduling algorithm? if yes, is it possible to minimize delay of such an algorithm? Motivated by these questions, we first provide a distributed throughput optimal algorithm for any network topology. However, this algorithm may induce exponentially large delay. To overcome this, we present an order optimal delay algorithm for any non-expanding network topology. Networks deployed in geographic area, like wireless networks, are likely to be of this type. Our algorithm is based on a novel distributed graph partitioning scheme which may be of interest in its own right. Our algorithm for non-expanding graph takes O (n) total message exchanges or O(l) message exchanges per node to compute a schedule.", "", "We consider the problem of designing an online scheduling scheme for a multi-hop wireless packet network with arbitrary topology and operating under arbitrary scheduling constraints. The objective is to design a scheme that achieves high throughput and low delay simultaneously. 
We propose a scheduling scheme that - for networks operating under primary interference constraints - guarantees a per-flow end-to-end packet delay bound of 5dj (1-ρj), at a factor 5 loss of throughput, where dj is the path length (number of hops) of flow j and ρj is the effective loading along the route of flow j. Clearly, dj is a universal lower bound on end-to-end packet delay for flow j. Thus, our result is essentially optimal. To the best of our knowledge, our result is the first one to show that it is possible to achieve a per-flow end-to-end delay bound of O(# of hops) in a constrained network. Designing such a scheme comprises two related subproblems: Global Scheduling and Local Scheduling. Global Scheduling involves determining the set of links that will be simultaneously active, without violating the scheduling constraints. While local scheduling involves determining the packets that will be transferred across active edges. We design a local scheduling scheme by adapting the Preemptive Last-In-First-Out (PL) scheme, applied for quasi-reversible continuous time networks, to an unconstrained discrete-time network. A global scheduling scheme will be obtained by using stable marriage algorithms to emulate the unconstrained network with the constrained wireless network. Our scheme can be easily extended to a network operating under general scheduling constraints, such as secondary interference constraints, with the same delay bound and a loss of throughput that depends on scheduling constraints through an intriguing \"sub-graph covering\" property.", "In the past year or so, an exciting progress has led to throughput optimal design of CSMA-based algorithms for wireless networks. However, such an algorithm suffers from very poor delay performance. A recent work suggests that it is impossible to design a CSMA-like simple algorithm that is throughput optimal and induces low delay for any wireless network. 
However, wireless networks arising in practice are formed by nodes placed, possibly arbitrarily, in some geographic area. In this paper, we propose a CSMA algorithm with per-node average-delay bounded by a constant, independent of the network size, when the network has geometry (precisely, polynomial growth structure) that is present in any practical wireless network. Two novel features of our algorithm, crucial for its performance, are (a) choice of access probabilities as an appropriate function of queue-sizes, and (b) use of local network topological structures. Essentially, our algorithm is a queue-based CSMA with a minor difference that at each time instance a very small fraction of frozen nodes do not execute CSMA. Somewhat surprisingly, appropriate selection of such frozen nodes, in a distributed manner, lead to the delay optimal performance." ] }
1302.4128
1967259538
Recent trends suggest that cognitive radio based wireless networks will be frequency agile and the nodes will be equipped with multiple radios capable of tuning across large swaths of spectrum. The MAC scheduling problem in such networks refers to making intelligent decisions on which communication links to activate at which time instant and over which frequency band. The challenge in designing a low-complexity distributed MAC, that achieves low delay, is posed by two additional dimensions of cognitive radio networks: interference graphs and data rates that are frequency-band dependent, and explosion in number of feasible schedules due to large number of available frequency-bands. In this paper, we propose MAXIMAL-GAIN MAC, a distributed MAC scheduler for frequency agile multi-band networks that simultaneously achieves the following: (i) optimal network-delay scaling with respect to the number of communicating pairs, (ii) low computational complexity of O(log2(maximum degree of the interference graphs)) which is independent of the number of frequency bands, number of radios per node, and overall size of the network, and (iii) robustness, i.e., it can be adapted to a scenario where nodes are not synchronized and control packets could be lost. Our proposed MAC also achieves a throughput provably within a constant fraction (under isotropic propagation) of the maximum throughput. Due to a recent impossibility result, optimal delay-scaling could only be achieved with some amount of throughput loss . Extensive simulations using OMNeT++ network simulator shows that, compared to a multi-band extension of a state-of-art CSMA algorithm (namely, Q-CSMA), our asynchronous algorithm achieves a 2.5x reduction in delay while achieving at least 85 of the maximum achievable throughput. Our MAC algorithms are derived from a novel local search based technique.
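The queue-based CSMA schemes discussed above are instances of Glauber dynamics over independent sets: in each slot one link wakes up and, if no conflicting neighbour is active, turns ON with probability exp(w)/(1+exp(w)), where w is a function of its queue size. The topology and weights below are hypothetical; the sketch only demonstrates the single-site update and that the dynamics always maintain a valid (non-interfering) schedule:

```python
import math
import random

# Glauber-dynamics CSMA sketch over an interference graph.
# weights[link] would in practice be a function of the link's queue size.

def glauber_step(state, weights, neighbours, rng):
    """One Glauber update: wake a uniformly random link and resample it."""
    link = rng.choice(sorted(state))
    if any(state[n] for n in neighbours[link]):
        state[link] = 0                      # blocked: a neighbour is ON
    else:
        w = weights[link]
        p_on = math.exp(w) / (1.0 + math.exp(w))
        state[link] = 1 if rng.random() < p_on else 0
    return state
```

Since only one link updates per slot and a link turns ON only when all its neighbours are OFF, the active set remains an independent set at every step; heavier weights tilt the stationary distribution toward heavy schedules, which is the intuition behind throughput optimality.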
The inspiration for our MAC comes from the application of local-search-based techniques to developing approximation algorithms for NP-hard optimization problems. We refer the reader to @cite_35 for an excellent survey.
{ "cite_N": [ "@cite_35" ], "mid": [ "1494455882" ], "abstract": [ "In this chapter we review the main results known on local search algorithms with worst case guarantees. We consider classical combinatorial optimization problems: satisfiability problems, traveling salesman and quadratic assignment problems, set packing and set covering problems, maximum independent set, maximum cut, several facility location related problems and finally several scheduling problems. A replica placement problem in a distributed file systems is also considered as an example of the use of a local search algorithm in a distributed environment. For each problem we have provided the neighborhoods used along with approximation results. Proofs when too technical are omitted, but often sketch of proofs are provided." ] }
1302.3894
1530180322
A generic framework for the solution of PDE-constrained optimisation problems based on the FEniCS system is presented. Its main features are an intuitive mathematical interface, a high degree of automation, and an efficient implementation of the generated adjoint model. The framework is based upon the extension of a domain-specific language for variational problems to cleanly express complex optimisation problems in a compact, high-level syntax. For example, optimisation problems constrained by the time-dependent Navier-Stokes equations can be written in tens of lines of code. Based on this high-level representation, the framework derives the associated adjoint equations in the same domain-specific language, and uses the FEniCS code generation technology to emit parallel optimised low-level C++ code for the solution of the forward and adjoint systems. The functional and gradient information so computed is then passed to the optimisation algorithm to update the parameter values. This approach works both for steady-state as well as transient, and for linear as well as nonlinear governing PDEs and a wide range of functionals and control parameters. We demonstrate the applicability and efficiency of this approach on classical textbook optimisation problems and advanced examples.
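As a concrete instance of the local-search paradigm with a worst-case guarantee that the survey above covers, the classical MAX-CUT local search flips any vertex whose move increases the cut; at a local optimum every vertex has at least half of its incident edges crossing, so the cut is at least |E|/2, a 1/2-approximation. A minimal sketch:

```python
# Local search for MAX-CUT: flip vertices between sides while any single
# flip increases the cut. Terminates at a local optimum with cut >= |E|/2.

def local_search_max_cut(vertices, edges):
    side = {v: 0 for v in vertices}

    def gain(v):
        # cut edges gained minus cut edges lost if v switches sides
        g = 0
        for u, w in edges:
            if v == u:
                g += 1 if side[v] == side[w] else -1
            elif v == w:
                g += 1 if side[v] == side[u] else -1
        return g

    improved = True
    while improved:
        improved = False
        for v in vertices:
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    cut = sum(1 for u, w in edges if side[u] != side[w])
    return side, cut
```

On a triangle the procedure reaches the optimum cut of 2, and in general the guarantee 2·cut ≥ |E| holds at termination.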
A closely related project is developed by waanders2002 , with the goal of creating an optimisation framework based on the finite-element software Sundance @cite_1 . Sundance is similar to our framework in that it also operates on variational forms: in particular, it can automatically differentiate and adjoin individual variational forms. However, the built-in automatic adjoint derivation of Sundance does not currently extend to cases where the forward model consists of a sequence of variational problems, which is typically the case for time-dependent problems.
{ "cite_N": [ "@cite_1" ], "mid": [ "1985302366" ], "abstract": [ "Sundance is a package in the Trilinos suite designed to provide high-level components for the development of high-performance PDE simulators with built-in capabilities for PDE-constrained optimization. We review the implications of PDE-constrained optimization on simulator design requirements, then survey the architecture of the Sundance problem specification components. These components allow immediate extension of a forward simulator for use in an optimization context. We show examples of the use of these components to develop full-space and reduced-space codes for linear and nonlinear PDE-constrained inverse problems." ] }
1302.3894
1530180322
A generic framework for the solution of PDE-constrained optimisation problems based on the FEniCS system is presented. Its main features are an intuitive mathematical interface, a high degree of automation, and an efficient implementation of the generated adjoint model. The framework is based upon the extension of a domain-specific language for variational problems to cleanly express complex optimisation problems in a compact, high-level syntax. For example, optimisation problems constrained by the time-dependent Navier-Stokes equations can be written in tens of lines of code. Based on this high-level representation, the framework derives the associated adjoint equations in the same domain-specific language, and uses the FEniCS code generation technology to emit parallel optimised low-level C++ code for the solution of the forward and adjoint systems. The functional and gradient information so computed is then passed to the optimisation algorithm to update the parameter values. This approach works both for steady-state as well as transient, and for linear as well as nonlinear governing PDEs and a wide range of functionals and control parameters. We demonstrate the applicability and efficiency of this approach on classical textbook optimisation problems and advanced examples.
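The economy that automatic adjoint derivation buys can be seen already for a steady linear model: with forward problem A u = b and functional J(u) = cᵀu, a single adjoint solve Aᵀλ = c yields the entire gradient dJ/db = λ, however many entries b has. The toy matrices below are illustrative only (plain NumPy, not the FEniCS or Sundance API):

```python
import numpy as np

# Discrete-adjoint sketch: one adjoint solve gives the whole gradient dJ/db.

def gradient_via_adjoint(A, b, c):
    """Forward model A u = b, functional J = c^T u; returns (J, dJ/db)."""
    u = np.linalg.solve(A, b)        # forward solve
    lam = np.linalg.solve(A.T, c)    # adjoint solve: A^T lam = c
    return c @ u, lam                # lam equals the gradient dJ/db

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])
J, grad = gradient_via_adjoint(A, b, c)

# sanity check against forward finite differences (one solve per entry of b)
eps = 1e-6
fd = np.array([(c @ np.linalg.solve(A, b + eps * np.eye(2)[i]) - J) / eps
               for i in range(2)])
assert np.allclose(grad, fd, atol=1e-4)
```

The finite-difference check needs one extra forward solve per control parameter, whereas the adjoint approach needs exactly one extra solve in total; this is the scaling argument behind adjoint-based PDE-constrained optimisation.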
Finally, the PROPT @cite_27 , ACADO @cite_24 and CasADi @cite_15 toolkits are optimisation frameworks with similar design goals, but focussed on ordinary differential equations and differential-algebraic equations instead of PDEs.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_15" ], "mid": [ "1977164425", "", "1967134278" ], "abstract": [ "In this paper the software environment and algorithm collection ACADO Toolkit is presented, which implements tools for automatic control and dynamic optimization. It provides a general framework for using a great variety of algorithms for direct optimal control, including model predictive control as well as state and parameter estimation. The ACADO Toolkit is implemented as a self-contained C++ code, while the object-oriented design allows for convenient coupling of existing optimization packages and for extending it with user-written optimization routines. We discuss details of the software design of the ACADO Toolkit 1.0 and describe its main software modules. Along with that we highlight a couple of algorithmic features, in particular its functionality to handle symbolic expressions. The user-friendly syntax of the ACADO Toolkit to set up optimization problems is illustrated with two tutorial examples: an optimal control and a parameter estimation problem. Copyright © 2010 John Wiley & Sons, Ltd.", "", "In its basic form, the reverse mode of computational differentiation yields the gradient of a scalar-valued function at a cost that is a small multiple of the computational work needed to evaluate the function itself. However, the corresponding memory requirement is proportional to the run-time of the evaluation program. Therefore, the practical applicability of the reverse mode in its original formulation is limited despite the availability of ever larger memory systems. This observation leads to the development of checkpointing schedules to reduce the storage requirements. This article presents the function revolve, which generates checkpointing schedules that are provably optimal with regard to a primary and a secondary criterion. This routine is intended to be used as an explicit “controller” for running a time-dependent applications program." ] }
1302.3085
2009253859
This paper studies the problem of self-organizing heterogeneous LTE systems. We propose a model that jointly considers several important characteristics of heterogeneous LTE system, including the usage of orthogonal frequency division multiple access (OFDMA), the frequency-selective fading for each link, the interference among different links, and the different transmission capabilities of different types of base stations. We also consider the cost of energy by taking into account the power consumption, including that for wireless transmission and that for operation, of base stations and the price of energy. Based on this model, we aim to propose a distributed protocol that improves the spectrum efficiency of the system, which is measured in terms of the weighted proportional fairness among the throughputs of clients, and reduces the cost of energy. We identify that there are several important components involved in this problem. We propose distributed strategies for each of these components. Each of the proposed strategies requires small computational and communicational overheads. Moreover, the interactions between components are also considered in the proposed strategies. Hence, these strategies result in a solution that jointly considers all factors of heterogeneous LTE systems. Simulation results also show that our proposed strategies achieve much better performance than existing ones.
There has been some work on self-organized wireless systems. Chen and Baccelli @cite_5 have proposed a distributed algorithm for the self-optimization of radio resources that aims to achieve potential delay fairness. @cite_7 has proposed a distributed protocol for load balancing among base stations. Borst, Markakis, and Saniee @cite_1 study the problem of utility maximization for self-organizing networks with arbitrary utility functions. Lopez- @cite_4 , Hou and Gupta @cite_2 , and Hou and Chen @cite_6 have considered the problems of jointly optimizing different components in self-organizing networks under various system models. These works do not take energy efficiency into consideration.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_1", "@cite_6", "@cite_2", "@cite_5" ], "mid": [ "2113643803", "1978895175", "2550072282", "2038091796", "2169551554", "1983156373" ], "abstract": [ "This article investigates the problem of the allocation of modulation and coding, subcarriers and power to users in LTE. The proposed model achieves inter-cell interference mitigation through the dynamic and distributed self-organization of cells. Therefore, there is no need for any a prior frequency planning. Moreover, a two-level decomposition method able to find near optimal solutions is proposed to solve the optimization problem. Finally, simulation results show that compared to classic reuse schemes the proposed approach is able to pack more users into the same bandwidth, decreasing the probability of user outage.", "With the rapid growth of mobile communications, deployment and maintenance of cellular mobile networks are becoming more and more complex, time consuming, and expensive. In order to meet the requirements of network operators and service providers, the telecommunication industry and international standardization bodies have recently paid intensive attention to the research and development of self-organizing networks. In this article we first introduce both the market and technological perspectives for SONs. Then we focus on the self-configuration procedure and illustrate a self-booting mechanism for a newly added evolved NodeB without a dedicated backhaul interface. Finally, mobility load balancing as one of the most important selfoptimization issues for Long Term Evolution networks is discussed, and a distributed MLB algorithm with low handover cost is proposed and evaluated.", "We consider a wireless OFDMA cellular network and address the problem of jointly allocating power to frequencies (subbands) and assigning users to cells, for a variety of elastic and inelastic services. 
The goal is to maximize the sum of the users' throughput utility functions subject to various constraints, such as minimum throughput requirements. The problem naturally fits into a Network Utility Maximization (NUM) framework with a mixture of concave (e.g., data rates) and nonconcave (e.g., voice video streaming) utilities. The hardness of this nonconvex, mixed integer program prohibits the use of standard convex optimization algorithms, or efficient combinatorial approximation techniques. We devise a randomized algorithm for the said NUM problem, whose proof of asymptotic optimality is derived from the classical framework of interacting particle systems, via a judiciously selected neighborhood structure. The proposed algorithm is highly distributed, asynchronous, requires limited computational effort per node iteration, and yields provable convergence in the limit. Several numerical experiments are presented to illustrate the convergence speed and performance of the proposed method.", "We consider the problem of LTE network self organization and optimization of resource allocation. One particular challenge for LTE systems is that, by applying OFDMA, a transmission may use multiple resource blocks scheduled over the frequency and time. There are three key components involved in the resource allocation and network optimization: resource block scheduling, power control, and client association. We propose a distributed protocol that aims to achieve weighted proportional fairness (WPF) among clients by jointly consider them. The cross-layer design includes: (i) an optimal online policy for resource block scheduling, (ii) a heuristic for transmit power control, and (iii) a selfish strategy for client association. The proposed scheme only requires limited local information exchange and thus can be easily implemented for large networks. 
Simulation results have shown its effectiveness in both the system throughput and user fairness.", "A challenging problem in multi-band multi-cell self-organized wireless systems, such as multi-channel Wi-Fi networks, femto pico cells in 3G 4G cellular networks, and more recent wireless networks over TV white spaces, is of distributed resource allocation. This involves four components: channel selection, client association, channel access, and client scheduling. In this paper, we present a unified framework for jointly addressing the four components with the global system objective of maximizing the clients throughput in a proportionally fair manner. Our formulation allows a natural dissociation of the problem into two sub-parts. We show that the first part, involving channel access and client scheduling, is convex and derive a distributed adaptation procedure for achieving Pareto-optimal solution. For the second part, involving channel selection and client association, we develop a Gibbs-sampler based approach for local adaptation to achieve the global objective, as well as derive fast greedy algorithms from it that achieve good solutions.", "In this work, we develop mathematical and algorithmic tools for the self-optimization of mobile cellular networks. Scalable algorithms which are based on local measurements and do not require heavy coordination among the wireless devices are proposed. We focus on the optimization of transmit power and of user association. The method is applicable to both joint and separate optimizations. The global utility minimized is linked to potential delay fairness. The distributed algorithm adaptively updates the system parameters and achieves global optimality by measuring SINR and interference. It is built on Gibbs' sampler and offers a unified framework that can be easily reused for different purposes. Simulation results demonstrate the effectiveness of the algorithm." ] }
1302.3085
2009253859
This paper studies the problem of self-organizing heterogeneous LTE systems. We propose a model that jointly considers several important characteristics of heterogeneous LTE system, including the usage of orthogonal frequency division multiple access (OFDMA), the frequency-selective fading for each link, the interference among different links, and the different transmission capabilities of different types of base stations. We also consider the cost of energy by taking into account the power consumption, including that for wireless transmission and that for operation, of base stations and the price of energy. Based on this model, we aim to propose a distributed protocol that improves the spectrum efficiency of the system, which is measured in terms of the weighted proportional fairness among the throughputs of clients, and reduces the cost of energy. We identify that there are several important components involved in this problem. We propose distributed strategies for each of these components. Each of the proposed strategies requires small computational and communicational overheads. Moreover, the interactions between components are also considered in the proposed strategies. Hence, these strategies result in a solution that jointly considers all factors of heterogeneous LTE systems. Simulation results also show that our proposed strategies achieve much better performance than existing ones.
On the other hand, techniques for improving the energy efficiency of cellular radio have recently attracted much attention. @cite_10 has investigated the power consumption of various types of base stations. @cite_0 has discussed various techniques for improving energy efficiency. @cite_20 has proposed putting base stations into sleep mode when network traffic is light to save energy. @cite_19 , @cite_22 , and Gong, Zhou, and Niu @cite_18 have proposed various policies that allocate clients to only a few base stations. As a result, the many base stations left without clients can be put into sleep mode to save energy. However, these studies require knowledge of the traffic of each client, and cannot be applied to scenarios where clients' traffic is elastic. @cite_21 has studied the trade-off between spectrum efficiency and energy efficiency. @cite_13 and @cite_3 have provided extensive surveys on energy-efficient wireless communications. However, these works do not consider the interference and interactions between base stations, and are hence not applicable to self-organizing networks.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_22", "@cite_21", "@cite_3", "@cite_0", "@cite_19", "@cite_10", "@cite_20" ], "mid": [ "1974327731", "2069443635", "2788355865", "2119952207", "2154319347", "2104286156", "2110519532", "2112170521", "2145282441" ], "abstract": [ "Since battery technology has not progressed as rapidly as semiconductor technology, power efficiency has become increasingly important in wireless networking, in addition to the traditional quality and performance measures, such as bandwidth, throughput, and fairness. Energy-efficient design requires a cross layer approach as power consumption is affected by all aspects of system design, ranging from silicon to applications. This article presents a comprehensive overview of recent advances in cross-layer design for energy-efficient wireless communications. We particularly focus on a system-based approaches toward energy optimal transmission and resource management across time, frequency, and spatial domains. Details related to energy-efficient hardware implementations are also covered. Copyright © 2008 John Wiley & Sons, Ltd.", "The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirement and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of state space as well as that of action space. 
Simulations demonstrate that, with the proposed algorithm, the active BS pattern well meets the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.", "The explosive development of ICT (information and communication technology) industry has emerged as one of the major sources of world energy consumption. Therefore, this paper concerns about the BS (base station) energy saving issue, for most energy consumption of the communication network comes from the BSs and the core network. Particularly, we consider dynamically turning off certain BSs when the network traffic is low. Centralized and decentralized implementations are investigated. Simulations demonstrate the energy efficiency of the proposed algorithms and the tradeoff between energy saving and coverage guarantee.", "Traditional mobile wireless network mainly design focuses on ubiquitous access and large capacity. However, as energy saving and environmental protection become global demands and inevitable trends, wireless researchers and engineers need to shift their focus to energy-efficiency-oriented design, that is, green radio. In this article, we propose a framework for green radio research and integrate the fundamental issues that are currently scattered. The skeleton of the framework consists of four fundamental tradeoffs: deployment efficiency-energy efficiency, spectrum efficiency-energy efficiency, bandwidth-power, and delay-power. With the help of the four fundamental trade-offs, we demonstrate that key network performance cost indicators are all strung together.", "With explosive growth of high-data-rate applications, more and more energy is consumed in wireless networks to guarantee quality of service. 
Therefore, energy-efficient communications have been paid increasing attention under the background of limited energy resource and environmental- friendly transmission behaviors. In this article, basic concepts of energy-efficient communications are first introduced and then existing fundamental works and advanced techniques for energy efficiency are summarized, including information-theoretic analysis, OFDMA networks, MIMO techniques, relay transmission, and resource allocation for signaling. Some valuable topics in energy-efficient design are also identified for future research.", "The last ten years have witnessed explosive growth in the number of subscribers for mobile telephony. The technology has evolved from early voice only services to today's mobile wireless broadband (Internet) data delivery. The increasing use of wireless connectivity via smartphones and laptops has led to an exponential surge in network traffic. Meeting traffic demands will cause a significant increase in operator energy cost as an enlarged network of radio base stations will be needed to support mobile broadband effectively and maintain operational competitiveness. This article explores approaches that will assist in delivering significant energy efficiency gains in future wireless networks, easing the burden on network operators. It investigates three approaches to saving energy in future wireless networks. These include sleep mode techniques to switch off radio transmissions whenever possible; femtocell and relay deployments; and multiple antenna wireless systems. The impact of these approaches on achieving energy-efficient wireless communication systems is discussed.", "Energy-efficiency, one of the major design goals in wireless cellular networks, has received much attention lately, due to increased awareness of environmental and economic issues for network operators. 
In this paper, we develop a theoretical framework for BS energy saving that encompasses dynamic BS operation and the related problem of user association together. Specifically, we formulate a total cost minimization that allows for a flexible tradeoff between flow-level performance and energy consumption. For the user association problem, we propose an optimal energy-efficient user association policy and further present a distributed implementation with provable convergence. For the BS operation problem (i.e., BS switching on off), which is a challenging combinatorial problem, we propose simple greedy-on and greedy-off algorithms that are inspired by the mathematical background of submodularity maximization problem. Moreover, we propose other heuristic algorithms based on the distances between BSs or the utilizations of BSs that do not impose any additional signaling overhead and thus are easy to implement in practice. Extensive simulations under various practical configurations demonstrate that the proposed user association and BS operation algorithms can significantly reduce energy consumption.", "In order to quantify the energy efficiency of a wireless network, the power consumption of the entire system needs to be captured. In this article, the necessary extensions with respect to existing performance evaluation frameworks are discussed. The most important addenda of the proposed energy efficiency evaluation framework (E3F) are a sophisticated power model for various base station types, as well as large-scale long-term traffic models. The BS power model maps the RF output power radiated at the antenna elements to the total supply power of a BS site. The proposed traffic model emulates the spatial distribution of the traffic demands over large geographical regions, including urban and rural areas, as well as temporal variations between peak and off-peak hours. 
Finally, the E3F is applied to quantify the energy efficiency of the downlink of a 3GPP LTE radio access network.", "In this article, we consider the adoption of sleep modes for the base stations of a cellular access network, focusing on the design of base station sleep and wake-up transients.. We discuss the main issues arising with this approach, and we focus on the design of base station sleep and wake-up transients, also known as cell wilting and blossoming. The performance of the proposed procedures is evaluated in a realistic test scenario, and the results show that sleep and wake-up transients are short, lasting at most 30 seconds." ] }
1302.3145
2950087375
The Asymmetric Traveling Salesperson Path Problem (ATSPP) is one where, given an asymmetric metric space @math with specified vertices s and t, the goal is to find an s-t path of minimum length that passes through all the vertices in V. This problem is closely related to the Asymmetric TSP (ATSP), which seeks to find a tour (instead of an @math path) visiting all the nodes: for ATSP, a @math -approximation guarantee implies an @math -approximation for ATSPP. However, no such connection is known for the integrality gaps of the linear programming relaxations for these problems: the current-best approximation algorithm for ATSPP is @math , whereas the best bound on the integrality gap of the natural LP relaxation (the subtour elimination LP) for ATSPP is @math . In this paper, we close this gap, and improve the current best bound on the integrality gap from @math to @math . The resulting algorithm uses the structure of narrow @math - @math cuts in the LP solution to construct a (random) tree spanning tree that can be cheaply augmented to contain an Eulerian @math - @math walk. We also build on a result of Oveis Gharan and Saberi and show a strong form of Goddyn's conjecture about thin spanning trees implies the integrality gap of the subtour elimination LP relaxation for ATSPP is bounded by a constant. Finally, we give a simpler family of instances showing the integrality gap of this LP is at least 2.
The first non-trivial approximation for ATSPP was an @math -approximation by Lam and Newman @cite_2 . This was improved to @math by Chekuri and Pál @cite_6 , and the constant was further improved in @cite_1 . The paper @cite_1 also showed that ATSP and ATSPP are approximable within a constant factor of each other. All these results are combinatorial and do not bound the integrality gap of ATSPP. A bound of @math on the integrality gap of ATSPP was given by Nagarajan and Ravi @cite_10 , and was improved to @math by Friggstad, Salavatipour and Svitkina @cite_3 . Note that there is still no known result that relates the integrality gaps of the ATSP and ATSPP problems in a black-box fashion.
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_6", "@cite_2", "@cite_10" ], "mid": [ "2137717103", "2950391451", "330417094", "1976231547", "1492330513" ], "abstract": [ "In metric asymmetric traveling salesperson problems the input is a complete directed graph in which edge weights satisfy the triangle inequality, and one is required to find a minimum weight walk that visits all vertices. In the asymmetric traveling salesperson problem (ATSP) the walk is required to be cyclic. In asymmetric traveling salesperson path problem (ATSPP), the walk is required to start at vertex sand to end at vertex t. We improve the approximation ratio for ATSP from @math to @math . This improvement is based on a modification of the algorithm of [JACM 05] that achieved the previous best approximation ratio. We also show a reduction from ATSPP to ATSP that loses a factor of at most 2 + i¾?in the approximation ratio, where i¾?> 0 can be chosen to be arbitrarily small, and the running time of the reduction is polynomial for every fixed i¾?. Combined with our improved approximation ratio for ATSP, this establishes an approximation ratio of @math for ATSPP, improving over the previous best ratio of 4log e ni¾? 2.76log 2 nof Chekuri and Pal [Approx 2006].", "We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n log log n). 
However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1 2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem.", "Compounds corresponding to the formula: I wherein Z represents a radical which completes a condensed aromatic ring system; R1 represents an n-valent aliphatic or aromatic radical; R2 represents H, alkyl or aryl, R3 represents one or more radicals to control the diffusion properties and the activation pH; and n represents 1 or 2, are suitable ED precursor compounds for use in color-photographic recording materials. They are preferably used in a combination with reducible dye-releasers. They are also suitable as so-called scavengers.", "In the traveling salesman path problem, we are given a set of cities, traveling costs between city pairs and fixed source and destination cities. The objective is to find a minimum cost path from the source to destination visiting all cities exactly once. In this paper, we study polyhedral and combinatorial properties of a variant we call the traveling salesman walk problem, in which the objective is to find a minimum cost walk from the source to destination visiting all cities at least once. We first characterize traveling salesman walk perfect graphs, graphs for which the convex hull of incidence vectors of traveling salesman walks can be described by linear inequalities. 
We show these graphs have a description by way of forbidden minors and also characterize them constructively. We also address the asymmetric traveling salesman path problem (ATSPP) and give a factor @math -approximation algorithm for this problem.", "We study the directed minimum latency problem: given an n-vertex asymmetric metric (V,d) with a root vertex ri¾? V, find a spanning path originating at rthat minimizes the sum of latencies at all vertices (the latency of any vertex vi¾? Vis the distance from rto valong the path). This problem has been well-studied on symmetric metrics, and the best known approximation guarantee is 3.59 [3]. For any @math O( n^ ^3 ) @math =O( n )$, which implies (for any fixed i¾?> 0) a polynomial time O(n1 2 + i¾?)-approximation algorithm for directed latency. In the special case of metrics induced by shortest-paths in an unweighted directed graph, we give an O(log2n) approximation algorithm. As a consequence, we also obtain an O(log2n) approximation algorithm for minimizing the weighted completion time in no-wait permutation flowshop scheduling. We note that even in unweighted directed graphs, the directed latency problem is at least as hard to approximate as the well-studied asymmetric traveling salesman problem, for which the best known approximation guarantee is O(logn)." ] }
1302.2570
2054573207
Do unique node identifiers help in deciding whether a network G has a prescribed property P? We study this question in the context of distributed local decision, where the objective is to decide whether G has property P by having each node run a constant-time distributed decision algorithm. In a yes-instance all nodes should output yes, while in a no-instance at least one node should output no. Recently, (OPODIS 2012) gave several conditions under which identifiers are not needed, and they conjectured that identifiers are not needed in any decision problem. In the present work, we disprove the conjecture. More than that, we analyse two critical variations of the underlying model of distributed computing: (B): the size of the identifiers is bounded by a function of the size of the input network, (¬B): the identifiers are unbounded, (C): the nodes run a computable algorithm, (¬C): the nodes can compute any, possibly uncomputable function. While it is easy to see that under (¬B, ¬C) identifiers are not needed, we show that under all other combinations there are properties that can be decided locally if and only if identifiers are present.
The classes @math , @math and @math defined by are the distributed analogues of the classes @math , @math and @math , respectively. The paper @cite_11 provides structural results, develops a notion of local reduction, and establishes completeness results. One of the main results is that, for a large class of languages, called hereditary languages, there exists a sharp threshold for randomisation, above which randomisation does not help.
{ "cite_N": [ "@cite_11" ], "mid": [ "2067202579" ], "abstract": [ "A central theme in distributed network algorithms concerns understanding and coping with the issue of locality . Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for . In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard @math model of computation and define @math (for local decision ) as the class of decision problems that can be solved in @math communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class @math , containing all languages for which there exists a randomized algorithm that runs in @math rounds, accepts correct instances with probability at least @math and rejects incorrect ones with probability at least @math . We show that @math is a threshold for the containment of @math in @math . More precisely, we show that there exists a language that does not belong to @math for any @math but does belong to @math for any @math such that @math . On the other hand, we show that, restricted to hereditary languages, @math , for any function @math and any @math such that @math . In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. 
Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide languages . Finally, we introduce the notion of local reduction, and establish some completeness results." ] }
1302.2570
2054573207
Do unique node identifiers help in deciding whether a network G has a prescribed property P? We study this question in the context of distributed local decision, where the objective is to decide whether G has property P by having each node run a constant-time distributed decision algorithm. In a yes-instance all nodes should output yes, while in a no-instance at least one node should output no. Recently, (OPODIS 2012) gave several conditions under which identifiers are not needed, and they conjectured that identifiers are not needed in any decision problem. In the present work, we disprove the conjecture. More than that, we analyse two critical variations of the underlying model of distributed computing: (B): the size of the identifiers is bounded by a function of the size of the input network, (¬B): the identifiers are unbounded, (C): the nodes run a computable algorithm, (¬C): the nodes can compute any, possibly uncomputable function. While it is easy to see that under (¬B, ¬C) identifiers are not needed, we show that under all other combinations there are properties that can be decided locally if and only if identifiers are present.
Several pieces of positive evidence were given supporting this conjecture @cite_7 . Specifically, it was shown that @math holds for hereditary languages and for languages defined on paths with a finite set of input values. Moreover, it was shown that equality holds in the non-deterministic setting, i.e., @math .
{ "cite_N": [ "@cite_7" ], "mid": [ "2123081773" ], "abstract": [ "The issue of identifiers is crucial in distributed computing. Informally, identities are used for tackling two of the fundamental difficulties that are inherent to deterministic distributed computing, namely: (1) symmetry breaking, and (2) topological information gathering. In the context of local computation, i.e., when nodes can gather information only from nodes at bounded distances, some insight regarding the role of identities has been established. For instance, it was shown that, for large classes of construction problems, the role of the identities can be rather small. However, for the identities to play no role, some other kinds of mechanisms for breaking symmetry must be employed, such as edge-labeling or sense of direction. When it comes to local distributed decision problems, the specification of the decision task does not seem to involve symmetry breaking. Therefore, it is expected that, assuming nodes can gather sufficient information about their neighborhood, one could get rid of the identities, without employing extra mechanisms for breaking symmetry. We tackle this question in the framework of the ( LOCAL ) model." ] }
1302.2570
2054573207
Do unique node identifiers help in deciding whether a network G has a prescribed property P? We study this question in the context of distributed local decision, where the objective is to decide whether G has property P by having each node run a constant-time distributed decision algorithm. In a yes-instance all nodes should output yes, while in a no-instance at least one node should output no. Recently, (OPODIS 2012) gave several conditions under which identifiers are not needed, and they conjectured that identifiers are not needed in any decision problem. In the present work, we disprove the conjecture. More than that, we analyse two critical variations of the underlying model of distributed computing: (B): the size of the identifiers is bounded by a function of the size of the input network, (¬B): the identifiers are unbounded, (C): the nodes run a computable algorithm, (¬C): the nodes can compute any, possibly uncomputable function. While it is easy to see that under (¬B, ¬C) identifiers are not needed, we show that under all other combinations there are properties that can be decided locally if and only if identifiers are present.
Therefore, to ask meaningful questions related to the role of unique identifiers in construction tasks, we usually compare the @math model with models that retain some symmetry-breaking information---two such models are @math and @math . In the @math model @cite_2 , the output of an algorithm is not allowed to change if we reassign the identifiers while preserving their relative order. In the @math model @cite_5 , there is an ordering on the incident edges, and all edges carry an orientation. Note that the @math model is stronger than the Id-oblivious model: in the Id-oblivious model, @math for any two assignments @math , while in the @math model we only require this for assignments @math that satisfy @math . This difference makes the @math model much stronger.
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "2108918420", "2017345786" ], "abstract": [ "The purpose of this paper is a study of computation that can be done locally in a distributed network. By locally we mean within time (or distance) independent of the size of the network. In particular we are interested in algorithms that are robust, i.e., perform well even if the underlying graph is not stable and links continuously fail and come-up. We introduce and study the happy coloring and orientation problem and show that it yields a robust local solution to the (d,m)-dining philosophers problem of Naor and Stockmeyer [17]. This problem is similar to the usual dining philosophers problem, except that each philosopher has access to d forks but needs only m of them to eat. We give a robust local solution if m ≤ [d/2] (necessity of this inequality for any local solution was known previously). Two other problems we investigate are: (1) the amount of initial symmetry-breaking needed to solve certain problems locally (for example, our algorithms need considerably less symmetry-breaking than having a unique ID on each node), and (2) the single-step color reduction problem: given a coloring with c colors of the nodes of a graph, what is the smallest number of colors c' such that every node can recolor itself with one of c' colors as a function of its immediate neighborhood only.", "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. 
Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs)." ] }
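The order-invariance notion in the last result can be made concrete with a toy example of ours (not from the cited papers): a local rule that accesses identifiers only through comparisons produces the same output under any order-preserving reassignment of the IDs.

```python
def is_local_max(my_id, neighbor_ids):
    # The decision uses identifiers only through comparisons,
    # so it is order-invariant by construction.
    return all(my_id > v for v in neighbor_ids)

def rank_remap(ids):
    # An order-preserving reassignment: map each identifier to its rank.
    return {v: r for r, v in enumerate(sorted(set(ids)))}

my_id, neighbors = 17, [3, 42, 8]          # a node's radius-1 view (toy data)
remap = rank_remap([my_id] + neighbors)
out_before = is_local_max(my_id, neighbors)
out_after = is_local_max(remap[my_id], [remap[v] for v in neighbors])
```

Since `17 > 42` fails both before and after remapping, `out_before` and `out_after` agree, as order-invariance demands.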
1302.2570
2054573207
Do unique node identifiers help in deciding whether a network G has a prescribed property P? We study this question in the context of distributed local decision, where the objective is to decide whether G has property P by having each node run a constant-time distributed decision algorithm. In a yes-instance all nodes should output yes, while in a no-instance at least one node should output no. Recently, (OPODIS 2012) gave several conditions under which identifiers are not needed, and they conjectured that identifiers are not needed in any decision problem. In the present work, we disprove the conjecture. More than that, we analyse two critical variations of the underlying model of distributed computing: (B): the size of the identifiers is bounded by a function of the size of the input network, (¬B): the identifiers are unbounded, (C): the nodes run a computable algorithm, (¬C): the nodes can compute any, possibly uncomputable function. While it is easy to see that under (¬B, ¬C) identifiers are not needed, we show that under all other combinations there are properties that can be decided locally if and only if identifiers are present.
It turns out that in decision problems, unique identifiers are helpful for one reason, and for one reason only: obtaining an estimate on @math , the number of nodes. Indeed, by prior work we already know that @math holds assuming that every node knows an upper bound on the total number of nodes in the input graph @cite_7 .
{ "cite_N": [ "@cite_7" ], "mid": [ "2123081773" ], "abstract": [ "The issue of identifiers is crucial in distributed computing. Informally, identities are used for tackling two of the fundamental difficulties that are inherent to deterministic distributed computing, namely: (1) symmetry breaking, and (2) topological information gathering. In the context of local computation, i.e., when nodes can gather information only from nodes at bounded distances, some insight regarding the role of identities has been established. For instance, it was shown that, for large classes of construction problems, the role of the identities can be rather small. However, for the identities to play no role, some other kinds of mechanisms for breaking symmetry must be employed, such as edge-labeling or sense of direction. When it comes to local distributed decision problems, the specification of the decision task does not seem to involve symmetry breaking. Therefore, it is expected that, assuming nodes can gather sufficient information about their neighborhood, one could get rid of the identities, without employing extra mechanisms for breaking symmetry. We tackle this question in the framework of the ( LOCAL ) model." ] }
1302.2712
2119047398
We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
We use the following notation: Let @math be a @math MR image in vectorized form. Let @math , @math , be the undersampled Fourier encoding matrix and @math represent the sub-sampled set of @math -space measurements. The goal is to estimate @math from the small fraction of @math -space measurements @math . For dictionary learning, let @math be the @math th patch extraction matrix. That is, @math is a @math matrix of all zeros except for a one in each row that extracts a vectorized @math patch from the image, @math for @math . We work with overlapping image patches with a shift of one pixel and allow a patch to wrap around the image at the boundaries for mathematical convenience @cite_16 @cite_35 .
{ "cite_N": [ "@cite_35", "@cite_16" ], "mid": [ "2107214962", "2093621384" ], "abstract": [ "In this paper, we consider denoising of image sequences that are corrupted by zero-mean additive white Gaussian noise. Relative to single image denoising techniques, denoising of sequences aims to also utilize the temporal dimension. This assists in getting both faster algorithms and better output quality. This paper focuses on utilizing sparse and redundant representations for image sequence denoising. In the single image setting, the K-SVD algorithm is used to train a sparsifying dictionary for the corrupted image. This paper generalizes the above algorithm by offering several extensions: i) the atoms used are 3-D; ii) the dictionary is propagated from one frame to the next, reducing the number of required iterations; and iii) averaging is done on patches in both spatial and temporal neighboring locations. These modifications lead to substantial benefits in complexity and denoising performance, compared to simply running the single image algorithm sequentially. The algorithm's performance is experimentally compared to several state-of-the-art algorithms, demonstrating comparable or favorable results.", "Compressed sensing has shown great potential in reducing data acquisition time in magnetic resonance imaging (MRI). In traditional compressed sensing MRI methods, an image is reconstructed by enforcing its sparse representation with respect to a preconstructed basis or dictionary. In this paper, patch-based directional wavelets are proposed to reconstruct images from undersampled k-space data. A parameter of patch-based directional wavelets, indicating the geometric direction of each patch, is trained from the reconstructed image using conventional compressed sensing MRI methods and incorporated into the sparsifying transform to provide the sparse representation for the image to be reconstructed. 
A reconstruction formulation is proposed and solved via an efficient alternating direction algorithm. Simulation results on phantom and in vivo data indicate that the proposed method outperforms conventional compressed sensing MRI methods in preserving the edges and suppressing the noise. Besides, the proposed method is not sensitive to the initial image when training directions." ] }
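The wrap-around convention above has a convenient algebraic consequence: with unit-shift overlapping patches and periodic boundaries, every pixel belongs to exactly b² patches, i.e. the Gram sum Σ_i R_iᵀ R_i equals b²·I. A pure-Python sketch (our illustration, not the paper's code):

```python
def extract_patch(img, r, c, b):
    """The b x b patch with top-left corner (r, c); indices wrap at the borders."""
    H, W = len(img), len(img[0])
    return [[img[(r + i) % H][(c + j) % W] for j in range(b)] for i in range(b)]

def scatter_all_patches(img, b):
    """Scatter-add every overlapping patch back (computes sum_i R_i^T R_i x)."""
    H, W = len(img), len(img[0])
    acc = [[0.0] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            patch = extract_patch(img, r, c, b)
            for i in range(b):
                for j in range(b):
                    acc[(r + i) % H][(c + j) % W] += patch[i][j]
    return acc

img = [[float(4 * r + c) for c in range(4)] for r in range(4)]  # toy 4x4 image
acc = scatter_all_patches(img, b=2)
# With wrap-around, each pixel is covered by exactly b*b = 4 patches,
# so acc equals 4 * img; this is the "mathematical convenience" of wrapping.
```

Without the wrap-around, border pixels would be covered by fewer patches and the Gram sum would no longer be a multiple of the identity.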
1302.2157
1569358759
In this paper we consider learning in the passive setting but with a slight modification. We assume that the target expected loss, also referred to as the target risk, is provided in advance to the learner as prior knowledge. Unlike most studies in learning theory that only incorporate prior knowledge into the generalization bounds, we are able to explicitly utilize the target risk in the learning process. Our analysis reveals a surprising result on the sample complexity of learning: by exploiting the target risk in the learning algorithm, we show that when the loss function is both strongly convex and smooth, the sample complexity reduces to @math , an exponential improvement compared to the sample complexity @math for learning with strongly convex loss functions. Furthermore, our proof is constructive and is based on a computationally efficient stochastic optimization algorithm for such settings, which demonstrates that the proposed algorithm is practically useful.
The proposed algorithm is closely related to the recent works that stated @math is the optimal convergence rate for stochastic optimization when the objective function is strongly convex @cite_12 @cite_27 @cite_8 . In contrast, the proposed algorithm is able to achieve a geometric convergence rate for a target optimization error. Similar to the previous argument, our result does not contradict the lower bound given in @cite_27 because of the knowledge of a feasible optimization error. Moreover, in contrast to the multistage algorithm in @cite_27 where the size of stages increases exponentially, in our algorithm, the size of each stage is fixed to be a constant.
{ "cite_N": [ "@cite_27", "@cite_12", "@cite_8" ], "mid": [ "2154682027", "", "2951196414" ], "abstract": [ "We give novel algorithms for stochastic strongly-convex optimization in the gradient oracle model which return an O(1/T)-approximate solution after T iterations. The first algorithm is deterministic, and achieves this rate via gradient updates and historical averaging. The second algorithm is randomized, and is based on pure gradient steps with a random step size. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.", "", "Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization problems which arise in machine learning. For strongly convex problems, its convergence rate was known to be O(log(T)/T), by running SGD for T iterations and returning the average point. However, recent results showed that using a different algorithm, one can get an optimal O(1/T) rate. This might lead one to believe that standard SGD is suboptimal, and maybe should even be replaced as a method of choice. In this paper, we investigate the optimality of SGD in a stochastic setting. We show that for smooth problems, the algorithm attains the optimal O(1/T) rate. However, for non-smooth problems, the convergence rate with averaging might really be Ω(log(T)/T), and this is not just an artifact of the analysis. 
On the flip side, we show that a simple modification of the averaging step suffices to recover the O(1/T) rate, and no other change of the algorithm is necessary. We also present experimental results which support our findings, and point out open problems." ] }
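The fixed-length-stage idea contrasted above can be illustrated on a one-dimensional toy problem: SGD whose step size is halved between stages of constant size. This is a generic sketch of stagewise step-size reduction, not the paper's algorithm; the objective and all parameters are invented for illustration.

```python
import random

def stagewise_sgd(grad_sample, w0, eta0, stage_len, n_stages):
    """SGD run in stages of fixed length; the step size halves at each stage."""
    w = w0
    for k in range(n_stages):
        eta = eta0 / 2 ** k
        for _ in range(stage_len):
            w -= eta * grad_sample(w)
    return w

random.seed(0)
# Toy objective f(w) = 0.5 * E[(w - xi)^2] with xi ~ N(0, 0.1); optimum w* = 0.
noisy_grad = lambda w: w - random.gauss(0.0, 0.1)
w_hat = stagewise_sgd(noisy_grad, w0=5.0, eta0=0.5, stage_len=100, n_stages=10)
```

Each stage contracts the distance to the optimum geometrically while the shrinking step size drives down the noise floor, which is the intuition behind stopping once a known target error is reached.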
1302.1741
2168868689
The Tardos scheme is a well-known traitor tracing scheme to protect copyrighted content against collusion attacks. The original scheme contained some suboptimal design choices, such as the score function and the distribution function used for generating the biases. previously showed that a symbol-symmetric score function leads to shorter codes, while obtained the optimal distribution functions for arbitrary coalition sizes. Later, showed that combining these results leads to even shorter codes when the coalition size is small. We extend their analysis to the case of large coalitions and prove that these optimal distributions converge to the arcsine distribution, thus showing that the arcsine distribution is asymptotically optimal in the symmetric Tardos scheme. We also present a new, practical alternative to the discrete distributions of and give a comparison of the estimated lengths of the fingerprinting codes for each of these distributions.
In 2003, Tardos @cite_6 showed that the optimal length of such codes (i.e., the number of segments needed) is of the order @math with @math , where @math is an upper bound on the probability of catching one or more innocent users. Note that @math , commonly used for an upper bound on the probability of not catching any pirates, does not appear in the leading term of the code length for most practical values of @math and @math . In the same paper, Tardos gave a construction of a scheme with @math , which is widely known as the Tardos scheme. This shows that @math is optimal, and that the Tardos scheme has the optimal order code length.
{ "cite_N": [ "@cite_6" ], "mid": [ "2031722321" ], "abstract": [ "We construct binary codes for fingerprinting digital documents. Our codes for n users that are ε-secure against c pirates have length O(c^2 log(n/ε)). This improves the codes proposed by Boneh and Shaw [1998] whose length is approximately the square of this length. The improvement carries over to works using the Boneh--Shaw code as a primitive, for example, to the dynamic traitor tracing scheme of Tassa [2005]. By proving matching lower bounds we establish that the length of our codes is best within a constant factor for reasonable error probabilities. This lower bound generalizes the bound found independently by [2003] that applies to a limited class of codes. Our results also imply that randomized fingerprint codes over a binary alphabet are as powerful as over an arbitrary alphabet and the equal strength of two distinct models for fingerprinting." ] }
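The code lengths discussed in this and the following records all take the form m = A·c²·ln(n/ε₁), differing only in the constant A. A small calculator, with constants as we read them from the surrounding text (A = 100 for Tardos' original scheme; A = π²/2 ≈ 4.93 for the optimized symmetric scheme, our interpretation of the "4.93%" figure; A = 2 for the information-theoretic lower bound):

```python
import math

def code_length(c, n, eps1, A):
    """m = A * c^2 * ln(n / eps1), where A is the scheme's length constant."""
    return A * c * c * math.log(n / eps1)

CONSTANTS = {
    "Tardos 2003 (original)": 100.0,
    "symmetric Tardos (asymptotic optimum)": math.pi ** 2 / 2,  # ~4.93
    "information-theoretic lower bound": 2.0,
}
# Example: c = 20 pirates, n = 10^6 users, eps1 = 10^-3.
lengths = {name: code_length(c=20, n=10**6, eps1=1e-3, A=A)
           for name, A in CONSTANTS.items()}
```

The quadratic dependence on c means doubling the coalition size quadruples the code length, for every constant A.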
1302.1741
2168868689
The Tardos scheme is a well-known traitor tracing scheme to protect copyrighted content against collusion attacks. The original scheme contained some suboptimal design choices, such as the score function and the distribution function used for generating the biases. previously showed that a symbol-symmetric score function leads to shorter codes, while obtained the optimal distribution functions for arbitrary coalition sizes. Later, showed that combining these results leads to even shorter codes when the coalition size is small. We extend their analysis to the case of large coalitions and prove that these optimal distributions converge to the arcsine distribution, thus showing that the arcsine distribution is asymptotically optimal in the symmetric Tardos scheme. We also present a new, practical alternative to the discrete distributions of and give a comparison of the estimated lengths of the fingerprinting codes for each of these distributions.
Over the last ten years, improvements to the Tardos scheme have led to a significant decrease in the code length parameter @math . We previously showed @cite_7 that combining the symbol-symmetric score function of Škorić et al @cite_4 with the improved analysis of Blayer and Tassa @cite_11 leads to an asymptotic code length constant of @math for large @math . For small coalitions, @cite_3 showed that even smaller values @math can be obtained by combining the symmetric score function with the optimized, discrete distribution functions previously obtained by @cite_0 . For large @math , this led to an asymptotic code length constant of about @math .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "1977832049", "2145102561", "1968422945", "1541113101", "2087879325" ], "abstract": [ "Fingerprinting provides a means of tracing unauthorized redistribution of digital data by individually marking each authorized copy with a personalized serial number. In order to prevent a group of users from collectively escaping identification, collusion-secure fingerprinting codes have been proposed. In this paper, we introduce a new construction of a collusion-secure fingerprinting code which is similar to a recent construction by Tardos but achieves shorter code lengths and allows for codes over arbitrary alphabets. We present results for 'symmetric' coalition strategies. For binary alphabets and a false accusation probability @math , a code length of @math symbols is provably sufficient, for large c_0, to withstand collusion attacks of up to c_0 colluders. This improves Tardos' construction by a factor of 10. Furthermore, invoking the Central Limit Theorem in the case of sufficiently large c_0, we show that even a code length of @math is adequate. Assuming the restricted digit model, the code length can be further reduced by moving from a binary alphabet to a q-ary alphabet. Numerical results show that a reduction of 35% is achievable for q = 3 and 80% for q = 10.", "For the Tardos traitor tracing scheme, we show that by combining the symbol-symmetric accusation function of ? with the improved analysis of Blayer and Tassa we get further improvements. Our construction gives codes that are up to four times shorter than Blayer and Tassa's, and up to two times shorter than the codes from ? Asymptotically, we achieve the theoretical optimal codelength for Tardos' distribution function and the symmetric score function. 
For large coalitions, our codelengths are asymptotically about 4.93% of Tardos' original codelengths, which also improves upon results from", "It has been proven that the code lengths of Tardos's collusion-secure fingerprinting codes are of theoretically minimal order with respect to the number of adversarial users (pirates). However, the code lengths can be further reduced as some preceding studies have revealed. In this article we improve a recent discrete variant of Tardos's codes, and give a security proof of our codes under an assumption weaker than the original Marking Assumption. Our analysis shows that our codes have significantly shorter lengths than Tardos's codes. For example, when c = 8, our code length is about 4.94% of Tardos's code in a practical setting and about 4.62% in a certain limit case. Our code lengths for large c are asymptotically about 5.35% of Tardos's codes.", "It is known that Tardos's collusion-secure probabilistic fingerprinting code (Tardos code) has length of theoretically minimal order. However, the Tardos code uses a certain continuous probability distribution, which means that a huge amount of extra memory is required in practical use. An essential solution is to replace the continuous distributions with finite discrete ones, preserving the security. In this paper, we determine the optimal finite distribution for the purpose of reducing memory amount; the required extra memory is reduced to less than 1/32 of the original in some practical setting. Moreover, the code length is also reduced (to, asymptotically, about 20.6% of the Tardos code), and some further practical problems such as approximation errors are also considered.", "We study the Tardos probabilistic fingerprinting scheme and show that its codeword length may be shortened by a factor of approximately 4. We achieve this by retracing Tardos' analysis of the scheme and extracting from it all constants that were arbitrarily selected. 
We replace those constants with parameters and derive a set of inequalities that those parameters must satisfy so that the desired security properties of the scheme still hold. Then we look for a solution of those inequalities in which the parameter that governs the codeword length is minimal. A further reduction in the codeword length is achieved by decoupling the error probability of falsely accusing innocent users from the error probability of missing all colluding pirates. Finally, we simulate the Tardos scheme and show that, in practice, one may use codewords that are shorter than those in the original Tardos scheme by a factor of at least 16." ] }
1302.1741
2168868689
The Tardos scheme is a well-known traitor tracing scheme to protect copyrighted content against collusion attacks. The original scheme contained some suboptimal design choices, such as the score function and the distribution function used for generating the biases. previously showed that a symbol-symmetric score function leads to shorter codes, while obtained the optimal distribution functions for arbitrary coalition sizes. Later, showed that combining these results leads to even shorter codes when the coalition size is small. We extend their analysis to the case of large coalitions and prove that these optimal distributions converge to the arcsine distribution, thus showing that the arcsine distribution is asymptotically optimal in the symmetric Tardos scheme. We also present a new, practical alternative to the discrete distributions of and give a comparison of the estimated lengths of the fingerprinting codes for each of these distributions.
Besides practical constructions of traitor tracing schemes, some papers have also studied absolute lower bounds on the asymptotic code lengths that any secure traitor tracing scheme must satisfy. Huang and Moulin @cite_8 and Amiri and Tardos @cite_2 showed that for large @math , the code length constant of any scheme must satisfy @math , but no practical constructions of schemes achieving this lower bound are known. Huang and Moulin did show that this lower bound is tight, and that in the related min-max game between the traitors and the tracer, the optimal pirate strategy is to use the interleaving attack, and the optimal tracing strategy is to use a Tardos-like code with biases distributed according to the arcsine distribution. Note that this does not say anything about specific schemes such as the Tardos scheme, for which the related min-max games are different and may lead to a completely different optimal pirate strategy and tracing strategy.
{ "cite_N": [ "@cite_2", "@cite_8" ], "mid": [ "1560505065", "2083594366" ], "abstract": [ "Including a unique code in each copy of a distributed document is an effective way of fighting intellectual piracy. Codes designed for this purpose that are secure against collusion attacks are called fingerprinting codes. In this paper we consider fingerprinting with the marking assumption and design codes that achieve much higher rates than previous constructions. We conjecture that these codes attain the maximum possible rate (the fingerprinting capacity) for any fixed number of pirates. We prove new upper bounds for the fingerprinting capacity that are not far from the rate of our codes. On the downside the accusation algorithm of our codes are much slower than those of earlier codes. We introduce the novel model of weak fingerprinting codes where one pirate should be caught only if the identity of all other pirates are revealed. We construct fingerprinting codes in this model with improved rates but our upper bound on the rate still applies. In fact, these improved codes achieve the fingerprinting capacity of the weak model by a recent upper bound. Using analytic techniques we compare the rates of our codes in the standard model and the rates of the optimal codes in the weak model. To our surprise these rates asymptotically agree, that is, their ratio tends to 1 as t goes to infinity. Although we cannot prove that each one of our codes in the standard model achieves the fingerprinting capacity, this proves that asymptotically they do.", "We study a fingerprinting game in which the number of colluders and the collusion channel are unknown. The encoder embeds fingerprints into a host sequence and provides the decoder with the capability to trace back pirated copies to the colluders. Fingerprinting capacity has recently been derived as the limit value of a sequence of maximin games with mutual information as their payoff functions. 
However, these games generally do not admit saddle-point solutions and are very hard to solve numerically. Here, under the so-called Boneh-Shaw marking assumption, we reformulate the capacity as the value of a single two-person zero-sum game, and show that it is achieved by a saddle-point solution. If the maximal coalition size is k and the fingerprinting alphabet is binary, we show that capacity decays quadratically with k. Furthermore, we prove rigorously that the asymptotic capacity is 1/(2k^2 ln 2) and we confirm our earlier conjecture that Tardos' choice of the arcsine distribution asymptotically maximizes the mutual information payoff function while the interleaving attack minimizes it. Along with the asymptotics, numerical solutions to the game for small k are also presented." ] }
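The two optimal strategies identified above are easy to simulate: biases drawn from the arcsine density f(p) = 1/(π√(p(1−p))) (sample p = sin²θ with θ uniform on (0, π/2)), the interleaving attack, and the symbol-symmetric score function of Škorić et al. A self-contained toy simulation (our sketch; real schemes use a calibrated cutoff and accusation threshold):

```python
import math
import random

random.seed(1)
m, c, n_users = 5000, 3, 20   # code length, coalition size, total users
t = 1e-6                      # small cutoff on the biases, as in Tardos-style schemes

# Bias p_i per position from the arcsine density 1/(pi*sqrt(p(1-p))):
# if theta is uniform on (0, pi/2), then p = sin(theta)^2 is arcsine-distributed.
p = [min(max(math.sin(random.uniform(0.0, math.pi / 2)) ** 2, t), 1 - t)
     for _ in range(m)]

# Each user's codeword: X[j][i] ~ Bernoulli(p_i), independently.
X = [[1 if random.random() < p[i] else 0 for i in range(m)] for j in range(n_users)]

# Interleaving attack: users 0..c-1 collude; in every position they output
# the symbol of a uniformly chosen colluder.
y = [X[random.randrange(c)][i] for i in range(m)]

def score(j):
    """Symbol-symmetric (Skoric-style) accusation score of user j."""
    s = 0.0
    for i in range(m):
        g1 = math.sqrt((1 - p[i]) / p[i])   # reward for agreeing on a '1'
        g0 = -math.sqrt(p[i] / (1 - p[i]))  # penalty for disagreeing
        if y[i] == 1:
            s += g1 if X[j][i] == 1 else g0
        else:
            s += -g0 if X[j][i] == 0 else -g1
    return s

scores = [score(j) for j in range(n_users)]
```

In this setup a colluder's expected score is about 2m/(cπ), while an innocent user's score has mean 0 and standard deviation √m, so guilty and innocent scores separate cleanly for moderate m.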
1302.1611
1831477278
We study the stochastic multi-armed bandit problem when one knows the value @math of an optimal arm, as well as a positive lower bound on the smallest positive gap @math . We propose a new randomized policy that attains a regret uniformly bounded over time in this setting. We also prove several lower bounds, which show in particular that bounded regret is not possible if one only knows @math , and bounded regret of order @math is not possible if one only knows @math .
The knowledge of @math was also exploited in other works. For instance, in @cite_12 , the authors showed that knowing @math allows for policies with provably better concentration properties. Their policies are based on sequential likelihood ratio tests for @math vs. @math (assuming Gaussian distributions to compute the likelihoods). To some extent it was to be expected that the knowledge of @math leads to an improved regret, as it partially removes the need for exploration: if one arm has empirical performance close to @math , one can be confident that this is the best arm, without worrying that it could appear best only because we have not yet explored the other options enough. However, the problem turns out to be more subtle than this simple argument suggests, and it underlines the fact that one needs more than the knowledge of @math in order to have a bounded regret with optimal scaling in @math . Indeed, Theorem implies that the sole knowledge of @math does not warrant the bounded property for the rescaled regret @math .
{ "cite_N": [ "@cite_12" ], "mid": [ "2167875775" ], "abstract": [ "This paper studies the deviations of the regret in a stochastic multi-armed bandit problem.When the total number of plays n is known beforehand by the agent, (2009) exhibit a policy such that with probability at least 1-1 n, the regret of the policy is of order log n. They have also shown that such a property is not shared by the popular ucb1 policy of (2002). This work first answers an open question: it extends this negative result to any anytime policy. The second contribution of this paper is to design anytime robust policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the different arms." ] }
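The argument above, that an arm whose empirical mean is close to @math can safely be exploited, can be turned into a toy explore-then-commit rule. This is our illustrative sketch, far cruder than the paper's randomized policy; the arm means, horizon, and thresholds are invented for the demo.

```python
import random

random.seed(3)
mu = [0.9, 0.5]            # unknown Bernoulli reward means; arm 0 is optimal
mu_star, eps = 0.9, 0.4    # known: value of the best arm, lower bound on the gap
m0, T = 100, 2000          # minimum samples before testing an arm; horizon

counts, sums, committed = [0, 0], [0.0, 0.0], None
for t in range(T):
    arm = committed if committed is not None else t % 2   # round-robin until commit
    r = 1.0 if random.random() < mu[arm] else 0.0
    counts[arm] += 1
    sums[arm] += r
    # Knowing mu* (and eps) removes the need for further exploration: once a
    # well-sampled arm's empirical mean is within eps/2 of mu*, it must be optimal.
    if committed is None and counts[arm] >= m0 and sums[arm] / counts[arm] >= mu_star - eps / 2:
        committed = arm
```

After roughly 2·m0 exploratory pulls the policy commits forever, so the number of suboptimal pulls (and hence the regret) stays bounded as T grows, which is the qualitative phenomenon the paper studies.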
1302.1747
2950487498
In this work, we investigate the potential utility of parallelization for meeting real-time constraints and minimizing energy. We consider malleable Gang scheduling of implicit-deadline sporadic tasks upon multiprocessors. We first show the non-necessity of dynamic voltage/frequency scaling (DVFS) regarding optimality of our scheduling problem. We adapt the canonical schedule for DVFS multiprocessor platforms and propose a polynomial-time optimal processor frequency-selection algorithm. We evaluate the performance of our algorithm via simulations using parameters obtained from a hardware testbed implementation. Our algorithm achieves up to a 60-watt decrease in power consumption over the optimal non-parallel approach.
Very little research has addressed both real-time parallelization and power-consumption issues @cite_2 @cite_0 . Furthermore, some basic fundamental questions on the potential utility of parallelization for meeting real-time constraints and minimizing energy have not been addressed at all in the literature.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2117212672", "2028313909" ], "abstract": [ "This paper studies the important interaction between parallelization and energy consumption in a parallelizable application. Given the ratio of serial and parallel portion in an application and the number of processors, we first derive the optimal frequencies allocated to the serial and parallel regions in the application to minimize the total energy consumption, while the execution time is preserved (i.e., speedup = 1). We show that dynamic energy improvement due to parallelization has a function rising faster with the increasing number of processors than the speed improvement function given by the well-known Amdahl's Law. Furthermore, we determine the conditions under which one can obtain both energy and speed improvement, as well as the amount of improvement. The formulas we obtain capture the fundamental relationship between parallelization, speedup, and energy consumption and can be directly utilized in energy aware processor resource management. Our results form a basis for several interesting research directions in the area of power and energy aware parallel processing.", "While much work has addressed energy-efficient scheduling for sequential tasks where each task can run on only one processor at a time, little work has been done for parallel tasks where an individual task can be executed by multiple processors simultaneously. In this paper, we develop energy minimizing algorithms for parallel task systems with timing guarantees. For parallel tasks executed by a fixed number of processors, we first propose several heuristic algorithms based on level-packing for task scheduling, and then present a polynomial-time complexity energy minimizing algorithm which is optimal for any given level-packed task schedule. 
For parallel tasks that can run on a variable number of processors, we propose another polynomial-time complexity algorithm to determine the number of processors executing each task, task schedule and frequency assignment. To the best of our knowledge, this is the first work that addresses energy-efficient scheduling for parallel real-time tasks. Our simulation result shows that the proposed approach can significantly reduce the system energy consumption." ] }
1302.1493
156301160
Information-Centric Networks place content as the narrow waist of the network architecture. This allows to route based upon the content name, and not based upon the locations of the content consumer and producer. However, current Internet architecture does not support content routing at the network layer. We present ContentFlow, an Information-Centric network architecture which supports content routing by mapping the content name to an IP flow, and thus enables the use of OpenFlow switches to achieve content routing over a legacy IP architecture. ContentFlow is viewed as an evolutionary step between the current IP networking architecture, and a full fledged ICN architecture. It supports content management, content caching and content routing at the network layer, while using a legacy OpenFlow infrastructure and a modified controller. In particular, ContentFlow is transparent from the point of view of the client and the server, and can be inserted in between with no modification at either end. We have implemented ContentFlow and describe our implementation choices as well as the overall architecture specification. We evaluate the performance of ContentFlow in our testbed.
Current approaches for SDN, such as OpenFlow @cite_5 , focus on controlling switching elements, and adopt a definition of the forwarding plane that takes traffic flows as the unit to which a policy is applied.
{ "cite_N": [ "@cite_5" ], "mid": [ "2147118406" ], "abstract": [ "This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too" ] }
1302.1493
156301160
Information-Centric Networks place content as the narrow waist of the network architecture. This allows to route based upon the content name, and not based upon the locations of the content consumer and producer. However, current Internet architecture does not support content routing at the network layer. We present ContentFlow, an Information-Centric network architecture which supports content routing by mapping the content name to an IP flow, and thus enables the use of OpenFlow switches to achieve content routing over a legacy IP architecture. ContentFlow is viewed as an evolutionary step between the current IP networking architecture, and a full fledged ICN architecture. It supports content management, content caching and content routing at the network layer, while using a legacy OpenFlow infrastructure and a modified controller. In particular, ContentFlow is transparent from the point of view of the client and the server, and can be inserted in between with no modification at either end. We have implemented ContentFlow and describe our implementation choices as well as the overall architecture specification. We evaluate the performance of ContentFlow in our testbed.
Other proposals to extend the SDN framework include integration with application layer transport optimization @cite_1 . In an earlier paper @cite_16 , we discussed extending the definition of network elements to include caches. However, this still requires a mapping layer between content and flows, which we introduce here. @cite_13 proposed to use HTTP as the basis for an ICN, and our work finds inspiration in this idea, as our implementation in particular focuses on HTTP content. The extensions to HTTP from @cite_13 could be used in ContentFlow, but would require modifications of the end points.
{ "cite_N": [ "@cite_13", "@cite_16", "@cite_1" ], "mid": [ "2139833303", "2407760116", "" ], "abstract": [ "Over the past decade a variety of network architectures have been proposed to address IP's limitations in terms of flexible forwarding, security, and data distribution. Meanwhile, fueled by the explosive growth of video traffic and HTTP infrastructure (e.g., CDNs, web caches), HTTP has became the de-facto protocol for deploying new services and applications. Given these developments, we argue that these architectures should be evaluated not only with respect to IP, but also with respect to HTTP, and that HTTP could be a fertile ground (more so than IP) for deploying the newly proposed functionalities. In this paper, we take a step in this direction, and find that HTTP already provides many of the desired properties for new Internet architectures. HTTP is a content centric protocol, provides middlebox support in the form of reverse and forward proxies, and leverages DNS to decouple names from addresses. We then investigate HTTP's limitations, and propose an extension, called S-GET that provides support for low-latency applications, such as VoIP and chat.", "The current functionality supported by OpenFlowbased software defined networking (SDN) includes switching, routing, tunneling, and some basic fire walling while operating on traffic flows. However, the semantics of SDN do not allow for other operations on the traffic, nor does it allow operations at a higher granularity. In this work, we describe a method to expand the SDN framework to add other network primitives. In particular, we present a method to integrate different network elements (like cache, proxy etc). Here, we focus on storage and caching, but our method could be expanded to other functionality seamlessly. We also present a method to identify content so as to perform per-content policy, as opposed to per flow policy. 
We have implemented the proposed mechanisms to demonstrate its feasibility.", "" ] }
1302.0963
2949065033
We propose a novel boosting approach to multi-class classification problems, in which multiple classes are distinguished by a set of random projection matrices in essence. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multi-class classification. The result is a multi-class classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second method randomly projects the outputs of learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning and visual recognition data sets demonstrate that our proposed methods compare favorably to existing multi-class boosting algorithms in terms of both the convergence rate and classification accuracy.
In machine learning, random projections have been applied to both supervised learning and unsupervised clustering problems. Fern and Brodley show that random projections can be used to improve the clustering result for high-dimensional data @cite_8 . Bingham and Mannila compare random projections with several dimensionality reduction methods on text and image data and conclude that the random lower-dimensional subspace yields results comparable to other conventional dimensionality reduction techniques with significantly less computation time @cite_26 . Fradkin and Madigan explore random projections in a supervised learning context @cite_42 . They conclude that random projections offer a clear computational advantage over principal component analysis while providing a comparable degree of accuracy. Thus far, we are not aware of any existing work that applies random projections to multi-class boosting.
{ "cite_N": [ "@cite_26", "@cite_42", "@cite_8" ], "mid": [ "2089497633", "2090898720", "2169446650" ], "abstract": [ "Random projections have recently emerged as a powerful method for dimensionality reduction. Theoretical results indicate that the method preserves distances quite nicely; however, empirical results are sparse. We present experimental results on using random projection as a dimensionality reduction tool in a number of cases, where the high dimensionality of the data would otherwise lead to burden-some computations. Our application areas are the processing of both noisy and noiseless images, and information retrieval in text documents. We show that projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimensionality reduction methods such as principal component analysis: the similarity of data vectors is preserved well under random projection. However, using random projections is computationally significantly less expensive than using, e.g., principal component analysis. We also show experimentally that using a sparse random matrix gives additional computational savings in random projection.", "Dimensionality reduction via Random Projections has attracted considerable attention in recent years. The approach has interesting theoretical underpinnings and offers computational advantages. In this paper we report a number of experiments to evaluate Random Projections in the context of inductive supervised learning. In particular, we compare Random Projections and PCA on a number of different datasets and using different machine learning methods. While we find that the random projection approach predictively underperforms PCA, its computational advantages may make it attractive for certain applications.", "We investigate how random projection can best be used for clustering high dimensional data. Random projection has been shown to have promising theoretical properties. 
In practice, however, we find that it results in highly unstable clustering performance. Our solution is to use random projection in a cluster ensemble approach. Empirical results show that the proposed approach achieves better and more robust clustering performance compared to not only single runs of random projection clustering but also clustering with PCA, a traditional data reduction method for high dimensional data. To gain insights into the performance improvement obtained by our ensemble method, we analyze and identify the influence of the quality and the diversity of the individual clustering solutions on the final ensemble performance." ] }
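The random-projection methods discussed above all rest on one primitive: multiply the data by a fixed random matrix whose entries are scaled so that pairwise distances are approximately preserved (the Johnson-Lindenstrauss property). A minimal dependency-free sketch of this primitive (the Gaussian entries and the specific dimensions here are illustrative; the cited works also study sparse random matrices):

```python
import math
import random

def make_projection(d, k, seed=0):
    """A k x d Gaussian random matrix, scaled by 1/sqrt(k) so that
    squared Euclidean distances are preserved in expectation."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(d)]
            for _ in range(k)]

def project(R, x):
    """Map a d-dimensional point x to the k-dimensional sketch R @ x."""
    return [sum(r_ij * x_j for r_ij, x_j in zip(row, x)) for row in R]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Distances between random 500-d points survive projection to 100-d
# up to a small relative distortion.
rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(500)]
y = [rng.gauss(0.0, 1.0) for _ in range(500)]
R = make_projection(500, 100)
ratio = dist(project(R, x), project(R, y)) / dist(x, y)
```

With k = 100 the typical relative distortion is on the order of 1/sqrt(k), which is consistent with the cited findings that random projections give accuracy close to PCA at a fraction of the computational cost.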
1302.0963
2949065033
We propose a novel boosting approach to multi-class classification problems, in which multiple classes are distinguished by a set of random projection matrices in essence. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multi-class classification. The result is a multi-class classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second method randomly projects the outputs of learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning and visual recognition data sets demonstrate that our proposed methods compare favorably to existing multi-class boosting algorithms in terms of both the convergence rate and classification accuracy.
Boosting is a supervised learning algorithm which has attracted significant research attention over the past decade due to its effectiveness and efficiency. The first practical boosting algorithm, AdaBoost, was introduced for binary classification problems @cite_0 . Since then, much subsequent work has focused on binary classification problems. Recently, however, several multi-class boosting algorithms have been proposed. Many of these algorithms convert multi-class problems into a set of binary classification problems. Here we loosely divide existing work on multi-class boosting into four categories.
{ "cite_N": [ "@cite_0" ], "mid": [ "1988790447" ], "abstract": [ "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line." ] }
1302.0963
2949065033
We propose a novel boosting approach to multi-class classification problems, in which multiple classes are distinguished by a set of random projection matrices in essence. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multi-class classification. The result is a multi-class classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second method randomly projects the outputs of learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning and visual recognition data sets demonstrate that our proposed methods compare favorably to existing multi-class boosting algorithms in terms of both the convergence rate and classification accuracy.
All-versus-all. In all-versus-all classification, the algorithm compares each class to all other classes: a binary classifier is built to discriminate between each pair of classes while discarding the rest of the classes. The algorithm thus builds @math binary classifiers. During evaluation, the class with the maximum number of votes wins. Allwein et al. conclude that all-versus-all often has better generalization performance than the one-versus-all algorithm @cite_7 . The drawback of this algorithm is that the complexity grows quadratically with the number of classes, so it does not scale in the number of classes.
{ "cite_N": [ "@cite_7" ], "mid": [ "2101276256" ], "abstract": [ "We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class is compared against all others, or in which all pairs of classes are compared to each other, or in which output codes with error-correcting properties are used. We propose a general method for combining the classifiers generated on the binary problems, and we prove a general empirical multiclass loss bound given the empirical loss of the individual binary learning algorithms. The scheme and the corresponding bounds apply to many popular classification learning algorithms including support-vector machines, AdaBoost, regression, logistic regression and decision-tree algorithms. We also give a multiclass generalization error analysis for general output codes with AdaBoost as the binary learner. Experimental results with SVM and AdaBoost show that our scheme provides a viable alternative to the most commonly used multiclass algorithms." ] }
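The all-versus-all scheme described above is easy to state in code. A toy sketch with a nearest-class-mean binary learner on 1-D data standing in for the real weak learner (the data and the `train_binary` helper are illustrative, not from the cited work):

```python
from itertools import combinations

def train_binary(xs_a, xs_b):
    """Toy binary learner: predict 0 (class A) or 1 (class B) by the
    nearer class mean; 1-D data keeps the sketch short."""
    mu_a = sum(xs_a) / len(xs_a)
    mu_b = sum(xs_b) / len(xs_b)
    return lambda x: 0 if abs(x - mu_a) <= abs(x - mu_b) else 1

def train_all_vs_all(data):
    """data: dict mapping class label -> training samples.
    Builds one binary classifier per pair of classes: k(k-1)/2 in total."""
    return {(a, b): train_binary(data[a], data[b])
            for a, b in combinations(sorted(data), 2)}

def predict(classifiers, x):
    """Every pairwise classifier casts a vote; the most-voted class wins."""
    votes = {}
    for (a, b), clf in classifiers.items():
        winner = a if clf(x) == 0 else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

data = {"a": [0.0, 1.0], "b": [5.0, 6.0], "c": [10.0, 11.0]}
clfs = train_all_vs_all(data)   # 3 classes -> 3 pairwise classifiers
```

The quadratic number of pairwise models built by `train_all_vs_all` is exactly the scalability drawback noted above.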
1302.0963
2949065033
We propose a novel boosting approach to multi-class classification problems, in which multiple classes are distinguished by a set of random projection matrices in essence. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multi-class classification. The result is a multi-class classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second method randomly projects the outputs of learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning and visual recognition data sets demonstrate that our proposed methods compare favorably to existing multi-class boosting algorithms in terms of both the convergence rate and classification accuracy.
Error-correcting output coding (ECOC). The above two algorithms are special cases of ECOC. The idea of ECOC is to associate each class with a codeword which is a row of a coding matrix @math and @math . The algorithm trains @math binary classifiers to distinguish between @math different classes. During evaluation, the output of the @math binary classifiers (a @math -bit string) is compared to each codeword, and the sample is assigned to the class whose codeword has the minimum Hamming distance. Dietterich and Bakiri report improved generalization ability of this method over the above two techniques @cite_44 . In boosting, each binary classifier is viewed as a weak learner and is learned one at a time in sequence. Well-known ECOC-based boosting algorithms include AdaBoost.MO, AdaBoost.OC and AdaBoost.ECC @cite_12 @cite_10 . Although this technique provides a simple solution to multi-class classification, it does not fully exploit the pairwise correlations between classes.
{ "cite_N": [ "@cite_44", "@cite_10", "@cite_12" ], "mid": [ "1676820704", "", "1578772208" ], "abstract": [ "Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k \"classes\"). The definition is acquired by studying collections of training examples of the form (xi, f(xi)). Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that--like the other methods--the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.", "", "This paper describes a new technique for solv- ing multiclass learning problems by combining Freund and Schapire's boosting algorithm with the main ideas of Diet- terich and Bakiri's method of error-correcting output codes (ECOC). Boosting is a general method of improving the ac- curacy of a given base or \"weak\" learning algorithm. 
ECOC is a robust method of solving multiclass learning problems by reducing to a sequence of two-class problems. We show that our new hybrid method has advantages of both: Like ECOC, our method only requires that the base learning al- gorithm work on binary-labeled data. Like boosting, we prove that the method comes with strong theoretical guar- antees on the training and generalization error of the final combined hypothesis assuming only that the base learning algorithm perform slightly better than random guessing. Although previous methods were known for boosting multi- class problems, the new method may be significantly faster and require less programming effort in creating the base learning algorithm. We also compare the new algorithm experimentally to other voting methods." ] }
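The ECOC decoding step described above reduces to a nearest-codeword search under Hamming distance. A minimal sketch (the 3-class, 5-bit codebook is illustrative; the cited works search for codes with large minimum distance and error-correcting properties):

```python
def hamming(u, v):
    """Number of positions in which two bit strings differ."""
    return sum(a != b for a, b in zip(u, v))

def ecoc_decode(codebook, bits):
    """codebook: class -> codeword; bits: the outputs of the binary
    classifiers. Assigns the class with the nearest codeword."""
    return min(codebook, key=lambda c: hamming(codebook[c], bits))

# The minimum pairwise distance of this code is 3, so any single
# binary-classifier error is corrected.
codebook = {
    "cat":  (0, 0, 0, 0, 0),
    "dog":  (0, 1, 1, 0, 1),
    "bird": (1, 0, 1, 1, 0),
}
```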
1302.0963
2949065033
We propose a novel boosting approach to multi-class classification problems, in which multiple classes are distinguished by a set of random projection matrices in essence. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multi-class classification. The result is a multi-class classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second method randomly projects the outputs of learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning and visual recognition data sets demonstrate that our proposed methods compare favorably to existing multi-class boosting algorithms in terms of both the convergence rate and classification accuracy.
Learning a matrix of coefficients in a single optimization problem. In this approach, one learns a linear ensemble for each class. Given a test example, the label is predicted by @math @math Each row of the matrix @math corresponds to one of the classes. The sample is assigned to the class whose row has the largest value of the weighted combination. To learn the matrix @math , one can formulate the problem in the framework of multi-class maximum-margin learning. Shen and Hao show that large-margin multi-class boosting can be implemented using column generation @cite_11 .
{ "cite_N": [ "@cite_11" ], "mid": [ "2011143997" ], "abstract": [ "Boosting-based object detection has received significant attention recently. In this paper, we propose totally corrective asymmetric boosting algorithms for real-time object detection. Our algorithms differ from Viola and Jones' detection framework in two ways. Firstly, our boosting algorithms explicitly optimize asymmetric loss of objectives, while AdaBoost used by Viola and Jones optimizes a symmetric loss. Secondly, by carefully deriving the Lagrange duals of the optimization problems, we design more efficient boosting in that the coefficients of the selected weak classifiers are updated in a totally corrective fashion, in contrast to the stagewise optimization commonly used by most boosting algorithms. Column generation is employed to solve the proposed optimization problems. Unlike conventional boosting, the proposed boosting algorithms are able to de-select those irrelevant weak classifiers in the ensemble while training a classification cascade. This results in improved detection performance as well as fewer weak classifiers in the learned strong classifier. Compared with AsymBoost of Viola and Jones, our proposed asymmetric boosting is nonheuristic and the training procedure is much simpler. Experiments on face and pedestrian detection demonstrate that our methods have superior detection performance than some of the state-of-the-art object detectors." ] }
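The prediction rule in this single-matrix formulation is just an argmax over the rows of the coefficient matrix applied to the weak-learner responses. A sketch (the matrix and response values are illustrative):

```python
def predict_class(W, h):
    """W: one row of m coefficients per class; h: the m weak-classifier
    responses for a test example. Returns the index of the class whose
    row gives the largest weighted combination."""
    scores = [sum(w_j * h_j for w_j, h_j in zip(row, h)) for row in W]
    return max(range(len(scores)), key=scores.__getitem__)

# Three classes, three weak learners.
W = [[1.0, 0.0, 0.5],
     [0.0, 2.0, 0.0],
     [0.3, 0.0, 1.0]]
h = [0.2, 0.5, 0.1]
```

Learning all of `W` jointly (e.g. by column generation in the cited work) is what lets this formulation exploit correlations between classes that the per-pair and per-code reductions ignore.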
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior work in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a sub-space of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
In addition to neural networks with online learning capability, offline methods have also been used to design neural associative memories. For instance, in @cite_26 the authors assume the complete set of patterns is given in advance and calculate the weight matrix offline using the pseudo-inverse rule @cite_22 . In return, this approach helps them improve the capacity of a Hopfield network to @math , under a vanishing pattern error probability condition, while still being able to correct errors in the recall phase. Although this is a significant improvement over the @math scaling of the pattern retrieval capacity in @cite_27 , it comes at the price of much higher computational complexity and the lack of gradual learning ability.
{ "cite_N": [ "@cite_27", "@cite_26", "@cite_22" ], "mid": [ "", "1964142240", "2133671888" ], "abstract": [ "", "A model of associate memory incorporating global linearity and pointwise nonlinearities in a state space of n-dimensional binary vectors is considered. Attention is focused on the ability to store a prescribed set of state vectors as attractors within the model. Within the framework of such associative nets, a specific strategy for information storage that utilizes the spectrum of a linear operator is considered in some detail. Comparisons are made between this spectral strategy and a prior scheme that utilizes the sum of Kronecker outer products of the prescribed set of state vectors, which are to function nominally as memories. The storage capacity of the spectral strategy is linear in n (the dimension of the state space under consideration), whereas an asymptotic result of n 4 log n holds for the storage capacity of the outer product scheme. Computer-simulated results show that the spectral strategy stores information more efficiently. The preprocessing costs incurred in the two algorithms are estimated, and recursive strategies are developed for their computation. >", "From the Publisher: This book is a comprehensive introduction to the neural network models currently under intensive study for computational applications. It is a detailed, logically-developed treatment that covers the theory and uses of collective computational networks, including associative memory, feed forward networks, and unsupervised learning. It also provides coverage of neural network applications in a variety of problems of both theoretical and practical interest." ] }
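The pseudo-inverse (projection) rule mentioned above computes, offline, the matrix that projects any state onto the span of the stored patterns, so every stored pattern becomes a fixed point of the linear update. A dependency-free sketch for two stored patterns (the closed-form 2x2 inverse keeps it short; the general rule inverts the full pattern correlation matrix):

```python
def pseudo_inverse_weights(x1, x2):
    """W = X^T (X X^T)^{-1} X for the 2 x n pattern matrix X with rows
    x1, x2. Then x W = x for any x in the span of the patterns."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    a, b = dot(x1, x1), dot(x1, x2)
    c, d = dot(x2, x1), dot(x2, x2)
    det = a * d - b * c
    # closed-form inverse of the 2x2 correlation matrix [[a, b], [c, d]]
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    n = len(x1)
    return [[x1[i] * (ia * x1[j] + ib * x2[j])
             + x2[i] * (ic * x1[j] + id_ * x2[j])
             for j in range(n)]
            for i in range(n)]

def recall(W, x):
    """One linear update step: project the (possibly noisy) state x."""
    n = len(x)
    return [sum(x[i] * W[i][j] for i in range(n)) for j in range(n)]

# Two bipolar patterns are stored, and each is a fixed point of recall.
p1 = [1.0, -1.0, 1.0, -1.0]
p2 = [1.0, 1.0, -1.0, -1.0]
W = pseudo_inverse_weights(p1, p2)
```

The offline matrix inversion in this rule is the source of the higher computational cost, and of the lack of gradual learning, noted above.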
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior work in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a sub-space of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
While the connectivity graph of a Hopfield network is a complete graph, Komlos and Paturi @cite_21 extended the work of McEliece to sparse neural graphs. Their results are of particular interest, as physiological data also favors sparsely interconnected neural networks. They consider a network in which each neuron is connected to @math other neurons, i.e., a @math -regular network. Assuming that the network graph satisfies certain connectivity measures, they prove that it is possible to store a linear number of patterns (in terms of @math ) with vanishing bit error probability, or @math random patterns with vanishing pattern error probability. Furthermore, they show that in spite of the capacity reduction, the error correction capability remains the same, as the network can still tolerate a number of errors that is linear in @math .
{ "cite_N": [ "@cite_21" ], "mid": [ "2158780150" ], "abstract": [ "The authors investigate how good connectivity properties translate into good error-correcting behavior in sparse networks of threshold elements. They determine how the eigenvalues of the interconnection graph (which in turn reflect connectivity properties) relate to the quantities, number of items stored, amount of error-correction, radius of attraction, and rate of convergence in an associative memory model consisting of a sparse network of threshold elements or neurons. >" ] }
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
It is also known that the capacity of neural associative memories can be enhanced if the patterns are of a sparse nature, in the sense that at any time instant many of the neurons are silent @cite_22 . However, even these schemes fail when required to correct a fair amount of erroneous bits, as their information retrieval performance is no better than that of normal networks.
{ "cite_N": [ "@cite_22" ], "mid": [ "2133671888" ], "abstract": [ "From the Publisher: This book is a comprehensive introduction to the neural network models currently under intensive study for computational applications. It is a detailed, logically-developed treatment that covers the theory and uses of collective computational networks, including associative memory, feed forward networks, and unsupervised learning. It also provides coverage of neural network applications in a variety of problems of both theoretical and practical interest." ] }
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
Extension of associative memories to non-binary neural models has also been explored in the past. Hopfield addressed the case of continuous neurons and showed that, similar to the binary case, neurons with states between @math and @math can memorize a set of random patterns, albeit with less capacity @cite_18 . Prados and Kak considered a digital version of non-binary neural networks in which neural states could assume integer (positive and negative) values @cite_32 . They show that the storage capacity of such networks is in general larger than that of their binary peers. However, the capacity is still less than @math , in the sense that the proposed neural network cannot have more than @math patterns as stable states, let alone retrieve the correct pattern from corrupted input queries.
{ "cite_N": [ "@cite_18", "@cite_32" ], "mid": [ "2112246162", "196929266" ], "abstract": [ "Abstract A model for a large network of \"neurons\" with a graded response (or sigmoid input-output relation) is studied. This deterministic system has collective properties in very close correspondence with the earlier stochastic model based on McCulloch - Pitts neurons. The content- addressable memory and other emergent collective properties of the original model also are present in the graded response model. The idea that such collective properties are used in biological systems is given added credence by the continued presence of such properties for more nearly biological \"neurons.\" Collective analog electrical circuits of the kind described will certainly function. The collective states of the two models have a simple correspondence. The original model will continue to be useful for simulations, because its connection to graded response systems is established. Equations that include the effect of action potentials in the graded response system are also developed.", "On capacity considerations it is clear that it is advantageous to use non-binary neural networks." ] }
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
Later, Gripon and Berrou proposed a different approach based on neural cliques, which increased the pattern retrieval capacity to @math @cite_9 . Their method is based on dividing a neural network of size @math into @math clusters of size @math each. The messages are then chosen such that only one neuron in each cluster is active for a given message. Therefore, one can think of a message as a random vector of length @math , where the @math part specifies the index of the active neuron in a given cluster. The authors also provide a learning algorithm, similar to that of Hopfield, to learn the pairwise correlations within the patterns. Using this technique, and exploiting the fact that the resulting patterns are very sparse, they could boost the capacity to @math while maintaining the computational simplicity of Hopfield networks.
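The clustered clique model described above can be illustrated with a toy sketch. This is our own minimal illustration (not the cited authors' implementation, and all function names and message values here are made up): a network of n = c * l binary neurons is split into c clusters of l neurons each; a message activates exactly one neuron per cluster, storing a message adds the clique over its active neurons, and recall fills erased symbols by winner-take-all on incoming connections from the known active neurons.

```python
import numpy as np

def store(messages, c, l):
    """Store messages as cliques over a binary connection matrix.

    Neuron (cluster i, symbol s) has flat index i * l + s; each stored
    message sets all pairwise connections among its c active neurons.
    """
    n = c * l
    W = np.zeros((n, n), dtype=bool)
    for msg in messages:
        idx = [i * l + s for i, s in enumerate(msg)]  # active neurons
        for a in idx:
            for b in idx:
                if a != b:
                    W[a, b] = True
    return W

def recall(W, partial, l):
    """Recover erased symbols (None) by winner-take-all: in each unknown
    cluster, pick the neuron with the most connections coming from the
    known active neurons (a single decoding pass)."""
    known = [i * l + s for i, s in enumerate(partial) if s is not None]
    out = list(partial)
    for i, s in enumerate(partial):
        if s is None:
            scores = [W[known, i * l + t].sum() for t in range(l)]
            out[i] = int(np.argmax(scores))
    return out

msgs = [(0, 2, 1, 3), (1, 0, 2, 2), (3, 3, 0, 1)]
W = store(msgs, c=4, l=4)
print(recall(W, (0, 2, None, 3), l=4))  # -> [0, 2, 1, 3]
```

With few stored messages the cliques rarely overlap, which is the sparsity the capacity argument exploits; a full implementation would iterate the winner-take-all pass until convergence.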
{ "cite_N": [ "@cite_9" ], "mid": [ "2121160181" ], "abstract": [ "Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages that are much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint in the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory." ] }
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
In contrast to the pairwise correlations of the Hopfield model, @cite_24 deployed higher-order neural models: models in which the state of a neuron depends not only on the state of its neighbors, but also on the correlations among them. Under this model, they showed that the storage capacity of a higher-order Hopfield network can be improved to @math , where @math is the degree of the correlations considered. The main drawback of this model is the huge computational complexity required in the learning phase, as one has to keep track of @math neural links and their weights during the learning period.
{ "cite_N": [ "@cite_24" ], "mid": [ "2040870209" ], "abstract": [ "Quantitative expressions of long-term memory storage capacities of complex neural network are derived. The networks are made of neurons connected by synapses of any order, of the axono-axonal type considered by for example. The effect of link deletion possibly related to aging, is also considered. The central result of this study is that, within the framework of Hebb's laws, the number of stored bits is proportional to the number of synapses. The proportionality factor however, decreases when the order of involved synaptic contact increases. This tends to favor neural architectures with low-order synaptic connectivities. It is finally shown that the memory storage capacities can be optimized by a partition of the network into neuron clusters with size comparable with that observed for cortical microcolumns." ] }
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
Recently, the present authors introduced a novel model, inspired by modern coding techniques, in which a neural bipartite graph is used to memorize patterns that belong to a subspace @cite_16 . The proposed model can also be thought of as a way to capture higher-order correlations in the given patterns while keeping the computational complexity minimal (since instead of @math weights one needs to keep track of only @math of them). Under the assumptions that the bipartite graph is known, sparse, and an expander, the proposed algorithm increased the pattern retrieval capacity to @math , for some @math , closing the gap between the pattern retrieval capacities achieved in neural networks and those of coding techniques. For completeness, this approach is presented in the appendix (along with the detailed proofs). The main drawbacks of the proposed approach were the lack of a learning algorithm and the expansion assumption on the neural graph.
{ "cite_N": [ "@cite_16" ], "mid": [ "2150610158" ], "abstract": [ "We consider the problem of neural association for a network of non-binary neurons. Here, the task is to recall a previously memorized pattern from its noisy version using a network of neurons whose states assume values from a finite number of non-negative integer levels. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we consider storing patterns from a suitably chosen set of patterns, that are obtained by enforcing a set of simple constraints on the coordinates (such as those enforced in graph based codes). Such patterns may be generated from purely random information symbols by simple neural operations. Two simple neural update algorithms are presented, and it is shown that our proposed mechanisms result in a pattern retrieval capacity that is exponential in terms of the network size. Furthermore, using analytical results and simulations, we show that the suggested methods can tolerate a fair amount of errors in the input." ] }
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
In this paper, we focus on extending the results described in @cite_16 in several directions. First, we suggest an iterative learning algorithm to find the neural connectivity matrix from the patterns in the training set. Second, we provide an analysis of the proposed error-correcting algorithm in the recall phase and investigate its performance as a function of the input noise and the network model. Finally, we discuss some variants of the error-correcting method which achieve better performance in practice.
{ "cite_N": [ "@cite_16" ], "mid": [ "2150610158" ], "abstract": [ "We consider the problem of neural association for a network of non-binary neurons. Here, the task is to recall a previously memorized pattern from its noisy version using a network of neurons whose states assume values from a finite number of non-negative integer levels. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we consider storing patterns from a suitably chosen set of patterns, that are obtained by enforcing a set of simple constraints on the coordinates (such as those enforced in graph based codes). Such patterns may be generated from purely random information symbols by simple neural operations. Two simple neural update algorithms are presented, and it is shown that our proposed mechanisms result in a pattern retrieval capacity that is exponential in terms of the network size. Furthermore, using analytical results and simulations, we show that the suggested methods can tolerate a fair amount of errors in the input." ] }
1302.1156
1598886137
We consider the problem of neural association for a network of non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we concentrate on exploiting redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e. comprise a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in terms of the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e. the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to both increase the pattern retrieval capacity and error correction capabilities. An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations, we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
Another important point to note is that learning linear constraints with a neural network is hardly a new topic, as one can learn a matrix orthogonal to a set of patterns in the training set (i.e., @math ) using simple neural learning rules (we refer the interested reader to @cite_4 and @cite_6 ). However, to the best of our knowledge, finding such a matrix subject to sparsity constraints has not been investigated before. This problem can also be regarded as an instance of compressed sensing @cite_30 , in which the measurement matrix is given by the big pattern matrix @math and the set of measurements is the set of constraints we seek to satisfy, denoted by the tall vector @math , which for simplicity we assume to be all-zero. Thus, we are interested in finding a sparse vector @math such that @math . Nevertheless, many decoders proposed in this area are quite complicated and cannot be implemented by a neural network using simple neuron operations. Some exceptions are @cite_23 and @cite_20 , which are closely related to the learning algorithm proposed in this paper.
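The problem stated above, finding a sparse vector in the null space of the pattern matrix, can be sketched numerically. The snippet below is a generic projected-gradient heuristic with soft-thresholding (in the spirit of the cited iterative-thresholding decoders, not the paper's neural learning rule); the function name, step size, and test data are all illustrative assumptions.

```python
import numpy as np

def sparse_null_vector(X, iters=2000, shrink=1e-4, seed=0):
    """Find a unit vector w with X @ w ~ 0, nudged toward sparsity.

    Gradient descent on ||X w||^2 / 2, followed by soft-thresholding
    (sparsification) and re-normalization to avoid the trivial w = 0.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    A = X.T @ X
    lr = 0.9 / np.linalg.eigvalsh(A).max()  # stable step size
    for _ in range(iters):
        w -= lr * (A @ w)                   # gradient of ||Xw||^2 / 2
        w = np.sign(w) * np.maximum(np.abs(w) - shrink, 0.0)  # sparsify
        w /= max(np.linalg.norm(w), 1e-12)  # keep w away from zero
    return w

# Training patterns lying in a 3-dimensional subspace of R^6, so a
# non-trivial null-space (constraint) vector is guaranteed to exist.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 6))
w = sparse_null_vector(X)
print(np.linalg.norm(X @ w))  # small residual: w lies near the null space
```

Because the gradient step contracts every component of w outside the null space while the normalization preserves the null-space component, the iteration converges to an (approximately sparse) constraint vector satisfied by all the patterns.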
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_6", "@cite_23", "@cite_20" ], "mid": [ "2129638195", "", "1718034124", "2082029531", "2118297240" ], "abstract": [ "Suppose we are given a vector f in a class FsubeRopfN , e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision epsi in the Euclidean (lscr2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n)lesRmiddotn-1p , where R>0 and p>0. Suppose that we take measurements yk=langf# ,Xkrang,k=1,...,K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0<p<1 and with overwhelming probability, our reconstruction ft, defined as the solution to the constraints yk=langf# ,Xkrang with minimal lscr1 norm, obeys parf-f#parlscr2lesCp middotRmiddot(K logN)-r, r=1 p-1 2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed", "", "Vector subspaces have been suggested for representations of structured information. In the theory of associative memory and associative information processing, the projection principle and subspaces are used in explaining the optimality of associative mappings and novelty filters. These formalisms seem to be very pertinent to neural networks, too. Based on these operations, the subspace method has been developed for a practical pattern-recognition algorithm. The method is reviewed, and some recent results on image analysis are given.", "Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.", "The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications." ] }
1302.0413
2953051570
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence in an optimal way. This paper explores the usage of learning to rank methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure of the citation patterns for the community of experts, and from profile information about the experts. Experiments made over a dataset of academic publications, for the area of Computer Science, attest to the adequacy of the proposed approaches.
Serdyukov and Macdonald have surveyed the most important concepts and representative previous works in the expert finding task @cite_5 @cite_16 . Two of the most popular and well-performing types of methods are the profile-centric and the document-centric approaches @cite_14 @cite_19 . Profile-centric approaches build an expert profile as a pseudo-document, by aggregating text segments relevant to the expert @cite_21 . These profiles of experts are later indexed and used to support the search for experts on a topic. Document-centric approaches are typically based on traditional document retrieval techniques, using the documents directly. In a probabilistic approach to the problem, the first step is to estimate the conditional probability @math of the query topic @math given a document @math . Assuming that the terms co-occurring with an expert can be used to describe them, @math can be used to weight the co-occurrence evidence of experts with @math in documents. The conditional probability @math of an expert candidate @math given a query @math can then be estimated by aggregating all the evidence in all the documents where @math and @math co-occur. Experimental results show that document-centric approaches usually outperform profile-centric approaches @cite_19 .
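The document-centric aggregation described above can be made concrete with a tiny worked example: score each candidate e by summing, over documents, the query relevance of the document times the candidate-document association strength. All names and numbers below are made up for illustration; real systems estimate these quantities with language models.

```python
# score(e, q) = sum over documents d of p(q | d) * a(e, d),
# where a(e, d) is the candidate-document association strength.

p_q_given_d = {"d1": 0.6, "d2": 0.1, "d3": 0.3}  # relevance of each doc to q
assoc = {                                         # co-occurrence evidence a(e, d)
    "alice": {"d1": 1.0, "d3": 0.5},
    "bob":   {"d2": 1.0, "d3": 0.5},
}

# Aggregate the evidence across all documents where e and q co-occur.
scores = {
    e: sum(p_q_given_d[d] * a for d, a in docs.items())
    for e, docs in assoc.items()
}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking, scores)  # alice (0.75) ranks above bob (0.25)
```

The profile-centric alternative would instead concatenate each expert's text segments into one pseudo-document and rank those pseudo-documents directly against the query.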
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_19", "@cite_5", "@cite_16" ], "mid": [ "11620817", "", "38978401", "1566861348", "2007810306" ], "abstract": [ "The goal of the enterprise track is to conduct experiments with enterprise data — intranet pages, email archives, document repositories — that reflect the experiences of users in real organisations, such that for example, an email ranking technique that is effective here would be a good choice for deployment in a real multi-user email search application. This involves both understanding user needs in enterprise search and development of appropriate IR techniques.", "", "The goal of the enterprise track is to conduct experiments with enterprise data — intranet pages, email archives, document repositories — that reflect the experiences of users in real organizations, such that for example, an email ranking technique that is effective here would be a good choice for deployment in a real multi-user email search application. This involves both understanding user needs in enterprise search and development of appropriate IR techniques. The enterprise track began in TREC 2005 as the successor to the web track, and this is reflected in the tasks and measures. While the track takes much of its inspiration from the web track, the foci are on search at the enterprise scale, incorporating non-web data and discovering relationships between entities in the organization. As a result, we have created the first test collections for multi-user email search and expert finding. This year the track has continued using the W3C collection, a crawl of the publicly available web of the World Wide Web Consortium performed in June 2004. This collection contains not only web pages but numerous mailing lists, technical documents and other kinds of data that represent the day-to-day operation of the W3C. Details of the collection may be found in the 2005 track overview (, 2005). Additionally, this year we began creating a repository of information derived from the collection by participants. This data is hosted alongside the W3C collection at NIST. There were two tasks this year, email discussion search and expert search, and both represent refinements of the tasks initially done in 2005. NIST developed topics and relevance judgments for the email discussion search task this year. For expert search, rather than relying on found data as last year, the track participants created the topics and relevance judgments. Twenty-five groups took part across the two tasks.", "The automatic search for knowledgeable people in the scope of an organization is a key function which makes modern Enterprise search systems commercially successful and socially demanded. A number of effective approaches to expert finding were recently proposed in academic publications. Although, most of them use reasonably defined measures of personal expertise, they often limit themselves to rather unrealistic and sometimes oversimplified principles. In this thesis, we explore several ways to go beyond state-of-the-art assumptions used in research on expert finding and propose several novel solutions for this and related tasks. First, we describe measures of expertise that do not assume independent occurrence of terms and persons in a document what makes them perform better than the measures based on independence of all entities in a document. One of these measures makes persons central to the process of terms generation in a document. Another one assumes that the position of the person’s mention in a document with respect to the positions of query terms indicates the relation of the person to the document’s relevant content. Second, we find the ways to use not only direct expertise evidence for a person concentrated within the document space of the person’s current employer and only within those organizational documents that mention the person. We successfully utilize the predicting potential of additional indirect expertise evidence publicly available on the Web and in the organizational documents implicitly related to a person. Finally, besides the expert finding methods we proposed, we also demonstrate solutions for the tasks from related domains. In one case, we use several algorithms of multi-step relevance propagation to search for typed entities in Wikipedia. In another case, we suggest generic methods for placing photos uploaded to Flickr on the World map using language models of locations built entirely on the annotations provided by users with a few task specific extensions.", "In an expert search task, the users’ need is to identify people who have relevant expertise to a topic of interest. An expert search system predicts and ranks the expertise of a set of candidate persons with respect to the users’ query. In this paper, we propose a novel approach for predicting and ranking candidate expertise with respect to a query, called the Voting Model for Expert Search. In the Voting Model, we see the problem of ranking experts as a voting problem. We model the voting problem using 12 various voting techniques, which are inspired from the data fusion field. We investigate the effectiveness of the Voting Model and the associated voting techniques across a range of document weighting models, in the context of the TREC 2005 and TREC 2006 Enterprise tracks. The evaluation results show that the voting paradigm is very effective, without using any query or collection-specific heuristics. Moreover, we show that improving the quality of the underlying document representation can significantly improve the retrieval performance of the voting techniques on an expert search task. In particular, we demonstrate that applying field-based weighting models improves the ranking of candidates. Finally, we demonstrate that the relative performance of the voting techniques for the proposed approach is stable on a given task regardless of the used weighting models, suggesting that some of the proposed voting techniques will always perform better than other voting techniques." ] }
1302.0413
2953051570
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence in an optimal way. This paper explores the usage of learning to rank methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure with the citation patterns for the community of experts, and from profile information about the experts. Experiments made over a dataset of academic publications, for the area of Computer Science, attest for the adequacy of the proposed approaches.
In the Scientometrics community, the evaluation of the scientific output of a scientist has also attracted significant interest, due to the importance of obtaining unbiased and fair criteria. Most existing methods are based on metrics such as the total number of authored papers or the total number of citations. A comprehensive description of many of these metrics can be found in @cite_4 @cite_26 . Simple and elegant indexes, such as the Hirsch index, capture how broad the research work of a scientist is, accounting for both productivity and impact. Graph centrality metrics inspired by PageRank, computed over citation or co-authorship graphs, have also been used extensively @cite_0 . In the context of academic expert search systems, these metrics can easily be used as query-independent estimators of expertise, in much the same way as PageRank is used in Web information retrieval systems.
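The Hirsch index mentioned above can be sketched in a few lines; the citation counts used here are made-up illustration, not data from any of the cited papers:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Being a single scalar over the sorted citation profile, the index is trivially query-independent, which is what makes it usable as a static expertise prior.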
{ "cite_N": [ "@cite_0", "@cite_26", "@cite_4" ], "mid": [ "2101599977", "2018392915", "2147032993" ], "abstract": [ "The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded $24 Million (US) for the Digital Library Initiative (DLI). In this paper we examine the state of the DL domain after a decade of activity by applying social network analysis to the co-authorship network of the past ACM, IEEE, and joint ACM IEEE digital library conferences. We base our analysis on a common binary undirectional network model to represent the co-authorship network, and from it we extract several established network measures. We also introduce a weighted directional network model to represent the co-authorship network, for which we define AuthorRank as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL).", "Citation analysis helps in evaluating the impact of scientific collections (journals and conferences), publications and scholar authors. In this paper we examine known algorithms that are currently used for Link Analysis Ranking, and present their weaknesses over specific examples. We also introduce new alternative methods specifically designed for citation graphs. We use the SCEAS system as a base platform to introduce these new methods and perform a generalized comparison of all methods. We also introduce an aggregate function for the generation of author ranking based on publication ranking. 
Finally, we try to evaluate the rank results based on the prizes of 'VLDB 10 Year Award', 'SIGMOD Test of Time Award' and 'SIGMOD E.F.Codd Innovations Award'.", "Citation analysis is performed to evaluate the impact of scientific collections (journals and conferences), publications and scholar authors. In this paper we investigate alternative methods to provide a generalized approach to rank scientific publications. We use the SCEAS system [12] as a base platform to introduce new methods that can be used for ranking scientific publications. Moreover, we tune our approach along the reasoning of the prizes 'VLDB 10 Year Award' and 'SIGMOD Test of Time Award', which have been awarded in the course of the top two database conferences. Our approach can be used to objectively suggest the publications and the respective authors the are more likely to be awarded in the near future at these conferences." ] }
1302.0413
2953051570
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence in an optimal way. This paper explores the usage of learning to rank methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure with the citation patterns for the community of experts, and from profile information about the experts. Experiments made over a dataset of academic publications, for the area of Computer Science, attest for the adequacy of the proposed approaches.
For combining the multiple sources of expertise, we propose to leverage previous work on learning to rank for information retrieval (L2R4IR). Tie-Yan Liu presented a good survey of the subject @cite_2 , categorizing the previously proposed algorithms into three groups according to their input representation and optimization objectives:
{ "cite_N": [ "@cite_2" ], "mid": [ "2149427297" ], "abstract": [ "Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance. Many IR problems are by nature ranking problems, and many IR technologies can be potentially enhanced by using learning-to-rank techniques. The objective of this tutorial is to give an introduction to this research direction. Specifically, the existing learning-to-rank algorithms are reviewed and categorized into three approaches: the pointwise, pairwise, and listwise approaches. The advantages and disadvantages with each approach are analyzed, and the relationships between the loss functions used in these approaches and IR evaluation measures are discussed. Then the empirical evaluations on typical learning-to-rank methods are shown, with the LETOR collection as a benchmark dataset, which seems to suggest that the listwise approach be the most effective one among all the approaches. After that, a statistical ranking theory is introduced, which can describe different learning-to-rank algorithms, and be used to analyze their query-level generalization abilities. At the end of the tutorial, we provide a summary and discuss potential future work on learning to rank." ] }
1302.0413
2953051570
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence in an optimal way. This paper explores the usage of learning to rank methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure with the citation patterns for the community of experts, and from profile information about the experts. Experiments made over a dataset of academic publications, for the area of Computer Science, attest for the adequacy of the proposed approaches.
Pairwise approach - L2R4IR is cast as a binary classification problem over document pairs, since the relevance degree can be regarded as a binary value indicating which ordering is better for a given pair of documents. Given feature vectors for pairs of documents as input, the relevance degree of each document can be predicted with scoring functions that try to minimize the average number of misclassified document pairs. Several different pairwise methods have been proposed, including SVM @math @cite_6 .
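The pairwise principle can be illustrated with a toy subgradient-descent version of the pairwise hinge loss; this is a minimal sketch of the idea, not the cutting-plane SVM @math implementation of @cite_6 , and all data below is synthetic:

```python
import numpy as np

def pairwise_hinge_step(w, X, y, lr=0.1):
    """One subgradient step on the pairwise hinge loss: for every pair
    (i, j) with y[i] > y[j], penalize a margin w.x_i - w.x_j below 1."""
    grad = np.zeros_like(w)
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j] and w @ (X[i] - X[j]) < 1.0:
                grad -= (X[i] - X[j])
    return w - lr * grad

# Toy data: one feature that perfectly predicts graded relevance.
X = np.array([[3.0], [2.0], [1.0]])
y = np.array([2, 1, 0])
w = np.zeros(1)
for _ in range(50):
    w = pairwise_hinge_step(w, X, y)
scores = X @ w
assert scores[0] > scores[1] > scores[2]
```

The learned weight orders the documents correctly because every misordered (or low-margin) pair contributes a subgradient pushing the two scores apart.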
{ "cite_N": [ "@cite_6" ], "mid": [ "2035720976" ], "abstract": [ "Linear Support Vector Machines (SVMs) have become one of the most prominent machine learning techniques for high-dimensional sparse data commonly encountered in applications like text classification, word-sense disambiguation, and drug design. These applications involve a large number of examples n as well as a large number of features N, while each example has only s << N non-zero features. This paper presents a Cutting Plane Algorithm for training linear SVMs that provably has training time 0(s,n) for classification problems and o(sn log (n))for ordinal regression problems. The algorithm is based on an alternative, but equivalent formulation of the SVM optimization problem. Empirically, the Cutting-Plane Algorithm is several orders of magnitude faster than decomposition methods like svm light for large datasets." ] }
1302.0413
2953051570
The task of expert finding has been getting increasing attention in information retrieval literature. However, the current state-of-the-art is still lacking in principled approaches for combining different sources of evidence in an optimal way. This paper explores the usage of learning to rank methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph-structure with the citation patterns for the community of experts, and from profile information about the experts. Experiments made over a dataset of academic publications, for the area of Computer Science, attest for the adequacy of the proposed approaches.
Listwise approach - L2R4IR is addressed by taking an entire set of documents associated with a query as a single instance. These methods train a ranking function by minimizing a listwise loss function defined over the predicted list and the ground-truth list. Given feature vectors for a list of documents as input, the relevance degree of each document can be predicted with scoring functions that try to directly optimize the value of a particular information retrieval evaluation metric, averaged over all queries in the training data @cite_2 . Several different listwise methods have also been proposed, including SVM @math @cite_9 .
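Since listwise methods such as the SVM of @cite_9 directly target metrics like mean average precision, the metric itself is worth making concrete; below is the standard per-query average precision computation (a sketch of the evaluation measure, not of the optimizer; the relevance labels are illustrative):

```python
def average_precision(ranked_relevance):
    """AP for one ranked list; ranked_relevance[i] is 1 if the i-th
    returned document is relevant, else 0."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)  # precision at each relevant hit
    return sum(precisions) / hits if hits else 0.0

print(average_precision([1, 0, 1, 0]))  # -> (1/1 + 2/3) / 2 = 0.8333...
```

MAP is this quantity averaged over queries; its dependence on the whole ordering, rather than on individual pairs, is exactly what makes it a listwise objective.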
{ "cite_N": [ "@cite_9", "@cite_2" ], "mid": [ "2127176025", "2149427297" ], "abstract": [ "Machine learning is commonly used to improve ranked retrieval systems. Due to computational difficulties, few learning techniques have been developed to directly optimize for mean average precision (MAP), despite its widespread use in evaluating such systems. Existing approaches optimizing MAP either do not find a globally optimal solution, or are computationally expensive. In contrast, we present a general SVM learning algorithm that efficiently finds a globally optimal solution to a straightforward relaxation of MAP. We evaluate our approach using the TREC 9 and TREC 10 Web Track corpora (WT10g), comparing against SVMs optimized for accuracy and ROCArea. In most cases we show our method to produce statistically significant improvements in MAP scores.", "Learning to rank for Information Retrieval (IR) is a task to automatically construct a ranking model using training data, such that the model can sort new objects according to their degrees of relevance, preference, or importance. Many IR problems are by nature ranking problems, and many IR technologies can be potentially enhanced by using learning-to-rank techniques. The objective of this tutorial is to give an introduction to this research direction. Specifically, the existing learning-to-rank algorithms are reviewed and categorized into three approaches: the pointwise, pairwise, and listwise approaches. The advantages and disadvantages with each approach are analyzed, and the relationships between the loss functions used in these approaches and IR evaluation measures are discussed. Then the empirical evaluations on typical learning-to-rank methods are shown, with the LETOR collection as a benchmark dataset, which seems to suggest that the listwise approach be the most effective one among all the approaches. 
After that, a statistical ranking theory is introduced, which can describe different learning-to-rank algorithms, and be used to analyze their query-level generalization abilities. At the end of the tutorial, we provide a summary and discuss potential future work on learning to rank." ] }
1302.0418
2950650152
We study the power of Arthur-Merlin probabilistic proof systems in the data stream model. We show a canonical @math streaming algorithm for a wide class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an application, we give an @math streaming algorithm for the problem. Given a data stream of length @math over alphabet of size @math , the algorithm uses @math space and a proof of size @math , for every @math such that @math (where @math hides a @math factor). We also prove a lower bound, showing that every @math streaming algorithm for the problem that uses @math bits of space and a proof of size @math , satisfies @math . As a part of the proof of the lower bound for the problem, we show a new lower bound of @math on the @math communication complexity of the problem, and prove its tightness.
The data stream model has gained a great deal of attention after the publication of the seminal paper by Alon, Matias and Szegedy @cite_9 . In that work, the authors showed a lower bound of @math (where @math is the size of the alphabet) on the streaming complexity of the distinct elements problem (i.e., the computation of the number of distinct elements in a data stream) where the length of the input is at least proportional to @math .
{ "cite_N": [ "@cite_9" ], "mid": [ "2064379477" ], "abstract": [ "The frequency moments of a sequence containing mi elements of type i, for 1 i n, are the numbers Fk = P n=1 m k . We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F0;F1 and F2 can be approximated in logarithmic space, whereas the approximation of Fk for k 6 requires n (1) space. Applications to data bases are mentioned as well." ] }
1302.0418
2950650152
We study the power of Arthur-Merlin probabilistic proof systems in the data stream model. We show a canonical @math streaming algorithm for a wide class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an application, we give an @math streaming algorithm for the problem. Given a data stream of length @math over alphabet of size @math , the algorithm uses @math space and a proof of size @math , for every @math such that @math (where @math hides a @math factor). We also prove a lower bound, showing that every @math streaming algorithm for the problem that uses @math bits of space and a proof of size @math , satisfies @math . As a part of the proof of the lower bound for the problem, we show a new lower bound of @math on the @math communication complexity of the problem, and prove its tightness.
Following @cite_9 , there was a long line of theoretical research on approximating the number of distinct elements ( @cite_21 @cite_2 @cite_12 @cite_5 @cite_15 ; see @cite_10 for a survey of earlier results). Finally, Kane et al. @cite_15 gave the first optimal approximation algorithm for estimating the number of distinct elements in a data stream; for a data stream with an alphabet of size @math , given @math , their algorithm computes a @math multiplicative approximation using @math bits of space, with @math success probability. This result matches the tight lower bound of Indyk and Woodruff @cite_2 .
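The flavor of sublinear-space distinct-element estimation can be conveyed with a simple k-minimum-values sketch; this is a classical estimator in the spirit of the line of work above, not the optimal algorithm of Kane et al., and the hash choice and parameters are illustrative assumptions:

```python
import hashlib

def _h(x):
    """Deterministic hash of x into [0, 1)."""
    d = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def kmv_estimate(stream, k=64):
    """k-minimum-values estimate of the number of distinct elements,
    using O(k) space regardless of the stream length."""
    mins = set()
    for x in stream:
        mins.add(_h(x))
        if len(mins) > k:            # keep only the k smallest hashes
            mins.remove(max(mins))
    if len(mins) < k:                # fewer than k distinct values: exact
        return len(mins)
    return int((k - 1) / max(mins))  # k-th minimum ~ k / F0

# 1000 distinct values, each repeated 5 times.
stream = [i % 1000 for i in range(5000)]
est = kmv_estimate(stream, k=256)
```

The relative error decays roughly as one over the square root of k, so the space/accuracy tradeoff is explicit, in contrast with the exact counting that the lower bounds above show requires space linear in the alphabet size.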
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_2", "@cite_5", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2064379477", "1785933978", "2132822431", "2131709403", "2103126020", "1965972569", "1992363839" ], "abstract": [ "The frequency moments of a sequence containing mi elements of type i, for 1 i n, are the numbers Fk = P n=1 m k . We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F0;F1 and F2 can be approximated in logarithmic space, whereas the approximation of Fk for k 6 requires n (1) space. Applications to data bases are mentioned as well.", "We present three algorithms to count the number of distinct elements in a data stream to within a factor of 1 ± ?. Our algorithms improve upon known algorithms for this problem, and offer a spectrum of time space tradeoffs.", "We prove strong lower bounds for the space complexity of ( spl epsi , spl delta )-approximating the number of distinct elements F sub 0 in a data stream. Let m be the size of the universe from which the stream elements are drawn. We show that any one-pass streaming algorithm for ( spl epsi , spl delta )-approximating F sub 0 must use spl Omega (1 spl epsi sup 2 ) space when spl epsi = spl Omega (m sup -1 (9 + k) ), for any k > 0, improving upon the known lower bound of spl Omega (1 spl epsi ) for this range of spl epsi . This lower bound is tight up to a factor of log log m for small spl epsi and log 1 spl epsi for large spl epsi . Our lower bound is derived from a reduction from the one-way communication complexity of approximating a Boolean function in Euclidean space. The reduction makes use of a low-distortion embedding from an l sub 2 to l sub 1 norm.", "The Gap-Hamming-Distance problem arose in the context of proving space lower bounds for a number of key problems in the data stream model. 
In this problem, Alice and Bob have to decide whether the Hamming distance between their @math -bit input strings is large (i.e., at least @math ) or small (i.e., at most @math ); they do not care if it is neither large nor small. This @math gap in the problem specification is crucial for capturing the approximation allowed to a data stream algorithm. Thus far, for randomized communication, an @math lower bound on this problem was known only in the one-way setting. We prove an @math lower bound for randomized protocols that use any constant number of rounds. As a consequence we conclude, for instance, that @math -approximately counting the number of distinct elements in a data stream requires @math space, even with multiple (a constant number of) passes over the input stream. This extends earlier one-pass lower bounds, answering a long-standing open question. We obtain similar results for approximating the frequency moments and for approximating the empirical entropy of a data stream. In the process, we also obtain tight @math lower and upper bounds on the one-way deterministic communication complexity of the problem. Finally, we give a simple combinatorial proof of an @math lower bound on the one-way randomized communication complexity.", "We give the first optimal algorithm for estimating the number of distinct elements in a data stream, closing a long line of theoretical research on this problem begun by Flajolet and Martin in their seminal paper in FOCS 1983. This problem has applications to query optimization, Internet routing, network topology, and data mining. For a stream of indices in 1,...,n , our algorithm computes a (1 ± e)-approximation using an optimal O(1 e-2 + log(n)) bits of space with 2 3 success probability, where 0 We also give an algorithm to estimate the Hamming norm of a stream, a generalization of the number of distinct elements, which is useful in data cleaning, packet tracing, and database auditing. 
Our algorithm uses nearly optimal space, and has optimal O(1) update and reporting times.", "1 Introduction 2 Map 3 The Data Stream Phenomenon 4 Data Streaming: Formal Aspects 5 Foundations: Basic Mathematical Ideas 6 Foundations: Basic Algorithmic Techniques 7 Foundations: Summary 8 Streaming Systems 9 New Directions 10 Historic Notes 11 Concluding Remarks Acknowledgements References.", "The task of estimating the number of distinct values (DVs) in a large dataset arises in a wide variety of settings in computer science and elsewhere. We provide DV estimation techniques that are designed for use within a flexible and scalable \"synopsis warehouse\" architecture. In this setting, incoming data is split into partitions and a synopsis is created for each partition; each synopsis can then be used to quickly estimate the number of DVs in its corresponding partition. By combining and extending a number of results in the literature, we obtain both appropriate synopses and novel DV estimators to use in conjunction with these synopses. Our synopses can be created in parallel, and can then be easily combined to yield synopses and DV estimates for arbitrary unions, intersections or differences of partitions. Our synopses can also handle deletions of individual partition elements. We use the theory of order statistics to show that our DV estimators are unbiased, and to establish moment formulas and sharp error bounds. Based on a novel limit theorem, we can exploit results due to Cohen in order to select synopsis sizes when initially designing the warehouse. Experiments and theory indicate that our synopses and estimators lead to lower computational costs and more accurate DV estimates than previous approaches." ] }
1302.0418
2950650152
We study the power of Arthur-Merlin probabilistic proof systems in the data stream model. We show a canonical @math streaming algorithm for a wide class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an application, we give an @math streaming algorithm for the problem. Given a data stream of length @math over alphabet of size @math , the algorithm uses @math space and a proof of size @math , for every @math such that @math (where @math hides a @math factor). We also prove a lower bound, showing that every @math streaming algorithm for the problem that uses @math bits of space and a proof of size @math , satisfies @math . As a part of the proof of the lower bound for the problem, we show a new lower bound of @math on the @math communication complexity of the problem, and prove its tightness.
In a recent sequence of works, the data stream model was extended to support several interactive and non-interactive proof systems @cite_17 @cite_7 @cite_16 . The model of streaming algorithms with non-interactive proofs was first introduced in @cite_17 and extended in @cite_7 @cite_0 . In @cite_17 , the authors gave an optimal (up to polylogarithmic factors) algorithm for computing the @math -th frequency moment exactly, for every integer @math .
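The core non-interactive idea, checking a prover's claimed frequency vector against a constant-space polynomial fingerprint of the stream, can be sketched as follows. This is a toy illustration in the spirit of the annotation model (the verifier reads the full claimed vector, so it does not achieve the papers' space/proof-length tradeoffs); the field size and protocol details are assumptions:

```python
import random

P = (1 << 61) - 1  # a large prime modulus

def stream_fingerprint(stream, r):
    """Incrementally computes sum_j f_j * r^j mod P in O(1) field elements."""
    fp = 0
    for a in stream:
        fp = (fp + pow(r, a, P)) % P
    return fp

def verify_and_compute_Fk(stream, claimed_freqs, k, seed=0):
    """Accept the prover's claimed frequency vector iff its fingerprint
    matches the stream's at a random point, then return F_k from the claim."""
    rng = random.Random(seed)
    r = rng.randrange(1, P)
    fp_stream = stream_fingerprint(stream, r)
    fp_claim = sum(f * pow(r, j, P) for j, f in claimed_freqs.items()) % P
    if fp_stream != fp_claim:
        raise ValueError("proof rejected")
    return sum(f ** k for f in claimed_freqs.values())

stream = [1, 2, 2, 3, 3, 3]
honest = {1: 1, 2: 2, 3: 3}
assert verify_and_compute_Fk(stream, honest, k=2) == 1 + 4 + 9
```

A false claim changes the fingerprint polynomial, which can agree with the stream's at a random evaluation point only with probability bounded by its degree over the field size.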
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_7", "@cite_17" ], "mid": [ "2952062204", "", "2949292204", "2132588842" ], "abstract": [ "When delegating computation to a service provider, as in cloud computing, we seek some reassurance that the output is correct and complete. Yet recomputing the output as a check is inefficient and expensive, and it may not even be feasible to store all the data locally. We are therefore interested in proof systems which allow a service provider to prove the correctness of its output to a streaming (sublinear space) user, who cannot store the full input or perform the full computation herself. Our approach is two-fold. First, we describe a carefully chosen instantiation of one of the most efficient general-purpose constructions for arbitrary computations (streaming or otherwise), due to Goldwasser, Kalai, and Rothblum. This requires several new insights to make the methodology more practical. Our main contribution is in achieving a prover who runs in time O(S(n) log S(n)), where S(n) is the size of an arithmetic circuit computing the function of interest. Our experimental results demonstrate that a practical general-purpose protocol for verifiable computation may be significantly closer to reality than previously realized. Second, we describe techniques that achieve genuine scalability for protocols fine-tuned for specific important problems in streaming and database processing. Focusing in particular on non-interactive protocols for problems ranging from matrix-vector multiplication to bipartite perfect matching, we build on prior work to achieve a prover who runs in nearly linear-time, while obtaining optimal tradeoffs between communication cost and the user's working memory. Existing techniques required (substantially) superlinear time for the prover. 
We argue that even if general-purpose methods improve, fine-tuned protocols will remain valuable in real-world settings for key problems, and hence special attention to specific problems is warranted.", "", "Motivated by the trend to outsource work to commercial cloud computing services, we consider a variation of the streaming paradigm where a streaming algorithm can be assisted by a powerful helper that can provide annotations to the data stream. We extend previous work on such annotation models by considering a number of graph streaming problems. Without annotations, streaming algorithms for graph problems generally require significant memory; we show that for many standard problems, including all graph problems that can be expressed with totally unimodular integer programming formulations, only a constant number of hash values are needed for single-pass algorithms given linear-sized annotations. We also obtain a protocol achieving tradeoffs between annotation length and memory usage for matrix-vector multiplication; this result contributes to a trend of recent research on numerical linear algebra in streaming models.", "The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms be further reduced by enlisting a more powerful \"helper\" who can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a non-trivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. 
Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend -- including recent studies of multi-pass streaming, read write streams and randomly ordered streams -- of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right." ] }
1302.0441
2048408281
In linear inverse problems, we have data derived from a noisy linear transformation of some unknown parameters, and we wish to estimate these unknowns from the data. Separable inverse problems are a powerful generalization in which the transformation itself depends on additional unknown parameters and we wish to determine both sets of parameters simultaneously. When separable problems are solved by optimization, convergence can often be accelerated by elimination of the linear variables, a strategy which appears most prominently in the variable projection methods due to Golub and Pereyra. Existing variable elimination methods require an explicit formula for the optimal value of the linear variables, so they cannot be used in problems with Poisson likelihoods, bound constraints, or other important departures from least squares. To address this limitation, we propose a generalization of variable elimination in which standard optimization methods are modified to behave as though a variable has been eliminated. We verify that this approach is a proper generalization by using it to re-derive several existing variable elimination techniques. We then extend the approach to bound-constrained and Poissonian problems, showing in the process that many of the best features of variable elimination methods can be duplicated in our framework. Tests on difficult exponential sum fitting and blind deconvolution problems indicate that the proposed approach can have significant speed and robustness advantages over standard methods.
While the relationship between full and reduced update methods has been explored several times, the relationship established here is a major extension of previous work. In @cite_9 Ruhe and Wedin developed the connection between full and reduced update Newton and Gauss-Newton methods, and semi-reduced methods are described by Smyth as or methods in @cite_51 . Our work extends theirs in that we consider general Newton-type methods, nonquadratic likelihoods, and the effect of globalization strategies, such as line search or trust regions, which ensure convergence to a stationary point from arbitrary initialization. A very general theoretical analysis of the relationship between the full and reduced problems is given in @cite_25 , but there is little discussion of practical algorithms and no mention of semi-reduced methods.
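The variable elimination idea underlying these reduced methods can be made concrete on a one-term separable model y ≈ c·exp(−θt): for fixed θ the optimal linear coefficient c has a closed form, leaving a reduced problem in θ alone. The sketch below uses a coarse grid scan over θ rather than a Newton-type method, and the model and data are synthetic:

```python
import numpy as np

def reduced_objective(theta, t, y):
    """Eliminate the linear variable: for fixed theta the best c has a
    closed form, so only the nonlinear variable theta remains."""
    a = np.exp(-theta * t)      # model column A(theta)
    c = (a @ y) / (a @ a)       # optimal linear coefficient for this theta
    r = y - c * a
    return r @ r, c

t = np.linspace(0.0, 2.0, 50)
y = 2.5 * np.exp(-1.3 * t)      # noiseless synthetic data: c=2.5, theta=1.3

thetas = np.linspace(0.1, 3.0, 300)
best = min(thetas, key=lambda th: reduced_objective(th, t, y)[0])
_, c_best = reduced_objective(best, t, y)
```

Any optimizer applied to the reduced objective behaves as though c had been eliminated, which is the effect that full, reduced, and semi-reduced methods realize to different degrees.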
{ "cite_N": [ "@cite_9", "@cite_51", "@cite_25" ], "mid": [ "2040763577", "2017638737", "2587432537" ], "abstract": [ "Iterative algorithms of Gauss–Newton type for the solution of nonlinear least squares problems are considered. They separate the variables into two sets in such a way that in each iteration, optimization with respect to the first set is performed first, and corrections to those of the second after that. The linear-nonlinear case, where the first set consists of variables that occur linearly, is given special attention, and a new algorithm is derived which is simpler to apply than the variable projection algorithm as described by Golub and Pereyra, and can be performed with no more arithmetical operations than the unseparated Gauss–Newton algorithm. A detailed analysis of the asymptotical convergence properties of both separated and unseparated algorithms is performed. It is found that they have comparable rates of convergence, and all converge almost quadratically for almost compatible problems. Simpler separation schemes, on the other hand, converge only linearly. An efficient and simple computer impleme...", "There are a variety of methods in the literature which seek to make iterative estimation algorithms more manageable by breaking the iterations into a greater number of simpler or faster steps. Those algorithms which deal at each step with a proper subset of the parameters are called in this paper partitioned algorithms. Partitioned algorithms in effect replace the original estimation problem with a series of problems of lower dimension. The purpose of the paper is to characterize some of the circumstances under which this process of dimension reduction leads to significant benefits.", "" ] }
1302.0441
2048408281
In linear inverse problems, we have data derived from a noisy linear transformation of some unknown parameters, and we wish to estimate these unknowns from the data. Separable inverse problems are a powerful generalization in which the transformation itself depends on additional unknown parameters and we wish to determine both sets of parameters simultaneously. When separable problems are solved by optimization, convergence can often be accelerated by elimination of the linear variables, a strategy which appears most prominently in the variable projection methods due to Golub and Pereyra. Existing variable elimination methods require an explicit formula for the optimal value of the linear variables, so they cannot be used in problems with Poisson likelihoods, bound constraints, or other important departures from least squares. To address this limitation, we propose a generalization of variable elimination in which standard optimization methods are modified to behave as though a variable has been eliminated. We verify that this approach is a proper generalization by using it to re-derive several existing variable elimination techniques. We then extend the approach to bound-constrained and Poissonian problems, showing in the process that many of the best features of variable elimination methods can be duplicated in our framework. Tests on difficult exponential sum fitting and blind deconvolution problems indicate that the proposed approach can have significant speed and robustness advantages over standard methods.
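The variable elimination strategy described in the abstract above can be sketched on a toy separable least-squares fit (the model, data, and parameter names below are illustrative, not code from the cited work): the linear coefficients are eliminated through a least-squares solve, and the reduced objective is then minimized over the nonlinear parameter alone, in the spirit of Golub and Pereyra's variable projection.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy separable model: y ~ c1*exp(-theta*t) + c2, linear in (c1, c2),
# nonlinear in theta. True values: c1=2, c2=0.5, theta=1.5.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 50)
y = 2.0 * np.exp(-1.5 * t) + 0.5 + 0.01 * rng.standard_normal(t.size)

def reduced_objective(theta):
    # Eliminate the linear variables: c(theta) = A(theta)^+ y,
    # then evaluate the residual of the reduced problem in theta only.
    A = np.column_stack([np.exp(-theta * t), np.ones_like(t)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ c
    return 0.5 * (r @ r)

res = minimize_scalar(reduced_objective, bounds=(0.1, 5.0), method="bounded")
theta_hat = res.x  # close to the true decay rate 1.5
```

The reduced problem is one-dimensional here, so a scalar solver suffices; in general the same elimination turns an (n + m)-dimensional search into an m-dimensional one over the nonlinear variables.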
Structured linear algebra techniques such as block Gaussian elimination are known to be useful @cite_18 @cite_10 , but they are underutilized in practice. This is apparent from the fact that most optimization codes employ a limited set of broadly applicable linear algebra techniques @cite_7 , and very few are designed to accommodate user-defined linear solvers such as the ones we propose in . We contend that since significant speed gains are attainable with special linear solvers, optimization algorithm implementations should accommodate user-customized linear algebra by adding appropriate callback and reverse communication protocols.
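The block Gaussian elimination technique mentioned above can be sketched as a generic Schur-complement solve on a hypothetical symmetric block system (sizes and matrices below are illustrative, not taken from the cited works):

```python
import numpy as np

# Solve [[A, B], [B.T, D]] [x; z] = [f; g] by block Gaussian elimination:
# solve with the large block A, form the Schur complement S = D - B.T A^{-1} B,
# solve the small reduced system for z, then back-substitute for x.
rng = np.random.default_rng(1)
n, m = 6, 2
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)  # SPD large block
B = rng.standard_normal((n, m))
D = rng.standard_normal((m, m)); D = D @ D.T + m * np.eye(m)  # SPD small block
f = rng.standard_normal(n); g = rng.standard_normal(m)

Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = D - B.T @ Ainv_B                      # Schur complement of A
z = np.linalg.solve(S, g - B.T @ Ainv_f)  # reduced system in the small variables
x = Ainv_f - Ainv_B @ z                   # back-substitution

# Agrees with solving the full coupled system directly.
K = np.block([[A, B], [B.T, D]])
full = np.linalg.solve(K, np.concatenate([f, g]))
```

The payoff in practice comes when solves with A are cheap (sparse, structured, or already factored), which is exactly the situation a user-defined linear solver hook would exploit.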
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_7" ], "mid": [ "1978006681", "1542938076", "" ], "abstract": [ "We propose an approximate Newton method for solving the coupled nonlinear system @math and @math where @math , @math , @math and @math . The method involves applying the basic iteration S of a general solver for the equation @math , withtfixed. It is therefore well suited for problems for which such a solver already exists or can be implemented more efficiently than a solver for the coupled system. We derive conditions for S under which the method is locally convergent. Basically, if S is sufficiently contractive for G, then convergence for the coupled system is guaranteed. Otherwise, we show how to construct an @math from S for which convergence is assured. These results are applied to continuation methods where N represents a pseudo-arclength condition. We show that under certain conditions the algorithm converges if S is convergent for G. Numerical results are given for a two-level nonlinear multi-grid solver applied ...", "Historical Introduction: Issai Schur and the Early Development of the Schur Complement.- Basic Properties of the Schur Complement.- Eigenvalue and Singular Value Inequalities of Schur Complements.- Block Matrix Techniques.- Closure Properties.- Schur Complements and Matrix Inequalities: Operator-Theoretic Approach.- Schur complements in statistics and probability.- Schur Complements and Applications in Numerical Analysis.", "" ] }
1302.0441
2048408281
In linear inverse problems, we have data derived from a noisy linear transformation of some unknown parameters, and we wish to estimate these unknowns from the data. Separable inverse problems are a powerful generalization in which the transformation itself depends on additional unknown parameters and we wish to determine both sets of parameters simultaneously. When separable problems are solved by optimization, convergence can often be accelerated by elimination of the linear variables, a strategy which appears most prominently in the variable projection methods due to Golub and Pereyra. Existing variable elimination methods require an explicit formula for the optimal value of the linear variables, so they cannot be used in problems with Poisson likelihoods, bound constraints, or other important departures from least squares. To address this limitation, we propose a generalization of variable elimination in which standard optimization methods are modified to behave as though a variable has been eliminated. We verify that this approach is a proper generalization by using it to re-derive several existing variable elimination techniques. We then extend the approach to bound-constrained and Poissonian problems, showing in the process that many of the best features of variable elimination methods can be duplicated in our framework. Tests on difficult exponential sum fitting and blind deconvolution problems indicate that the proposed approach can have significant speed and robustness advantages over standard methods.
Trial point adjustment is a key idea in the two-step line search and trust region algorithms of @cite_49 and @cite_17 . General convergence results are proven in @cite_30 for 'accelerated' line search and trust region methods employing trial point adjustment. These works are not concerned with separable inverse problems or the relationship with reduced methods.
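The trial-point-adjustment idea can be sketched in a few lines; this is an illustrative toy (function and parameter names are ours, not from the cited papers): an Armijo line-search iterate is computed first, then a second adjustment step is applied, and the adjusted point is accepted only if it achieves at least as much decrease as the conventional iterate.

```python
import numpy as np

def accelerated_descent(f, grad, x0, adjust, c1=1e-4, beta=0.5, iters=50):
    """Armijo line search whose conventional iterate may be replaced by any
    adjusted trial point producing at least as much decrease (the acceptance
    rule behind 'accelerated' methods). `adjust` is a user-supplied second step."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        while f(x - t * g) > f(x) - c1 * t * (g @ g):  # backtracking
            t *= beta
        x_conv = x - t * g                 # conventional line-search iterate
        x_try = adjust(x_conv)             # second step / trial point adjustment
        # Accept the adjustment only if it matches the conventional decrease.
        x = x_try if f(x_try) <= f(x_conv) else x_conv
    return x

# Quadratic test problem; the adjustment simply shrinks toward the minimizer.
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x_star = accelerated_descent(f, grad, np.array([3.0, -2.0]), adjust=lambda x: 0.5 * x)
```

Because the adjusted point is only accepted when it does no worse than the line-search iterate, the standard descent-based convergence argument is unaffected, which is the essence of the convergence results cited above.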
{ "cite_N": [ "@cite_30", "@cite_49", "@cite_17" ], "mid": [ "2116901831", "1968070980", "2152119413" ], "abstract": [ "In numerical optimization, line-search and trust-region methods are two important classes of descent schemes, with well-understood global convergence properties. We say that these methods are “accelerated” when the conventional iterate is replaced by any point that produces at least as much of a decrease in the cost function as a fixed fraction of the decrease produced by the conventional iterate. A detailed convergence analysis reveals that global convergence properties of line-search and trust-region methods still hold when the methods are accelerated. The analysis is performed in the general context of optimization on manifolds, of which optimization in @math is a particular case. This general convergence analysis sheds new light on the behavior of several existing algorithms.", "In this paper we propose extensions to trust-region algorithms in which the classical step is augmented with a second step that we insist yields a decrease in the value of the objective function. The classical convergence theory for trust-region algorithms is adapted to this class of two-step algorithms. The algorithms can be applied to any problem with variable(s) whose contribution to the objective function is a known functional form. In the nonlinear programming package LANCELOT, they have been applied to update slack variables and variables introduced to solve minimax problems, leading to enhanced optimization efficiency. Extensive numerical results are presented to show the effectiveness of these techniques.", "This paper is concerned with the problem min @math where X is a convex subset of a linear space H, and f is a smooth real-valued function on H. 
We propose the class of methods @math , where P denotes projection on X with respect to a Hilbert space norm @math , @math denotes the Frechet derivative of f at @math with respect to another Hilbert space norm @math on H, and @math is a positive scalar stepsize. We thus remove an important restriction in the original proposal of Goldstein [1] and Levitin and Poljak [2], where the norms @math and @math must be the same. It is therefore possible to match the norm @math with the structure of X so that the projection operation is simplified while at the same time reserving the option to choose @math on the basis of approximations to the Hessian of f so as to attain a typically superlinear rate of convergence. The resulting methods are particularly attrac..." ] }
1302.0756
1974661392
We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.
The key idea in the proposed SCA schemes, e.g., ), is to convexify the nonconvex part of @math via partial linearization of @math , resulting in the term @math . In the same spirit of @cite_7 @cite_0 @cite_11 , it is not difficult to show that one can generalize this idea and replace the linear term @math in ) with a nonlinear scalar function @math . All the results presented so far are still valid provided that @math enjoys the following properties: for all @math ,
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_11" ], "mid": [ "2112820013", "1969648546", "" ], "abstract": [ "In wireless cellular or ad hoc networks where Quality of Service (QoS) is interference-limited, a variety of power control problems can be formulated as nonlinear optimization with a system-wide objective, e.g., maximizing the total system throughput or the worst user throughput, subject to QoS constraints from individual users, e.g., on data rate, delay, and outage probability. We show that in the high Signal-to- interference Ratios (SIR) regime, these nonlinear and apparently difficult, nonconvex optimization problems can be transformed into convex optimization problems in the form of geometric programming; hence they can be very efficiently solved for global optimality even with a large number of users. In the medium to low SIR regime, some of these constrained nonlinear optimization of power control cannot be turned into tractable convex formulations, but a heuristic can be used to compute in most cases the optimal solution by solving a series of geometric programs through the approach of successive convex approximation. While efficient and robust algorithms have been extensively studied for centralized solutions of geometric programs, distributed algorithms have not been explored before. We present a systematic method of distributed algorithms for power control that is geometric-programming-based. These techniques for power control, together with their implications to admission control and pricing in wireless networks, are illustrated through several numerical examples.", "Inner approximation algorithms have had two major roles in the mathematical programming literature. Their first role was in the construction of algorithms for the decomposition of large-scale mathematical programs, such as in the Dantzig-Wolfe decomposition principle. 
However, recently they have been used in the creation of algorithms that locate Kuhn-Tucker solutions to nonconvex programs. Avriel and Williams' Avriel, M., A. C. Williams. 1970. Complementary geometric programming. SIAM J. Appl. Math.19 125-141. complementary geometric programming algorithm, Duffin and Peterson's Duffin, R. J., E. L. Peterson. 1972. Reversed geometric programs treated by harmonic means. Indiana Univ. Math. J.22 531-550. reversed geometric programming algorithms, Reklaitis and Wilde's Reklaitis, G. V., D. J. Wilde. 1974. Geometric programming via a primal auxiliary problem. AIIE Trans.6 308-317. primal reversed geometric programming algorithm, and Bitran and Novaes' Bitran, G. R., A. G. Novaes. 1973. Linear programming with a fractional objective function. Opns. Res.21 22-29. linear fractional programming algorithm are all examples of this class of inner approximation algorithms. A sequence of approximating convex programs are solved in each of these algorithms. Rosen's Rosen, J. B. 1966. Iterative solution of nonlinear optimal control problems. SIAM J. Control4 223-244. inner approximation algorithm is a special case of the general inner approximation algorithm presented in this note.", "" ] }
1302.0756
1974661392
We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.
Similar conditions can be written in the real case for the nonlinear function @math replacing the linear pricing @math . It is interesting to compare P1-P3 with conditions in @cite_7 @cite_0 @cite_11 . First of all, our conditions do not require that the approximation function is a global upper bound of the original sum-utility function, a constraint that remains elusive for sum-utility problems with no special structure. Second, even when the aforementioned constraint can be met, it is not always guaranteed that the resulting convex subproblems are decomposable across the users, implying that a centralized implementation might be required. Third, SCA algorithms @cite_7 @cite_0 @cite_11 , even when distributed, are generally sequential schemes (unless the sum-utility has a special structure). On the contrary, the algorithms proposed in this paper do not suffer from any of the above drawbacks, which enlarges substantially the class of (large scale) nonconvex problems solvable using our framework.
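The partial-linearization surrogate at the heart of the SCA schemes above can be illustrated on a hypothetical one-dimensional toy problem (our own choice of g, h, and tau, not from the paper): the convex part is kept intact, the nonconvex part is linearized at the current iterate, and a proximal term makes the surrogate strongly convex.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy SCA iteration for f(x) = g(x) + h(x) with g convex and h nonconvex:
# each step minimizes the surrogate g(z) + h'(x_k)(z - x_k) + tau/2 (z - x_k)^2,
# i.e., h is replaced by its linearization at the current point x_k.
g = lambda x: x**4            # convex part (kept)
h = lambda x: -2.0 * x**2     # concave part (linearized)
dh = lambda x: -4.0 * x
tau = 1.0

x = 0.3  # initialization (x = 0 is itself stationary, so start away from it)
for _ in range(100):
    surrogate = lambda z, xk=x: g(z) + dh(xk) * (z - xk) + 0.5 * tau * (z - xk) ** 2
    x = minimize_scalar(surrogate, bounds=(-3.0, 3.0), method="bounded").x
# The iterates increase monotonically toward the local minimizer x = 1
# of f(x) = x^4 - 2x^2.
```

Note the surrogate is not a global upper bound on f; only tangency and strong convexity at the current iterate are used, mirroring the relaxation of the upper-bound requirement discussed above.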
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_11" ], "mid": [ "2112820013", "1969648546", "" ], "abstract": [ "In wireless cellular or ad hoc networks where Quality of Service (QoS) is interference-limited, a variety of power control problems can be formulated as nonlinear optimization with a system-wide objective, e.g., maximizing the total system throughput or the worst user throughput, subject to QoS constraints from individual users, e.g., on data rate, delay, and outage probability. We show that in the high Signal-to- interference Ratios (SIR) regime, these nonlinear and apparently difficult, nonconvex optimization problems can be transformed into convex optimization problems in the form of geometric programming; hence they can be very efficiently solved for global optimality even with a large number of users. In the medium to low SIR regime, some of these constrained nonlinear optimization of power control cannot be turned into tractable convex formulations, but a heuristic can be used to compute in most cases the optimal solution by solving a series of geometric programs through the approach of successive convex approximation. While efficient and robust algorithms have been extensively studied for centralized solutions of geometric programs, distributed algorithms have not been explored before. We present a systematic method of distributed algorithms for power control that is geometric-programming-based. These techniques for power control, together with their implications to admission control and pricing in wireless networks, are illustrated through several numerical examples.", "Inner approximation algorithms have had two major roles in the mathematical programming literature. Their first role was in the construction of algorithms for the decomposition of large-scale mathematical programs, such as in the Dantzig-Wolfe decomposition principle. 
However, recently they have been used in the creation of algorithms that locate Kuhn-Tucker solutions to nonconvex programs. Avriel and Williams' Avriel, M., A. C. Williams. 1970. Complementary geometric programming. SIAM J. Appl. Math.19 125-141. complementary geometric programming algorithm, Duffin and Peterson's Duffin, R. J., E. L. Peterson. 1972. Reversed geometric programs treated by harmonic means. Indiana Univ. Math. J.22 531-550. reversed geometric programming algorithms, Reklaitis and Wilde's Reklaitis, G. V., D. J. Wilde. 1974. Geometric programming via a primal auxiliary problem. AIIE Trans.6 308-317. primal reversed geometric programming algorithm, and Bitran and Novaes' Bitran, G. R., A. G. Novaes. 1973. Linear programming with a fractional objective function. Opns. Res.21 22-29. linear fractional programming algorithm are all examples of this class of inner approximation algorithms. A sequence of approximating convex programs are solved in each of these algorithms. Rosen's Rosen, J. B. 1966. Iterative solution of nonlinear optimal control problems. SIAM J. Control4 223-244. inner approximation algorithm is a special case of the general inner approximation algorithm presented in this note.", "" ] }
1302.0189
2015275582
We study non-adaptive pooling strategies for detection of rare faulty items. Given a binary sparse N dimensional signal x, how to construct a sparse binary M × N pooling matrix F such that the signal can be reconstructed from the smallest possible number M of measurements y = Fx? We show that a very small number of measurements is possible for random spatially coupled design of pools F. Our design might find application in genetic screening or compressed genotyping. We show that our results are robust with respect to the uncertainty in the matrix F when some elements are mistaken.
This problem is reminiscent of compressed sensing @cite_13 @cite_12 which is designed to measure signals directly in their compressed form. In fact our problem is compressed sensing with the additional constraint that the signal is binary and with a sparse binary measurement matrix @math . We also consider matrix uncertainty: some elements assumed to be @math are in fact @math with probability @math . The field of low-density parity check (LDPC) error correcting codes @cite_15 provides information about sparse measurement matrices with which tractable reconstruction can be achieved. Indeed, the only difference between non-adaptive group testing with linear tests and LDPC is that the algebra is over integers in group testing instead of @math in LDPC. The spatially coupled pooling design we study here was first discovered and validated in the field of error correcting codes @cite_28 @cite_19 @cite_30 @cite_16 .
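The integer-versus-GF(2) distinction drawn above can be made concrete with a toy pooling setup (sizes and the random matrix construction are illustrative, not the spatially coupled design of the paper): the same sparse binary matrix yields integer pool counts in group testing but parity checks when reduced mod 2.

```python
import numpy as np

# Toy non-adaptive pooling: a sparse binary N-dimensional signal x with K
# faulty items, measured through a sparse binary M x N pooling matrix F.
rng = np.random.default_rng(2)
N, M, K = 40, 12, 3
x = np.zeros(N, dtype=int)
x[rng.choice(N, K, replace=False)] = 1   # sparse binary signal

# Sparse pooling matrix: each item participates in exactly 3 random pools.
F = np.zeros((M, N), dtype=int)
for j in range(N):
    F[rng.choice(M, 3, replace=False), j] = 1

y_pool = F @ x       # group-testing measurements: sums over the integers
y_ldpc = y_pool % 2  # the same checks viewed over GF(2), as in LDPC codes
```

Since each item sits in exactly 3 pools, the integer measurements carry more information than their parities: y_pool counts how many faulty items each pool contains, while y_ldpc only records the count's parity.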
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_19", "@cite_15", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2085903266", "1991528082", "2123119950", "", "1618499549", "2129131372", "2013366998" ], "abstract": [ "In this paper, we perform an iterative decoding threshold analysis of LDPC block code ensembles formed by terminating (J,K)-regular and irregular AR4JA-based LDPC convolutional codes. These ensembles have minimum distance growing linearly with block length and their thresholds approach the Shannon limit as the termination factor tends to infinity. Results are presented for various ensembles and termination factors, which allow a code designer to trade-off between distance growth rate and threshold.", "We present a class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes. The performance of this decoding is close to the performance of turbo decoding. Our simulation shows that for the rate R=1 2 binary codes, the performance is substantially better than for ordinary convolutional codes with the same decoding complexity per information bit. As an example, we constructed convolutional codes with memory M=1025, 2049, and 4097 showing that we are about 1 dB from the capacity limit at a bit-error rate (BER) of 10 sup -5 and a decoding complexity of the same magnitude as a Viterbi decoder for codes having memory M=10.", "An iterative decoding threshold analysis for terminated regular LDPC convolutional (LDPCC) codes is presented. Using density evolution techniques, the convergence behavior of an iterative belief propagation decoder is analyzed for the binary erasure channel and the AWGN channel with binary inputs. 
It is shown that for a terminated LDPCC code ensemble, the thresholds are better than for corresponding regular and irregular LDPC block codes.", "", "Convolutional LDPC ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing as a function of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism which explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of the individual code structure has the effect of increasing the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum-a-posteriori (MAP) threshold of the underlying ensemble. For this reason we call this phenomenon “threshold saturation”. This gives an entirely new way of approaching capacity. One significant advantage of such a construction is that one can create capacity-approaching ensembles with an error correcting radius which is increasing in the blocklength. Our proof makes use of the area theorem of the BP-EXIT curve and the connection between the MAP and BP threshold recently pointed out by Measson, Montanari, Richardson, and Urbanke. Although we prove the connection between the MAP and the BP threshold only for a very specific ensemble and only for the binary erasure channel, empirically the same statement holds for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar collapse of thresholds occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms as well as to new techniques for analysis.", "This paper considers a natural error correcting problem with real valued input output. 
We wish to recover an input vector f spl isin R sup n from corrupted measurements y=Af+e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the spl lscr sub 1 -minimization problem ( spl par x spl par sub spl lscr 1 := spl Sigma sub i |x sub i |) min(g spl isin R sup n ) spl par y - Ag spl par sub spl lscr 1 provided that the support of the vector of errors is not too large, spl par e spl par sub spl lscr 0 :=| i:e sub i spl ne 0 | spl les spl rho spl middot m for some spl rho >0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of spl lscr sub 1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.", "In the context of the compressed sensing problem, we propose a new ensemble of sparse random matrices which allow one (i) to acquire and compress a ρ 0 -sparse signal of length N in a time linear in N and (ii) to perfectly recover the original signal, compressed at a rate α, by using a message passing algorithm (Expectation Maximization Belief Propagation) that runs in a time linear in N. 
In the large N limit, the scheme proposed here closely approaches the theoretical bound ρ 0 = α, and so it is both optimal and efficient (linear time complexity). More generally, we show that several ensembles of dense random matrices can be converted into ensembles of sparse random matrices, having the same thresholds, but much lower computational complexity." ] }
1302.0189
2015275582
We study non-adaptive pooling strategies for detection of rare faulty items. Given a binary sparse N dimensional signal x, how to construct a sparse binary M × N pooling matrix F such that the signal can be reconstructed from the smallest possible number M of measurements y = Fx? We show that a very small number of measurements is possible for random spatially coupled design of pools F. Our design might find application in genetic screening or compressed genotyping. We show that our results are robust with respect to the uncertainty in the matrix F when some elements are mistaken.
The reconstruction algorithm that is most commonly used in compressed sensing and that has also been discussed several times for reconstruction in group testing and pooling experiments @cite_4 @cite_17 @cite_7 is based on a linear relaxation of the problem to real signal components @math . One then minimizes the @math -norm of the signal under the constraints @math . This is a convex problem that can be solved efficiently using linear programming. In what follows we will use the @math reconstruction as a reference benchmark to demonstrate the improvement that can be achieved using our pooling design and the belief propagation based reconstruction.
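The l1-relaxation benchmark described above amounts to a small linear program; here is a minimal sketch with scipy on a random toy instance (our own sizes and matrix construction, not the paper's pooling design). Since the relaxed variables are constrained to [0, 1], the l1-norm is just their sum, so the relaxation is directly an LP.

```python
import numpy as np
from scipy.optimize import linprog

# Relax x in {0,1}^N to 0 <= x <= 1 and minimize sum(x) subject to F x = y.
rng = np.random.default_rng(3)
N, M, K = 30, 15, 2
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = 1.0
F = (rng.random((M, N)) < 0.3).astype(float)   # random binary pooling matrix
y = F @ x_true

res = linprog(c=np.ones(N), A_eq=F, b_eq=y, bounds=[(0.0, 1.0)] * N)
x_hat = res.x  # approximates x_true when M is large enough relative to K
```

This solver-based relaxation is what the belief-propagation reconstruction on the spatially coupled design is benchmarked against.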
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_17" ], "mid": [ "2112118326", "2142707529", "2124036006" ], "abstract": [ "Traditionally, group testing is a design problem. The goal is to design an optimally efficient set of tests of items such that the test results contain enough information to determine a small subset of items of interest. It has its roots in the statistics community and was originally designed for the selective service during World War II to remove men with syphilis from the draft. It appears in many forms, including coin-weighing problems, experimental designs, and public health. We are interested in both the design of tests and the design of an efficient algorithm that works with the tests to determine the group of interest because many of the same techniques that are useful for designing tests are also used to solve algorithmic problems in compressive sensing, as well as to analyze and recover statistical quantities from streaming data. This article is an expository article, with the purpose of examining the relationship between group testing and compressive sensing, along with their applications and connections to sparse function learning.", "Identification of rare variants by resequencing is important both for detecting novel variations and for screening individuals for known disease alleles. New technologies enable low-cost resequencing of target regions, although it is still prohibitive to test more than a few individuals. We propose a novel pooling design that enables the recovery of novel or known rare alleles and their carriers in groups of individuals. The method is based on a Compressed Sensing (CS) approach, which is general, simple and efficient. CS allows the use of generic algorithmic tools for simultaneous identification of multiple variants and their carriers. 
We model the experimental procedure and show via computer simulations that it enables the recovery of rare alleles and their carriers in larger groups than were possible before. Our approach can also be combined with barcoding techniques to provide a feasible solution based on current resequencing costs. For example, when targeting a small enough genomic region (∼100 bp) and using only ∼10 sequencing lanes and ∼10 distinct barcodes per lane, one recovers the identity of 4 rare allele carriers out of a population of over 4000 individuals. We demonstrate the performance of our approach over several publicly available experimental data sets.", "Detection of rare variants by resequencing is important for the identification of individuals carrying disease variants. Rapid sequencing by new technologies enables low-cost resequencing of target regions, although it is still prohibitive to test more than a few individuals. In order to improve cost trade-offs, it has recently been suggested to apply pooling designs which enable the detection of carriers of rare alleles in groups of individuals. However, this was shown to hold only for a relatively low number of individuals in a pool, and requires the design of pooling schemes for particular cases. We propose a novel pooling design, based on a compressed sensing approach, which is both general, simple and efficient. We model the experimental procedure and show via computer simulations that it enables the recovery of rare allele carriers out of larger groups than were possible before, especially in situations where high coverage is obtained for each individual. Our approach can also be combined with barcoding techniques to enhance performance and provide a feasible solution based on current resequencing costs. 
For example, when targeting a small enough genomic region ( 100 base-pairs) and using only 10 sequencing lanes and 10 distinct barcodes, one can recover the identity of 4 rare allele carriers out of a population of over 4000 individuals." ] }
1302.0189
2015275582
We study non-adaptive pooling strategies for detection of rare faulty items. Given a binary sparse N dimensional signal x, how to construct a sparse binary M × N pooling matrix F such that the signal can be reconstructed from the smallest possible number M of measurements y = Fx? We show that a very small number of measurements is possible for random spatially coupled design of pools F. Our design might find application in genetic screening or compressed genotyping. We show that our results are robust with respect to the uncertainty in the matrix F when some elements are mistaken.
Note that when the number of faulty items @math is much smaller than @math , then another line of work should be considered. The best polynomial time non-adaptive algorithms known for the coin weighing problem then need @math measurements @cite_10 . Our approach is thus useful only in regimes where the number of faulty items is larger than @math . Another problem that is closely related to non-adaptive pooling as considered here is the sparse code division multiple access (CDMA) method @cite_21 @cite_20 ; with the difference that the signal @math is not sparse and that there is usually a considerable Gaussian additive noise on the measurement vector @math . The goal in CDMA is not to minimize the number of measurements @math (the number of chips), but to support the largest possible amount of noise. Spatial coupling was investigated for (dense) CDMA in @cite_8 @cite_5 .
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_5", "@cite_10", "@cite_20" ], "mid": [ "2154935897", "2078780937", "2164196863", "2061761364", "2144015943" ], "abstract": [ "Demodulation in a random multiple access channel is considered where the signals are chosen uniformly randomly with unit energy. It is shown that by lifting (replicating) the graph of this system and randomizing the graph connections, a simple iterative cancellation demodulator achieves the same performance as an optimal symbol-by-symbol detector of the original system. The iterative detector has a complexity that is linear in the number of users, while the direct optimal approach is known to be NP-hard. However, the maximal system load of this lifted graph is limited to @math , which can go to infinity as the SNR goes to infinity. Our results apply to several well-documented system proposals, such as interleave-division multiple access, partitioned spreading, and certain forms of multiple-input multiple-output communications.", "Sparse code division multiple access (CDMA), a variation on the standard CDMA method in which the spreading (signature) matrix contains only a relatively small number of nonzero elements, is presented and analysed using methods of statistical physics. The analysis provides results on the performance of maximum likelihood decoding for sparse spreading codes in the large system limit. We present results for both cases of regular and irregular spreading matrices for the binary additive white Gaussian noise channel (BIAWGN) with a comparison to the canonical (dense) random spreading code. © 2007 IOP Publishing Ltd.", "proved that the belief-propagation (BP) threshold for low-density parity-check codes can be boosted up to the maximum-a-posteriori (MAP) threshold by spatial coupling. In this paper, spatial coupling is applied to randomly-spread code-division multiple-access (CDMA) systems in order to improve the performance of BP-based multiuser detection (MUD). 
Spatially-coupled CDMA systems can be regarded as multi-code CDMA systems with two transmission phases. The large-system analysis shows that spatial coupling can improve the BP performance, while there is a gap between the BP performance and the individually-optimal (IO) performance.", "We shall prove a generalization of a classic inequality of Erdős and Turán on B_2-sequences of integers to B_2-sequences of vectors with integer components. We improve a recent result by the author when the dimension tends to infinity.", "Code-division multiple access (CDMA) is the basis of a family of advanced air interfaces in current and future generation networks. The benefits promised by CDMA have not been fully realized partly due to the prohibitive complexity of optimal detection and decoding of many users communicating simultaneously using the same frequency band. From both theoretical and practical perspectives, this paper advocates a new paradigm of CDMA with sparse spreading sequences, which enables near-optimal multiuser detection using belief propagation (BP) with low complexity. The scheme is in part inspired by capacity-approaching low-density parity-check (LDPC) codes and the success of iterative decoding techniques. Specifically, it is shown that BP-based detection is optimal in the large-system limit under many practical circumstances, which is a unique advantage of sparsely spread CDMA systems. Moreover, it is shown that, from the viewpoint of an individual user, the CDMA channel is asymptotically equivalent to a scalar Gaussian channel with some degradation in the signal-to-noise ratio (SNR). The degradation factor, known as the multiuser efficiency, can be determined from a fixed-point equation. The results in this paper apply to a broad class of sparse, semi-regular CDMA systems with arbitrary input and power distribution. 
Numerical results support the theoretical findings for systems of moderate size, which further demonstrate the appeal of sparse spreading in practical applications." ] }
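The spatially coupled BP decoder described in the record above is involved; as a hedged illustration of just the measurement model y = Fx with a sparse binary pooling matrix F, here is a minimal sketch using the classical COMP elimination rule from group testing. The parameters and the COMP rule are illustrative choices, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 200, 40, 3              # items, pools, and number of rare "defectives"

# Hidden sparse signal x: K carriers among N individuals.
x = np.zeros(N, dtype=int)
x[rng.choice(N, size=K, replace=False)] = 1

# Sparse binary pooling matrix F: each item joins 5 random pools.
F = np.zeros((M, N), dtype=int)
for j in range(N):
    F[rng.choice(M, size=5, replace=False), j] = 1

y = F @ x                         # noiseless pooled measurements y = Fx

# COMP-style elimination: any item appearing in a zero-count pool
# cannot be defective; the survivors form the candidate set.
candidate = np.ones(N, dtype=bool)
for i in range(M):
    if y[i] == 0:
        candidate[F[i] == 1] = False
x_hat = candidate.astype(int)
```

On noiseless data COMP never produces false negatives, so every true carrier survives elimination; a few false positives may remain, which a BP-style decoder over a coupled design would further prune.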
1301.7533
1941102682
We propose a parallel algorithm for local, on the fly, model checking of a fragment of CTL that is well-suited for modern, multi-core architectures. This model-checking algorithm benefits from a parallel state space construction algorithm, which we described in a previous work, and shares the same basic set of principles: there are no assumptions on the models that can be analyzed; no restrictions on the way states are distributed; and no restrictions on the way work is shared among processors. We evaluate the performance of different versions of our algorithm and compare our results with those obtained using other parallel model checking tools. One of the most novel contributions of this work is to study a space-efficient variant for CTL model-checking that does not require storing the whole transition graph but operates, instead, on a reverse spanning tree.
In contrast with the number of solutions proposed for parallel LTL model checking, only two specifically target CTL model checking on shared-memory machines: Inggs and Barringer's work @cite_5 supports CTL @math , while van de Pol and Weber's work @cite_4 supports the @math -calculus.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "1977090137", "2150539434" ], "abstract": [ "In this article we present the parallelisation of an explicit-state CTL* model checking algorithm for a virtual shared-memory high-performance parallel machine architecture. The algorithm uses a combination of private and shared data structures for implicit and dynamic load balancing with minimal synchronisation overhead. The performance of the algorithm and the impact that different design decisions have on the performance are analysed using both mathematical cost models and experimental results. The analysis shows not only the practicality and effective speedup of the algorithm, but also the main pitfalls of parallelising model checking for shared-memory architectures.", "We describe a parallel algorithm for solving parity games, with applications in, e.g., modal μ-calculus model checking with arbitrary alternations, and (branching) bisimulation checking. The algorithm is based on Jurdziński's Small Progress Measures. Actually, this is a class of algorithms, depending on a selection heuristic. Our algorithm operates lock-free, and mostly wait-free (except for infrequent termination detection), and thus allows maximum parallelism. Additionally, we conserve memory by avoiding storage of predecessor edges for the parity graph through strictly forward-looking heuristics. We evaluate our multi-core implementation's behaviour on parity games obtained from μ-calculus model checking problems for a set of communication protocols, randomly generated problem instances, and parametric problem instances from the literature." ] }
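The cited tools parallelize fixpoint evaluation over the state graph; as a point of reference, here is a hedged sequential sketch of the simplest such fixpoint, the CTL operator EF, over an explicit successor map. This is a toy illustration, not the parallel algorithms of @cite_5 or @cite_4:

```python
def ef(succ, target):
    """States satisfying the CTL formula EF p, where `target` is the set of
    p-states: a backward-reachability least fixpoint on an explicit graph."""
    # Build the predecessor relation once, then propagate backwards.
    pred = {}
    for s, ts in succ.items():
        for t in ts:
            pred.setdefault(t, set()).add(s)
    sat, frontier = set(target), list(target)
    while frontier:
        t = frontier.pop()
        for s in pred.get(t, ()):
            if s not in sat:
                sat.add(s)
                frontier.append(s)
    return sat

# Tiny Kripke structure: state 3 is a sink that never reaches state 2.
succ = {0: {1}, 1: {2}, 2: {2}, 3: {3}}
```

A parallel version shares `sat` and the work queue among threads; the sequential sketch only fixes the semantics being computed.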
1301.7533
1941102682
We propose a parallel algorithm for local, on the fly, model checking of a fragment of CTL that is well-suited for modern, multi-core architectures. This model-checking algorithm benefits from a parallel state space construction algorithm, which we described in a previous work, and shares the same basic set of principles: there are no assumptions on the models that can be analyzed; no restrictions on the way states are distributed; and no restrictions on the way work is shared among processors. We evaluate the performance of different versions of our algorithm and compare our results with those obtained using other parallel model checking tools. One of the most novel contributions of this work is to study a space-efficient variant for CTL model-checking that does not require storing the whole transition graph but operates, instead, on a reverse spanning tree.
Comparison with DiVinE. We now compare our algorithms with DiVinE @cite_0 , which is the state of the art tool for parallel model checking of LTL. The results given here have been obtained with DiVinE 2.5.2, considering only the best results given by owcty or map, taken separately. This benchmark (experimental data and examples are available in report @cite_1 ) is based on the set of models borrowed from DiVinE on which, for a broader comparison, we check both valid and non-valid properties. Figure shows the exact set of models and formulas that are used. All experiments were carried out using 16 cores and with an initial hash table sized large enough to store all states. The DiVinE experiments were executed with flag () to remove counter-example generation.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2533662917", "2889067012" ], "abstract": [ "Model checking became a standard method of analyzing complex systems in many application domains. No doubt, a number of applications is placing great demands on model checking tools. The process of analysis of complex and real-life systems often requires vast computation resources, memory in particular. This phenomenon, referred to as the state space explosion problem, has been tackled by many researchers during the past two decades. A plethora of more or less successful techniques to fight the problem have been introduced, including parallel and distributed-memory processing. DiVinE is a tool for LTL model checking and reach ability analysis of discrete distributed systems. The tool is able to efficiently exploit the aggregate computing power of multiple network-interconnected multi-cored workstations in order to deal with extremely large verification tasks. As such it allows to analyze systems whose size is far beyond the size of systems that can be handled with regular sequential tools. While the main focus of the tool is on high-performance explicit state model checking, an emphasis is also put on ease of deployment and usage. Additionally, the component architecture and publicly available source code of Divine allow for its usage as a platform for research on parallel and distributed-memory model checking techniques.", "In this work, we present new algorithms for exhaustive parallel model checking that are as efficient as possible, but also ''friendly'' with respect to the work-sharing policies that are used for the state space generation (e.g. a work-stealing strategy): at no point do we impose a restriction on the way work is shared among the processors. This includes both the construction of the state space as the detection of cycles in parallel, which is is one of the key points of performance for the evaluation of more complex formulas." ] }
1301.7015
2952457055
Discovering frequent graph patterns in a graph database offers valuable information in a variety of applications. However, if the graph dataset contains sensitive data of individuals such as mobile phone-call graphs and web-click graphs, releasing discovered frequent patterns may present a threat to the privacy of individuals. Differential privacy has recently emerged as the de facto standard for private data analysis due to its provable privacy guarantee. In this paper we propose the first differentially private algorithm for mining frequent graph patterns. We first show that previous techniques on differentially private discovery of frequent itemsets cannot apply in mining frequent graph patterns due to the inherent complexity of handling structural information in graphs. We then address this challenge by proposing a Markov Chain Monte Carlo (MCMC) sampling based algorithm. Unlike previous work on frequent itemset mining, our techniques do not rely on the output of a non-private mining algorithm. Instead, we observe that both frequent graph pattern mining and the guarantee of differential privacy can be unified into an MCMC sampling framework. In addition, we establish the privacy and utility guarantee of our algorithm and propose an efficient neighboring pattern counting technique as well. Experimental results show that the proposed algorithm is able to output frequent patterns with good precision.
In a broad sense, our paper belongs to the general problem of privacy-preserving data mining - a topic that has been studied extensively for a decade because of its numerous applications to a wide variety of problems in the literature. A general overview of various research works on this topic can be found in @cite_32 . Below we briefly review the results relevant to this paper.
{ "cite_N": [ "@cite_32" ], "mid": [ "1548445892" ], "abstract": [ "Advances in hardware technology have increased the capability to store and record personal data about consumers and individuals, causing concerns that personal data may be used for a variety of intrusive or malicious purposes. Privacy-Preserving Data Mining: Models and Algorithms proposes a number of techniques to perform the data mining tasks in a privacy-preserving way. These techniques generally fall into the following categories: data modification techniques, cryptographic methods and protocols for data sharing, statistical techniques for disclosure and inference control, query auditing methods, randomization and perturbation-based techniques. This edited volume contains surveys by distinguished researchers in the privacy field. Each survey includes the key research content as well as future research directions. Privacy-Preserving Data Mining: Models and Algorithms is designed for researchers, professors, and advanced-level students in computer science, and is also suitable for industry practitioners." ] }
1301.7015
2952457055
Discovering frequent graph patterns in a graph database offers valuable information in a variety of applications. However, if the graph dataset contains sensitive data of individuals such as mobile phone-call graphs and web-click graphs, releasing discovered frequent patterns may present a threat to the privacy of individuals. Differential privacy has recently emerged as the de facto standard for private data analysis due to its provable privacy guarantee. In this paper we propose the first differentially private algorithm for mining frequent graph patterns. We first show that previous techniques on differentially private discovery of frequent itemsets cannot apply in mining frequent graph patterns due to the inherent complexity of handling structural information in graphs. We then address this challenge by proposing a Markov Chain Monte Carlo (MCMC) sampling based algorithm. Unlike previous work on frequent itemset mining, our techniques do not rely on the output of a non-private mining algorithm. Instead, we observe that both frequent graph pattern mining and the guarantee of differential privacy can be unified into an MCMC sampling framework. In addition, we establish the privacy and utility guarantee of our algorithm and propose an efficient neighboring pattern counting technique as well. Experimental results show that the proposed algorithm is able to output frequent patterns with good precision.
In contrast, we have a different problem setting from @cite_8 in this paper. First, like @cite_14 , our privacy-preserving algorithm is associated with a specific and more complicated data mining task. Second, we consider a graph database containing a collection of graphs related to individuals. The only work we can find on privacy protection for a graph database is @cite_25 , which follows the "publishing model". Their goal is to achieve @math -anonymity by first constructing a set of super-structures and then generating synthetic representations from them.
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_8" ], "mid": [ "2164778205", "2406877715", "1992461601" ], "abstract": [ "Discovering frequent patterns from data is a popular exploratory technique in datamining. However, if the data are sensitive (e.g., patient health records, user behavior records) releasing information about significant patterns or trends carries significant risk to privacy. This paper shows how one can accurately discover and release the most significant patterns along with their frequencies in a data set containing sensitive information, while providing rigorous guarantees of privacy for the individuals whose information is stored there. We present two efficient algorithms for discovering the k most frequent patterns in a data set of sensitive records. Our algorithms satisfy differential privacy, a recently introduced definition that provides meaningful privacy guarantees in the presence of arbitrary external information. Differentially private algorithms require a degree of uncertainty in their output to preserve privacy. Our algorithms handle this by returning 'noisy' lists of patterns that are close to the actual list of k most frequent patterns in the data. We define a new notion of utility that quantifies the output accuracy of private top-k pattern mining algorithms. In typical data sets, our utility criterion implies low false positive and false negative rates in the reported lists. We prove that our methods meet the new utility criterion; we also demonstrate the performance of our algorithms through extensive experiments on the transaction data sets from the FIMI repository. 
While the paper focuses on frequent pattern mining, the techniques developed here are relevant whenever the data mining output is a list of elements ordered according to an appropriately 'robust' measure of interest.", "The problem of privacy-preserving data mining has attracted considerable attention in recent years because of increasing concerns about the privacy of the underlying data. In recent years, an important data domain which has emerged is that of graphs and structured data. Many data sets such as XML data, transportation networks, traffic in IP networks, social networks and hierarchically structured data are naturally represented as graphs. Existing work on graph privacy has focussed on the problem of anonymizing nodes or edges of a single graph, in which the identity is assumed to be associated with individual nodes. In this paper, we examine the more complex case, where we have a collection of graphs, and the identity is associated with individual graphs rather than nodes or edges. In such cases, the problem of identity anonymization is extremely difficult, since we need to not only anonymize the labels on the nodes, but also the underlying global structural information. In such cases, both the global and local structural information can be a challenge to the anonymization process, since any combination of such information can be used in order to de-identify the underlying graphs. In order to achieve this goal, we will create synthesized representations of the underlying graphs based on aggregate structural analytics of the collection of graphs. The synthesized graphs retain the properties of the original data while satisfying the k-anonymity requirement. Our experimental results show that the synthesized graphs maintain a high level of structural information and compatible classification accuracies with the original data.", "We present efficient algorithms for releasing useful statistics about graph data while providing rigorous privacy guarantees. 
Our algorithms work on datasets that consist of relationships between individuals, such as social ties or email communication. The algorithms satisfy edge differential privacy, which essentially requires that the presence or absence of any particular relationship be hidden. Our algorithms output approximate answers to subgraph counting queries. Given a query graph H, for example, a triangle, k-star, or k-triangle, the goal is to return the number of edge-induced isomorphic copies of H in the input graph. The special case of triangles was considered by [2007] and a more general investigation of arbitrary query graphs was initiated by [2009]. We extend the approach of to a new class of statistics, namely k-star queries. We also give algorithms for k-triangle queries using a different approach based on the higher-order local sensitivity. For the specific graph statistics we consider (i.e., k-stars and k-triangles), we significantly improve on the work of : our algorithms satisfy a stronger notion of privacy that does not rely on the adversary having a particular prior distribution on the data, and add less noise to the answers before releasing them. We evaluate the accuracy of our algorithms both theoretically and empirically, using a variety of real and synthetic datasets. We give explicit, simple conditions under which these algorithms add a small amount of noise. We also provide the average-case analysis in the Erdős–Rényi–Gilbert G(n,p) random graph model. Finally, we give hardness results indicating that the approach used for triangles cannot easily be extended to k-triangles (hence justifying our development of a new algorithmic approach)." ] }
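The subgraph-counting abstracts above are about releasing noisy counts under differential privacy. Here is a minimal hedged sketch of the basic Laplace mechanism calibrated to a given global sensitivity; the cited papers use more refined notions (smooth and higher-order local sensitivity), and the function name and parameters are illustrative:

```python
import math
import random

def laplace_mechanism(true_count, sensitivity, eps, rng=random.Random(7)):
    """Release a numeric query answer with eps-differential privacy by adding
    Laplace noise of scale sensitivity/eps (inverse-CDF sampling)."""
    u = rng.random() - 0.5
    scale = sensitivity / eps
    return true_count - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
```

For example, `laplace_mechanism(1234, sensitivity=3.0, eps=0.5)` would release a hypothetical count whose worst-case change over neighboring databases is 3; for subgraph counts under edge differential privacy that sensitivity can be very large, which is precisely why the cited works develop tighter sensitivity bounds.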
1301.7015
2952457055
Discovering frequent graph patterns in a graph database offers valuable information in a variety of applications. However, if the graph dataset contains sensitive data of individuals such as mobile phone-call graphs and web-click graphs, releasing discovered frequent patterns may present a threat to the privacy of individuals. Differential privacy has recently emerged as the de facto standard for private data analysis due to its provable privacy guarantee. In this paper we propose the first differentially private algorithm for mining frequent graph patterns. We first show that previous techniques on differentially private discovery of frequent itemsets cannot apply in mining frequent graph patterns due to the inherent complexity of handling structural information in graphs. We then address this challenge by proposing a Markov Chain Monte Carlo (MCMC) sampling based algorithm. Unlike previous work on frequent itemset mining, our techniques do not rely on the output of a non-private mining algorithm. Instead, we observe that both frequent graph pattern mining and the guarantee of differential privacy can be unified into an MCMC sampling framework. In addition, we establish the privacy and utility guarantee of our algorithm and propose an efficient neighboring pattern counting technique as well. Experimental results show that the proposed algorithm is able to output frequent patterns with good precision.
Graph Pattern Mining. Finally, we briefly discuss relevant works on traditional non-private graph pattern mining. A more comprehensive survey can be found in @cite_36 . Earlier works, which aim at finding all the frequent patterns in a graph database, usually explore the search space in a certain manner. Representative approaches include Apriori-based (e.g. @cite_30 ) and pattern-growth-based (e.g. gSpan @cite_13 ) methods. An issue with this direction is that the search space grows exponentially with the pattern size, which may become a computational bottleneck. Thus later works aim at mining significant or representative patterns with scalability. One way of achieving this is through random walks @cite_28 , which also motivates our use of MCMC sampling for privacy-preserving purposes. Another remotely related work is @cite_23 , which connects probabilistic inference and differential privacy. It differs from this work by focusing on inference over the output of a differentially private algorithm.
{ "cite_N": [ "@cite_30", "@cite_36", "@cite_28", "@cite_23", "@cite_13" ], "mid": [ "1641749581", "", "2143211346", "2106882672", "2170726034" ], "abstract": [ "This paper proposes a novel approach named AGM to efficiently mine the association rules among the frequently appearing substructures in a given graph data set. A graph transaction is represented by an adjacency matrix, and the frequent patterns appearing in the matrices are mined through the extended algorithm of the basket analysis. Its performance has been evaluated for the artificial simulation data and the carcinogenesis data of Oxford University and NTP. Its high efficiency has been confirmed for the size of a real-world problem.", "", "Recent interest in graph pattern mining has shifted from finding all frequent subgraphs to obtaining a small subset of frequent subgraphs that are representative, discriminative or significant. The main motivation behind that is to cope with the scalability problem that the graph mining algorithms suffer when mining databases of large graphs. Another motivation is to obtain a succinct output set that is informative and useful. In the same spirit, researchers also proposed sampling based algorithms that sample the output space of the frequent patterns to obtain representative subgraphs. In this work, we propose a generic sampling framework that is based on Metropolis-Hastings algorithm to sample the output space of frequent subgraphs. Our experiments on various sampling strategies show the versatility, utility and efficiency of the proposed sampling approach.", "We identify and investigate a strong connection between probabilistic inference and differential privacy, the latter being a recent privacy definition that permits only indirect observation of data through noisy measurement. Previous research on differential privacy has focused on designing measurement processes whose output is likely to be useful on its own. 
We consider the potential of applying probabilistic inference to the measurements and measurement process to derive posterior distributions over the data sets and model parameters thereof. We find that probabilistic inference can improve accuracy, integrate multiple observations, measure uncertainty, and even provide posterior distributions over quantities that were not directly measured.", "We investigate new approaches for frequent graph-based pattern mining in graph datasets and propose a novel algorithm called gSpan (graph-based substructure pattern mining), which discovers frequent substructures without candidate generation. gSpan builds a new lexicographic order among graphs, and maps each graph to a unique minimum DFS code as its canonical label. Based on this lexicographic order gSpan adopts the depth-first search strategy to mine frequent connected subgraphs efficiently. Our performance study shows that gSpan substantially outperforms previous algorithms, sometimes by an order of magnitude." ] }
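The records above motivate MCMC sampling over a pattern space with a differential-privacy target. The following is a hedged, minimal sketch of a Metropolis–Hastings chain whose stationary distribution is the exponential-mechanism distribution over a small finite candidate set; the function name, the uniform proposal, and all parameters are illustrative assumptions, not the DUG algorithm or its subgraph-lattice proposal:

```python
import math
import random

def mh_exponential(utilities, eps, sensitivity, steps=2000, rng=random.Random(1)):
    """Metropolis-Hastings chain targeting the exponential-mechanism
    distribution P(i) proportional to exp(eps * u_i / (2 * sensitivity))
    over a finite candidate set, with a symmetric uniform proposal."""
    n = len(utilities)
    score = lambda i: eps * utilities[i] / (2.0 * sensitivity)
    state = rng.randrange(n)
    for _ in range(steps):
        prop = rng.randrange(n)          # symmetric proposal
        # Accept with probability min(1, exp(score(prop) - score(state))).
        if math.log(max(rng.random(), 1e-300)) < score(prop) - score(state):
            state = prop
    return state
```

With a symmetric proposal the acceptance ratio reduces to the target ratio, so the chain needs only utility differences; this is what lets an MCMC sampler explore an exponentially large pattern space without enumerating it.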
1301.6667
1797010409
Let S be a set of 2n points on a circle such that for each point p ∈ S also its antipodal (mirrored with respect to the circle center) point p′ belongs to S. A polygon P of size n is called antipodal if it consists of precisely one point of each antipodal pair (p, p′) of S. We provide a complete characterization of antipodal polygons which maximize (minimize, respectively) the area among all antipodal polygons of S. Based on this characterization, a simple linear-time algorithm is presented for computing extremal antipodal polygons. Moreover, for the generalization of antipodal polygons to higher dimensions we show that a similar characterization does not exist.
Extremal problems: Plane geometry is rich in extremal problems, often dating back to the ancient Greeks. Over the centuries many of these problems have been solved by geometrical reasoning. Specifically, extremal problems on convex polygons have attracted the attention of both fields, geometry and optimization. In computational geometry, efficient algorithms have been proposed for computing extremal polygons w.r.t. several different properties @cite_3 . In operations research, global optimization techniques have been extensively studied to find convex polygons maximizing a given parameter @cite_12 . A geometric extremal problem similar to the one studied in this paper was solved by Fejes Tóth @cite_10 almost fifty years ago. He showed that the sum of pairwise distances determined by @math points contained in a circle is maximized when the points are the vertices of a regular @math -gon inscribed in the circle. Recently, the discrete version of this problem has been revisited in @cite_7 and problems considering maximal area instead of the sum of inter-point distances have been solved in @cite_2 .
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_2", "@cite_10", "@cite_12" ], "mid": [ "2116417064", "2052562833", "2006554643", "2108509297", "2002285779" ], "abstract": [ "Many problems concerning the theory and technology of rhythm, melody, and voice-leading are fundamentally geometric in nature. It is therefore not surprising that the field of computational geometry can contribute greatly to these problems. The interaction between computational geometry and music yields new insights into the theories of rhythm, melody, and voice-leading, as well as new problems for research in several areas, ranging from mathematics and computer science to music theory, music perception, and musicology. Recent results on the geometric and computational aspects of rhythm, melody, and voice-leading are reviewed, connections to established areas of computer science, mathematics, statistics, computational biology, and crystallography are pointed out, and new open problems are proposed.", "Given n points in the plane, we present algorithms for finding maximum perimeter or area convex k-gons with vertices at k of the given n points. Our algorithms work in linear space and time @math . For the special case @math we give @math algorithms for these problems.", "A musical scale can be viewed as a subset of notes or pitches taken from a chromatic universe. For the purposes of this paper we consider a chromatic universe of twelve equally spaced pitches. Given integers (N,K) with N > K we use particular integer partitions of N into K parts to construct distinguished sets, or scales. We show that a natural geometric realization of these sets has maximal area, so we call them maximal area sets. We then discuss properties of maximal area sets for the integer pairs (12,5), (12,6), (12,7) and (12,8), with the obvious relevance to scales in our normal chromatic collection of 12 pitches. 
Complementary maximal area sets are those sets where the chosen K notes realize maximal area, and the complementary N − K notes also realize maximal area. The complementary maximal area sets closely match a significant collection of scales identified in a book on jazz theory by Mark Levine (The jazz theory book. Sher Music Co., Petaluma, 1995).", "where d denotes the diameter of the circumcircle of the points. The constant cannot be replaced by a greater one. In order to prove our theorem we denote the points by P_1, ..., P_n. Fixing the points P_2, ..., P_n, we consider S_n as a function of the point P_1. Since the distances P_1P_2, ..., P_1P_n are convex functions of P_1, the same can be stated of S_n. It follows that S_n takes its maximum for a point P_1 lying on the boundary C of the circumcircle of the points. Therefore all points may be supposed to lie on C. Furthermore we suppose that C is a unit circle and that the cyclical order of the points is P_1, ..., P_n. Introducing the notation P_{n+1} = P_1, we consider the sum", "Consider a convex polygon V_n with n sides, perimeter P_n, diameter D_n, area A_n, sum of distances between vertices S_n and width W_n. Minimizing or maximizing any of these quantities while fixing another defines 10 pairs of extremal polygon problems (one of which usually has a trivial solution or no solution at all). We survey research on these problems, which uses geometrical reasoning increasingly complemented by global optimization methods. Numerous open problems are mentioned, as well as series of test problems for global optimization and non-linear programming codes." ] }
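The characterization of extremal antipodal polygons can be sanity-checked by brute force for small n. The sketch below enumerates all 2^n antipodal polygons of n point pairs on the unit circle, taking vertices in circular order, and collects their areas; the base angles are arbitrary choices of mine, not from the paper:

```python
import math
from itertools import product

def shoelace(poly):
    """Area of a polygon given as a list of (x, y) vertices (shoelace formula)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# n antipodal pairs on the unit circle: each base angle t pairs with t + pi.
base = [0.1, 0.7, 1.3, 2.0, 2.9]     # arbitrary angles in [0, pi), n = 5
areas = []
for choice in product([0, 1], repeat=len(base)):
    angles = sorted(t + s * math.pi for t, s in zip(base, choice))
    poly = [(math.cos(a), math.sin(a)) for a in angles]
    areas.append(shoelace(poly))      # vertices taken in circular order
```

Since the chosen points always lie in convex position on the circle, each area is positive and bounded by the circle's area, and max(areas)/min(areas) exhibit the extremal polygons the paper characterizes.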
1301.6667
1797010409
Let S be a set of 2n points on a circle such that for each point p ∈ S also its antipodal (mirrored with respect to the circle center) point p′ belongs to S. A polygon P of size n is called antipodal if it consists of precisely one point of each antipodal pair (p, p′) of S. We provide a complete characterization of antipodal polygons which maximize (minimize, respectively) the area among all antipodal polygons of S. Based on this characterization, a simple linear-time algorithm is presented for computing extremal antipodal polygons. Moreover, for the generalization of antipodal polygons to higher dimensions we show that a similar characterization does not exist.
In our case, an antipodal polygon is related to a concept in music theory. Typically, the notes of a scale are represented by a polygon in a clock diagram. In a chromatic scale, each whole tone can be further divided into two semitones. Thus, we can think of a clock diagram with twelve points representing the twelve equally spaced pitches of the chromatic universe (using an equal tempered tuning). The pitch class diagram is illustrated in Figure . A tritone is traditionally defined as a musical interval composed of three whole tones. Thus, it is any interval spanning six semitones. In Figure a), the polygon represents a scale containing the tritones @math . The tritone is defined as a restless interval or dissonance in Western music from the early Middle Ages. This interval was frequently avoided in medieval ecclesiastical singing because of its dissonant quality. The name diabolus in musica (the Devil in music) has been applied to the interval from at least the early 18th century @cite_6 .
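On the clock-diagram encoding (pitch classes 0-11), a tritone is simply a pair of pitch classes six semitones apart. A minimal check, with the C major scale chosen purely as an example:

```python
def tritones(scale):
    # Pairs of pitch classes exactly six semitones apart on the
    # twelve-point clock diagram (an interval of three whole tones).
    return [(a, b) for i, a in enumerate(scale)
            for b in scale[i + 1:]
            if (b - a) % 12 == 6 or (a - b) % 12 == 6]

c_major = [0, 2, 4, 5, 7, 9, 11]       # C D E F G A B
assert tritones(c_major) == [(5, 11)]  # F-B, the lone tritone in C major
```

On the clock diagram a tritone is a diameter, which is exactly why it corresponds to an antipodal pair of points.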
{ "cite_N": [ "@cite_6" ], "mid": [ "2006090347" ], "abstract": [ "This work contains almost 30,000 articles containing over 25 million words on musicians, composers, musicologists, instruments, places, genres, terms, performance practice, concepts, acoustics and more. All the articles are written by experts in their subject. There are over 500 biographies of composers, performers and writers on music and over 1,500 articles on styles, terms, and genres. It also includes: over 500 articles on ancient music and church music; over 700 articles on regions, countries and cities; over 2,000 articles on instruments and their makers and performance practice; over 650 articles on printing and publishing; over 1,200 articles on world music; over 1,000 articles on popular music, light music and jazz; over 250 articles on concepts; 85 articles on acoustics; 126 articles on sources; and a one-volume index." ] }
1301.6626
2228771891
Mining discriminative features for graph data has attracted much attention in recent years due to its important role in constructing graph classifiers, generating graph indices, etc. Most measurements of interestingness of discriminative subgraph features are defined on certain graphs, where the structure of the graph objects is certain, and the binary edges within each graph represent the "presence" of linkages among the nodes. In many real-world applications, however, the linkage structure of the graphs is inherently uncertain. Therefore, existing measurements of interestingness based upon certain graphs are unable to capture the structural uncertainty in these applications effectively. In this paper, we study the problem of discriminative subgraph feature selection from uncertain graphs. This problem is challenging and different from conventional subgraph mining problems because both the structure of the graph objects and the discrimination score of each subgraph feature are uncertain. To address these challenges, we propose a novel discriminative subgraph feature selection method, DUG, which can find discriminative subgraph features in uncertain graphs based upon different statistical measures including expectation, median, mode and phi-probability. We first compute the probability distribution of the discrimination scores for each subgraph feature based on dynamic programming. Then a branch-and-bound algorithm is proposed to search for discriminative subgraphs efficiently. Extensive experiments on various neuroimaging applications (i.e., Alzheimer's Disease, ADHD and HIV) have been performed to analyze the gain in performance by taking into account structural uncertainties in identifying discriminative subgraph features for graph classification.
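A core step described above, computing a probability distribution over a subgraph feature's behavior across uncertain graphs by dynamic programming, can be sketched under a simplifying independence assumption. The toy graphs and the reduction of "discrimination" to a simple presence count are illustrative; they are not DUG's actual scoring functions:

```python
def presence_prob(edge_probs, feature_edges):
    # Assuming independent edges, the probability that every edge of the
    # feature is present in one uncertain graph is the product of the
    # edge probabilities.
    prob = 1.0
    for e in feature_edges:
        prob *= edge_probs.get(e, 0.0)
    return prob

def count_distribution(probs):
    # Dynamic program (Poisson-binomial convolution): dist[k] is the
    # probability that the feature appears in exactly k of the graphs.
    dist = [1.0]
    for p in probs:
        nxt = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            nxt[k] += mass * (1 - p)
            nxt[k + 1] += mass * p
        dist = nxt
    return dist

graphs = [                       # toy uncertain graphs: edge -> probability
    {("a", "b"): 0.9, ("b", "c"): 0.8},
    {("a", "b"): 0.5, ("b", "c"): 0.4},
]
feature = [("a", "b"), ("b", "c")]
probs = [presence_prob(g, feature) for g in graphs]
dist = count_distribution(probs)
assert abs(sum(dist) - 1.0) < 1e-9
```

The same convolution pattern is what makes statistics such as the expectation or median of a score distribution cheap to evaluate per candidate subgraph.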
Our work is also motivated by the recent advances in analyzing neuroimaging data using data mining and machine learning approaches @cite_16 @cite_17 @cite_15 @cite_6 . Huang et al. @cite_16 developed a sparse inverse covariance estimation method for analyzing brain connectivity in PET images of patients with Alzheimer's disease.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_6", "@cite_17" ], "mid": [ "2085779166", "2138382427", "2031250362", "" ], "abstract": [ "Recent studies have shown that Alzheimer's disease (AD) is related to alteration in brain connectivity networks. One type of connectivity, called effective connectivity, defined as the directional relationship between brain regions, is essential to brain function. However, there have been few studies on modeling the effective connectivity of AD and characterizing its difference from normal controls (NC). In this paper, we investigate the sparse Bayesian Network (BN) for effective connectivity modeling. Specifically, we propose a novel formulation for the structure learning of BNs, which involves one L1-norm penalty term to impose sparsity and another penalty to ensure the learned BN to be a directed acyclic graph - a required property of BNs. We show, through both theoretical analysis and extensive experiments on eleven moderate and large benchmark networks with various sample sizes, that the proposed method has much improved learning accuracy and scalability compared with ten competing algorithms. We apply the proposed method to FDG-PET images of 42 AD and 67 NC subjects, and identify the effective connectivity models for AD and NC, respectively. Our study reveals that the effective connectivity of AD is different from that of NC in many ways, including the global-scale effective connectivity, intra-lobe, inter-lobe, and inter-hemispheric effective connectivity distributions, as well as the effective connectivity associated with specific brain regions. These findings are consistent with known pathology and clinical progression of AD, and will contribute to AD knowledge discovery.", "Recent advances in neuroimaging techniques provide great potentials for effective diagnosis of Alzheimer's disease (AD), the most common form of dementia. 
Previous studies have shown that AD is closely related to the alternation in the functional brain network, i.e., the functional connectivity among different brain regions. In this paper, we consider the problem of learning functional brain connectivity from neuroimaging, which holds great promise for identifying image-based markers used to distinguish Normal Controls (NC), patients with Mild Cognitive Impairment (MCI), and patients with AD. More specifically, we study sparse inverse covariance estimation (SICE), also known as exploratory Gaussian graphical models, for brain connectivity modeling. In particular, we apply SICE to learn and analyze functional brain connectivity patterns from different subject groups, based on a key property of SICE, called the \"monotone property\" we established in this paper. Our experimental results on neuroimaging PET data of 42 AD, 116 MCI, and 67 NC subjects reveal several interesting connectivity patterns consistent with literature findings, and also some new patterns that can help the knowledge discovery of AD.", "Alzheimer's Disease (AD), the most common type of dementia, is a severe neurodegenerative disorder. Identifying markers that can track the progress of the disease has recently received increasing attentions in AD research. A definitive diagnosis of AD requires autopsy confirmation, thus many clinical cognitive measures including Mini Mental State Examination (MMSE) and Alzheimer's Disease Assessment Scale cognitive subscale (ADAS-Cog) have been designed to evaluate the cognitive status of the patients and used as important criteria for clinical diagnosis of probable AD. In this paper, we propose a multi-task learning formulation for predicting the disease progression measured by the cognitive scores and selecting markers predictive of the progression. Specifically, we formulate the prediction problem as a multi-task regression problem by considering the prediction at each time point as a task. 
We capture the intrinsic relatedness among different tasks by a temporal group Lasso regularizer. The regularizer consists of two components including an L2,1-norm penalty on the regression weight vectors, which ensures that a small subset of features will be selected for the regression models at all time points, and a temporal smoothness term which ensures a small deviation between two regression models at successive time points. We have performed extensive evaluations using various types of data at the baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database for predicting the future MMSE and ADAS-Cog scores. Our experimental studies demonstrate the effectiveness of the proposed algorithm for capturing the progression trend and the cross-sectional group differences of AD severity. Results also show that most markers selected by the proposed algorithm are consistent with findings from existing cross-sectional studies.", "" ] }
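The L2,1-norm penalty in the abstract above couples one feature's regression weights across all time points, so a feature is selected or dropped jointly. Its proximal operator is row-wise soft thresholding, sketched here in plain Python on a toy weight matrix (rows = features, columns = time points):

```python
import math

def l21_norm(W):
    # Sum of Euclidean norms of the rows of W: the L2,1 penalty.
    return sum(math.sqrt(sum(v * v for v in row)) for row in W)

def l21_prox(W, tau):
    # Proximal operator of tau * L2,1: shrink each row by tau in norm,
    # zeroing a feature for every time point at once (joint selection).
    out = []
    for row in W:
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out

W = [[3.0, 4.0],   # row norm 5.0 -> shrunk but kept
     [0.3, 0.4]]   # row norm 0.5 -> zeroed entirely
P = l21_prox(W, 1.0)
assert P[1] == [0.0, 0.0]
assert abs(math.sqrt(P[0][0] ** 2 + P[0][1] ** 2) - 4.0) < 1e-9
```

This operator is the building block a proximal-gradient solver would apply each iteration; the temporal smoothness term in the abstract would add a separate quadratic penalty between adjacent columns.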
1301.6927
1728846598
We classify the solutions to an overdetermined elliptic problem in the plane in the finite connectivity case. This is achieved by establishing a one-to-one correspondence between the solutions to this problem and a certain type of minimal surfaces.
Laurent Hauswirth pointed out to me that Problem has been studied by D. Khavinson, E. Lundberg and R. Teodorescu in a recent paper @cite_5 . They obtain classification results in the @math -dimensional case under stronger topological hypotheses than ours. They prove that if an exceptional domain @math is the complement of a bounded domain, then it is the complement of a disk; if it is simply-connected and Smirnov, then it is a half-plane or the domain @math , up to similitude. In the simply-connected case, their results are stronger than ours because they do not assume that the boundary of @math has a finite number of components. They also prove that in higher dimension, an exceptional domain in @math whose complement is bounded, connected and has @math boundary, is the exterior of a sphere.
{ "cite_N": [ "@cite_5" ], "mid": [ "2116325468" ], "abstract": [ "We investigate a problem posed by L. Hauswirth, F. H 'elein, and F. Pacard, namely, to characterize all the domains in the plane that admit a \"roof function\", i.e., a positive harmonic function which solves simultaneously a Dirichlet problem with null boundary data, and a Neumann problem with constant boundary data. Under some a priori assumptions, we show that the only three examples are the exterior of a disk, a halfplane, and a nontrivial example. We show that in four dimensions the nontrivial simply connected example does not have any axially symmetric analog containing its own axis of symmetry." ] }
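The "roof function" problem referenced above can be written out explicitly. The two closed-form planar examples below (half-plane and exterior of the unit disk) are the model solutions appearing in the classification discussed in the related-work paragraph:

```latex
% Overdetermined problem defining an exceptional domain \Omega
% with roof function u:
\begin{aligned}
  \Delta u &= 0, \quad u > 0 && \text{in } \Omega,\\
  u &= 0 && \text{on } \partial\Omega,\\
  \frac{\partial u}{\partial \nu} &= \mathrm{const} && \text{on } \partial\Omega.
\end{aligned}
% Model solutions: the half-plane \{ y > 0 \} with u(x,y) = y, and the
% exterior of the unit disk with u = \log r, for which
% |\nabla u| = 1/r = 1 is constant on the boundary circle r = 1.
```

Both candidates are harmonic and positive in their domain, vanish on the boundary, and have constant normal derivative there, which is exactly the overdetermination.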
1301.6299
1738526133
The overwhelming majority of survivable (fault-tolerant) network design models assume a uniform scenario set. Such a scenario set assumes that every subset of the network resources (edges or vertices) of a given cardinality k comprises a scenario. While this approach yields problems with clean combinatorial structure and good algorithms, it often fails to capture the true nature of the scenario set coming from applications. One natural refinement of the uniform model is obtained by partitioning the set of resources into faulty and secure resources. The scenario set contains every subset of at most k faulty resources. This work studies the Fault-Tolerant Path (FTP) problem, the counterpart of the Shortest Path problem in this failure model. We present complexity results alongside exact and approximation algorithms for FTP. We emphasize the vast increase in the complexity of the problem with respect to its uniform analogue, the Edge-Disjoint Paths problem. 1 Introduction The Minimum-Cost Edge-Disjoint Path (EDP) problem is a classical network design problem, defined as follows. Given an edge-weighted graph G = (V,E), two terminals s,t ∈ V and an integer parameter k ∈ Z
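The uniform analogue named above, Minimum-Cost Edge-Disjoint Paths, reduces to a unit-capacity min-cost flow of value k. A minimal sketch via successive shortest paths on a tiny directed toy graph (the FTP algorithms of the paper are not reproduced here; this only illustrates the easy uniform case):

```python
import math

def min_cost_k_disjoint_paths(n, edges, s, t, k):
    # Unit-capacity min-cost flow of value k; Bellman-Ford is used for
    # the shortest-path step because residual arcs have negative costs.
    INF = math.inf
    adj = [[] for _ in range(n)]   # entries: [to, capacity, cost, rev index]
    for u, v, c in edges:
        adj[u].append([v, 1, c, len(adj[v])])
        adj[v].append([u, 0, -c, len(adj[u]) - 1])
    total = 0.0
    for _ in range(k):
        dist = [INF] * n
        dist[s] = 0.0
        parent = [None] * n        # (node, arc index) on the shortest path
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(adj[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
        if dist[t] == INF:
            return None            # fewer than k disjoint paths exist
        total += dist[t]
        v = t
        while v != s:              # push one unit of flow along the path
            u, i = parent[v]
            adj[u][i][1] -= 1
            adj[v][adj[u][i][3]][1] += 1
            v = u
    return total

# Toy graph: two disjoint s-t paths of costs 2 and 3.
edges = [(0, 1, 1), (1, 3, 1), (0, 2, 1), (2, 3, 2)]
assert min_cost_k_disjoint_paths(4, edges, 0, 3, 2) == 5
```

The contrast drawn in the abstract is that FTP, the non-uniform refinement, loses exactly this clean flow structure.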
The robustness model proposed in this paper is natural for various classical combinatorial optimization problems. Of particular interest is the counterpart of the Minimum Spanning Tree problem. This problem is closely related to the Minimum @math -Edge Connected Spanning Subgraph (ECSS) problem, a well-understood robust connection problem. Gabow, Goemans, Tardos and Williamson @cite_7 developed a polynomial time @math -approximation algorithm for ECSS, for some fixed constant @math . The authors also show that for some constant @math , the existence of a polynomial time @math -approximation algorithm implies P @math NP. An intriguing property of ECSS is that the problem becomes easier to approximate when @math grows. Concretely, while for every fixed @math , ECSS is NP-hard to approximate within some factor @math , the latter result asserts that there is function @math tending to one as @math tends to infinity such that ECSS is approximable within a factor @math . This phenomenon was already discovered by Cheriyan and Thurimella @cite_4 , who gave algorithms with a weaker approximation guarantee. The more general Generalized Steiner Network problem admits a polynomial @math -approximation algorithm due to Jain @cite_12 .
{ "cite_N": [ "@cite_4", "@cite_12", "@cite_7" ], "mid": [ "2134875769", "2153014861", "1992614365" ], "abstract": [ "An efficient heuristic is presented for the problem of finding a minimum-size k-connected spanning subgraph of an (undirected or directed) simple graph G=(V,E). There are four versions of the problem, and the approximation guarantees are as follows: minimum-size k-node connected spanning subgraph of an undirected graph 1 + [1 k], minimum-size k-node connected spanning subgraph of a directed graph 1 + [1 k], minimum-size k-edge connected spanning subgraph of an undirected graph 1+[2 (k+1)], minimum-size k-edge connected spanning subgraph of a directed graph 1 + [4 k ]. The heuristic is based on a subroutine for the degree-constrained subgraph (b-matching) problem. It is simple and deterministic and runs in time O(k|E|2). The following result on simple undirected graphs is used in the analysis: The number of edges required for augmenting a graph of minimum degree k to be k-edge connected is at most k,|V| (k+1). For undirected graphs and k=2, a (deterministic) parallel NC version of the heuristic finds a 2-node connected (or 2-edge connected) spanning subgraph whose size is within a factor of ( @math ) of minimum, where @math is a constant.", "We present a factor 2 approximation algorithm for finding a minimum-cost subgraph having at least a specified number of edges in each cut. This class of problems includes, among others, the generalized Steiner network problem, which is also known as the survivable network design problem. Our algorithm first solves the linear relaxation of this problem, and then iteratively rounds off the solution. The key idea in rounding off is that in a basic solution of the LP relaxation, at least one edge gets included at least to the extent of half. 
We include this edge into our integral solution and solve the residual problem.", "The smallest k-ECSS problem is, given a graph along with an integer k, to find a spanning subgraph that is k-edge connected and contains the fewest possible number of edges. We examine a natural approximation algorithm based on rounding an LP solution. A tight bound on the approximation ratio is 1 + 3/k for undirected graphs with k > 1 odd, 1 + 2/k for undirected graphs with k even, and 1 + 2/k for directed graphs with k arbitrary. Using iterated rounding improves the first upper bound to 1 + 2/k. On the hardness side we show that for some absolute constant c > 0, for any integer k ≥ 2 (k ≥ 1), a polynomial-time algorithm approximating the smallest k-ECSS on undirected (directed) multigraphs to within ratio 1 + c/k would imply P = NP. © 2008 Wiley Periodicals, Inc. NETWORKS, 2009" ] }
1301.6299
1738526133
The overwhelming majority of survivable (fault-tolerant) network design models assume a uniform scenario set. Such a scenario set assumes that every subset of the network resources (edges or vertices) of a given cardinality k comprises a scenario. While this approach yields problems with clean combinatorial structure and good algorithms, it often fails to capture the true nature of the scenario set coming from applications. One natural refinement of the uniform model is obtained by partitioning the set of resources into faulty and secure resources. The scenario set contains every subset of at most k faulty resources. This work studies the Fault-Tolerant Path (FTP) problem, the counterpart of the Shortest Path problem in this failure model. We present complexity results alongside exact and approximation algorithms for FTP. We emphasize the vast increase in the complexity of the problem with respect to its uniform analogue, the Edge-Disjoint Paths problem. 1 Introduction The Minimum-Cost Edge-Disjoint Path (EDP) problem is a classical network design problem, defined as follows. Given an edge-weighted graph G = (V,E), two terminals s,t ∈ V and an integer parameter k ∈ Z
Kouvelis and Yu @cite_19 provide an extensive study of complexity results and algorithms for robust counterparts of many classical discrete optimization problems. Instead of resource removal, these models assume uncertainty in the final cost of the resources. Bertsimas and Sim @cite_0 proposed a framework for incorporating robustness in discrete optimization. They show that robust counterparts of mixed integer linear programs with uncertainty in the matrix coefficients can be solved using moderately larger mixed integer linear programs with no uncertainty.
{ "cite_N": [ "@cite_0", "@cite_19" ], "mid": [ "2165775468", "1963888335" ], "abstract": [ "We propose an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0−1 discrete optimization problem on n variables, then we solve the robust counterpart by solving at most n+1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0−1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. We also show that the robust counterpart of an NP-hard α-approximable 0−1 discrete optimization problem, remains α-approximable. Finally, we propose an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.", "Preface. 1. Approaches to Handle Uncertainty In Decision Making. 2. A Robust Discrete Optimization Framework. 3. Computational Complexity Results of Robust Discrete Optimization Problems. 4. Easily Solvable Cases of Robust Discrete Optimization Problems. 5. Algorithmic Developments for Difficult Robust Discrete Optimization Problems. 6. Robust 1-Median Location Problems: Dynamic Aspects and Uncertainty. 7. Robust Scheduling Problems. 8. Robust Uncapacitated Network Design and International Sourcing Problems. 9. 
Robust Discrete Optimization: Past Successes and Future Challenges." ] }
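The Bertsimas-Sim reduction described in the abstract above, solving the cost-robust 0-1 problem via at most n+1 nominal problems with shifted costs, can be sketched for shortest path. The graph, nominal costs c, and deviations d below are assumed toy data:

```python
import heapq

def dijkstra(n, arcs, cost, s, t):
    # Plain Dijkstra; arcs is a list of (u, v) with a parallel cost list.
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(arcs):
        adj[u].append((v, idx))
    dist = [float("inf")] * n
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        dcur, u = heapq.heappop(heap)
        if dcur > dist[u]:
            continue
        for v, idx in adj[u]:
            nd = dcur + cost[idx]
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist[t]

def robust_shortest_path(n, arcs, c, d, gamma, s, t):
    # Adversary may raise at most gamma arcs from c_e to c_e + d_e.
    # Bertsimas-Sim: the robust optimum is the best over |arcs| + 1
    # nominal problems with costs c_e + max(d_e - theta, 0), plus
    # the fixed charge gamma * theta.
    best = float("inf")
    for theta in {0.0} | set(d):
        shifted = [ci + max(di - theta, 0.0) for ci, di in zip(c, d)]
        best = min(best, gamma * theta + dijkstra(n, arcs, shifted, s, t))
    return best

arcs = [(0, 1), (1, 2), (0, 2)]
c = [1.0, 1.0, 3.0]    # nominal costs
d = [5.0, 5.0, 0.5]    # worst-case increases
# With gamma = 1 the two-hop path risks a +5 surcharge, so the direct
# arc (cost 3, surcharge 0.5) is the robust choice: value 3.5.
assert robust_shortest_path(3, arcs, c, d, 1, 0, 2) == 3.5
```

This is exactly the sense in which the robust counterpart of a polynomially solvable 0-1 problem stays polynomially solvable.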
1301.6299
1738526133
The overwhelming majority of survivable (fault-tolerant) network design models assume a uniform scenario set. Such a scenario set assumes that every subset of the network resources (edges or vertices) of a given cardinality k comprises a scenario. While this approach yields problems with clean combinatorial structure and good algorithms, it often fails to capture the true nature of the scenario set coming from applications. One natural refinement of the uniform model is obtained by partitioning the set of resources into faulty and secure resources. The scenario set contains every subset of at most k faulty resources. This work studies the Fault-Tolerant Path (FTP) problem, the counterpart of the Shortest Path problem in this failure model. We present complexity results alongside exact and approximation algorithms for FTP. We emphasize the vast increase in the complexity of the problem with respect to its uniform analogue, the Edge-Disjoint Paths problem. 1 Introduction The Minimum-Cost Edge-Disjoint Path (EDP) problem is a classical network design problem, defined as follows. Given an edge-weighted graph G = (V,E), two terminals s,t ∈ V and an integer parameter k ∈ Z
A two-stage robust model called demand-robustness was introduced by @cite_18 . In this model a scenario corresponds to a subset of the initially given constraints that need to be satisfied. For shortest path this means that the source node and the target node are not fixed, but are rather revealed in the second stage. The authors provide approximation algorithms for several problems, such as Steiner Tree, Multi-Cut, Facility Location, etc., in the demand-robust setting. @cite_14 later improved some of those results, including a constant factor approximation algorithm for the robust shortest path problem. @cite_17 and @cite_8 studied an extension of the demand-robust model that allows exponentially many scenarios.
{ "cite_N": [ "@cite_8", "@cite_18", "@cite_14", "@cite_17" ], "mid": [ "2167240010", "2134396776", "1579892319", "1604332079" ], "abstract": [ "We study two-stage robustvariants of combinatorial optimization problems like Steiner tree, Steiner forest, and uncapacitated facility location. The robust optimization problems, previously studied by [1], [6], and [4], are two-stage planning problems in which the requirements are revealed after some decisions are taken in stage one. One has to then complete the solution, at a higher cost, to meet the given requirements. In the robust Steiner tree problem, for example, one buys some edges in stage one after which some terminals are revealed. In the second stage, one has to buy more edges, at a higher cost, to complete the stage one solution to build a Steiner tree on these terminals. The objective is to minimize the total cost under the worst-case scenario. In this paper, we focus on the case of exponentially manyscenarios given implicitly. A scenario consists of any subset of kterminals (for Steiner tree), or any subset of kterminal-pairs (for Steiner forest), or any subset of kclients (for facility location). We present the first constant-factor approximation algorithms for the robust Steiner tree and robust uncapacitated facility location problems. For the robust Steiner forest problem with uniform inflation, we present an O(logn)-approximation and show that the problem with two inflation factors is impossible to approximate within O(log1 2 i¾? i¾?n) factor, for any constant i¾?> 0, unless NP has randomized quasi-polynomial time algorithms. Finally, we show APX-hardness of the robust min-cut problem (even with singleton-set scenarios), resolving an open question by [1] and [6].", "Robust optimization has traditionally focused on uncertainty in data and costs in optimization problems to formulate models whose solutions will be optimal in the worst-case among the various uncertain scenarios in the model. 
While these approaches may be thought of defining data- or cost-robust problems, we formulate a new \"demand-robust\" model motivated by recent work on two-stage stochastic optimization problems. We propose this in the framework of general covering problems and prove a general structural lemma about special types of first-stage solutions for such problems: there exists a first-stage solution that is a minimal feasible solution for the union of the demands for some subset of the scenarios and its objective function value is no more than twice the optimal. We then provide approximation algorithms for a variety of standard discrete covering problems in this setting, including minimum cut, minimum multi-cut, shortest paths, Steiner trees, vertex cover and un-capacitated facility location. While many of our results draw from rounding approaches recently developed for stochastic programming problems, we also show new applications of old metric rounding techniques for cut problems in this demand-robust setting.", "Demand-robust versions of common optimization problems were recently introduced by [4] motivated by the worst-case considerations of two-stage stochastic optimization models. We study the demand robust min-cut and shortest path problems, and exploit the nature of the robust objective to give improved approximation factors. Specifically, we give a @math approximation for robust min-cut and a 7.1 approximation for robust shortest path. Previously, the best approximation factors were O(log n) for robust min-cut and 16 for robust shortest paths, both due to [4]. Our main technique can be summarized as follows: We investigate each of the second stage scenarios individually, checking if it can be independently serviced in the second stage within an acceptable cost (namely, a guess of the optimal second stage costs). 
For the costly scenarios that cannot be serviced in this way (“rainy days”), we show that they can be fully taken care of in a near-optimal first stage solution (i.e., by ”paying today”). We also consider “hitting-set” extensions of the robust min-cut and shortest path problems and show that our techniques can be combined with algorithms for Steiner multicut and group Steiner tree problems to give similar approximation guarantees for the hitting-set versions of robust min-cut and shortest path problems respectively.", "Following the well-studied two-stage optimization framework for stochastic optimization [15,8], we study approximation algorithms for robust two-stage optimization problems with an exponential number of scenarios. Prior to this work, [8] introduced approximation algorithms for two-stage robust optimization problems with explicitly given scenarios. In this paper, we assume the set of possible scenarios is given implicitly, for example by an upper bound on the number of active clients. In two-stage robust optimization, we need to pre-purchase some resources in the first stage before the adversary's action. In the second stage, after the adversary chooses the clients that need to be covered, we need to complement our solution by purchasing additional resources at an inflated price. The goal is to minimize the cost in the worst-case scenario. We give a general approach for solving such problems using LP rounding. Our approach uncovers an interesting connection between robust optimization and online competitive algorithms. We use this approach, together with known online algorithms, to develop approximation algorithms for several robust covering problems, such as set cover, vertex cover, and edge cover. We also study a simple buy-at-oncealgorithm that either covers all items in the first stage or does nothing in the first stage and waits to build the complete solution in the second stage. 
We show that this algorithm gives tight approximation factors for unweighted variants of these covering problems, but performs poorly for general weighted problems." ] }
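The demand-robust shortest path model above (terminal pair revealed in stage two, second-stage purchases at an inflated price) admits a tiny brute-force evaluator. This is only workable on toy instances and is unrelated to the cited approximation algorithms; the graph, scenarios, and inflation factor are assumed example data:

```python
from itertools import combinations
import heapq

def completion_cost(n, edges, bought, lam, s, t):
    # Cheapest second-stage completion: already-bought edges are free,
    # the rest cost lam * c. Plain Dijkstra on the modified weights.
    adj = [[] for _ in range(n)]
    for i, (u, v, c) in enumerate(edges):
        w = 0.0 if i in bought else lam * c
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [float("inf")] * n
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist[t]

def demand_robust_sp(n, edges, scenarios, lam):
    # Brute force over first-stage edge sets; exponential, toy sizes only.
    m = len(edges)
    best = float("inf")
    for r in range(m + 1):
        for sub in combinations(range(m), r):
            bought = set(sub)
            first = sum(edges[i][2] for i in bought)
            worst = max(completion_cost(n, edges, bought, lam, s, t)
                        for s, t in scenarios)
            best = min(best, first + worst)
    return best

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 3.0)]
scenarios = [(0, 2), (0, 1)]    # the terminal pair revealed in stage two
opt = demand_robust_sp(3, edges, scenarios, lam=2.0)
assert opt <= 4.0   # never worse than deferring every purchase to stage two
```

Here buying the two cheap edges up front (cost 2) covers both scenarios for free in stage two, which is the optimum for this instance.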
1301.5912
1680985513
This chapter presents joint interference suppression and power allocation algorithms for DS-CDMA and MIMO networks with multiple hops and amplify-and-forward and decode-and-forward (DF) protocols. A scheme for joint allocation of power levels across the relays and linear interference suppression is proposed. We also consider another strategy for joint interference suppression and relay selection that maximizes the diversity available in the system. Simulations show that the proposed cross-layer optimization algorithms obtain significant gains in capacity and performance over existing schemes.
Prior work on cross-layer design for cooperative and multihop communications has considered the problem of resource allocation @cite_0 @cite_28 in generic networks. These strategies include power and rate allocation. Related work on cooperative multiuser DS-CDMA networks has focused on the assessment of the impact of multiple access interference (MAI) and intersymbol interference (ISI), the problem of partner selection @cite_26 @cite_35 , the bit error rate (BER) and outage performance analysis @cite_36 , and training-based joint power allocation and interference mitigation strategies @cite_31 @cite_22 . Previous works have also considered the problems of antenna selection, relay selection (RS) and diversity maximization, which are central themes in the MIMO relaying literature @cite_23 @cite_14 @cite_5 . However, current approaches are often limited to stationary single-relay systems and to channels where the direct path from the source to the destination is assumed negligible @cite_15 .
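The linear interference suppression referred to above typically means a minimum mean-square-error (MMSE) filter. A minimal single-cell sketch with assumed spreading signatures follows; it is not the chapter's joint power-allocation and relay-selection scheme, only the detector building block:

```python
import numpy as np

# Toy synchronous DS-CDMA uplink: two users, spreading gain 4. The
# signatures are assumed example data; they are correlated, so a plain
# matched filter would suffer multiple access interference (MAI).
S = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]]) / 2.0     # columns: unit-norm signatures
sigma2 = 0.01                          # noise variance

# Linear MMSE receiver for user 0: w = (S S^T + sigma^2 I)^{-1} s_0.
w = np.linalg.solve(S @ S.T + sigma2 * np.eye(4), S[:, 0])

rng = np.random.default_rng(0)
b = np.array([1.0, -1.0])              # transmitted BPSK symbols
r = S @ b + np.sqrt(sigma2) * rng.standard_normal(4)
assert np.sign(w @ r) == b[0]          # user 0 detected despite the MAI
```

The cross-layer schemes in the chapter adapt filters of this form jointly with the power levels across relays rather than fixing them per link.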
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_14", "@cite_22", "@cite_15", "@cite_28", "@cite_36", "@cite_0", "@cite_23", "@cite_5", "@cite_31" ], "mid": [ "1967708117", "2114182286", "", "2099561974", "", "2099554716", "2097279726", "2111874886", "2149155827", "", "2168595589" ], "abstract": [ "We investigate strategies for user cooperation in the uplink of a synchronous direct-sequence code-division multiple-access (DS CDMA) network employing nonorthogonal spreading codes and analyze their performance. We consider two repetition-based relay schemes: decode-and-forward (DAF) and amplify-and-forward (AAF). Focusing on the use of linear multiuser detectors, we first present cooperation strategies, i.e., signal processing at both the relay nodes and the base station (BS), under the assumption of perfectly known channel conditions of all links; then, we consider the more practical scenario where relays and BS have only partial information about the system parameters, which requires blind multiuser detection methods. We provide performance analysis of the proposed detection strategies in terms of the (asymptotic) signal-to-(interference plus noise) ratio and the bit error rate, and we show that AAF achieves a full second-order diversity when a minimum mean-square-error detector is employed at both the relay side and the BS. A simple, yet effective, partner selection algorithm is also presented. Finally, a thorough performance assessment is undertaken to study the impact of the multiple-access interference on the proposed cooperative strategies under different scenarios and system assumptions", "The use of multi-user detection (MUD) in a cooperative CDMA network is investigated for the uplink in synchronous CDMA systems. Suppose that, at any instant in time, part of the users serve as sources while the others serve as relays. The proposed MUD scheme decorrelates the sources' messages at the destination with the help of precoding at the relays. 
Three cooperation methods are considered: (1) transmit beamforming, (2) selective relaying and (3) distributed space- time coding. The optimal weighting factors of each method are determined by taking the quality of the source-to-relay and or the relay-to-destination links into account. We show that significant improvements in terms of the spatial diversity and multiple-access interference (MAI) mitigation can be attained when precoding is employed at the relays to aid the decorrelation at the destination. The advantages are even more pronounced when selective relaying is combined with the other two schemes.", "", "In this paper, multiple-input multiple-output (MIMO) relay transceiver processing is proposed for multiuser two-way relay communications. The relay processing is optimized based on both zero-forcing (ZF) and minimum mean-square-error (MMSE) criteria under relay power constraints. Various transmit and receive beamforming methods are compared including eigen beamforming, antenna selection, random beamforming, and modified equal gain beamforming. Local and global power control methods are designed to achieve fairness among all users and to maximize the system signal-to-noise ratio (SNR). Numerical results show that the proposed multiuser two-way relay processing can efficiently eliminate both co-channel interference (CCI) and self-interference (SI).", "", "We propose cross-layer optimization frameworks for multihop wireless networks using cooperative diversity. These frameworks provide solutions to fundamental relaying problems of determining who should be relays for whom and how to perform resource allocation for these relaying schemes jointly with routing and congestion control such that the system performance is optimized. We present a fully distributed algorithm where the joint routing, relay selection, and power allocation problem to minimize network power consumption is solved by using convex optimization. 
Via dual decomposition, the master optimization problem is decomposed into a routing subproblem in the network layer and a joint relay selection and power allocation subproblem in the physical layer, which can be solved efficiently in a distributed manner. We then extend the framework to incorporate congestion control and develop a framework for optimizing the sum rate utility and power tradeoff for wireless networks using cooperative diversity. The numerical results show the convergence of the proposed algorithms and significant improvement in terms of power consumption and source rates due to cooperative diversity.", "This paper investigates the impact of inter-user non-orthogonality and asynchronous communication on the information-outage probability performance of multi-user decode-and-forward (DF) cooperative diversity in a code-division multiple-access (CDMA) uplink. Each user in the proposed system transmits its own data towards the base station and also serves as a relay for other users. We assume full-duplex communication so that each user can transmit and receive simultaneously at the same frequency. Each user attempts to decode the messages of a plurality of other users and forwards the superposition of multiple re-encoded and re-spread messages. Our cooperative scheme employs a sub-optimum decorrelating receiver to suppress the multi-user interference at both the base station and the relay-side. We evaluate the information-outage probability performance of the proposed scheme in an underloaded, fully-loaded and overloaded CDMA uplink. We consider combining schemes at the base station where the source information is code combined with the relayed information, while the information from multiple relays is either code combined or diversity combined. Under the system parameters contemplated in this paper, diversity combining of the relayed information is nearly as good as code combining because of the associated probabilities of decoding at the relays. 
We then examine the effect of using practical modulation formats on the information-outage probability performance of the proposed DF multi-user sharing scheme under diversity combining. We see that the performance loss due to modulation constraints and the use of diversity combining instead of code combining is relatively small.", "We study power allocation for the decode-and-forward cooperative diversity protocol in a wireless network under the assumption that only mean channel gains are available at the transmitters. In a Rayleigh fading channel with uniformly distributed node locations, we aim to find the power allocation that minimizes the outage probability under a short-term power constraint, wherein the total power for all nodes is less than a prescribed value during each two-stage transmission. Due to the computational and implementation complexity of the optimal solution, we derived a simple near-optimal solution. In this near-optimal scheme, a fixed fraction of the total power is allocated to the source node in stage I. In stage II, the remaining power is split equally among a set of selected nodes if the selected set is not empty, and otherwise is allocated to the source node. A node is selected if it can decode the message from the source and its mean channel gain to the destination is above a threshold. In this scheme, each node only needs to know its own mean channel gain to the destination and the number of selected nodes. Simulation results show that the proposed scheme achieves an outage probability close to that for the optimal scheme obtained by numerical search, and achieves significant performance gain over other schemes in the literature", "We propose a greedy minimum mean squared error (MMSE)-based antenna selection algorithm for amplify-and-forward (AF) multiple-input multiple-output (MIMO) relay systems. 
Assuming equal-power allocation across the multi-stream data, we derive a closed form expression for the mean squared error (MSE) resulted from adding each additional antenna pair. Based on this result, we iteratively select the antenna-pairs at the relay nodes to minimize the MSE. Simulation results show that our algorithm greatly outperforms the existing schemes.", "", "This work presents joint iterative power allocation and interference suppression algorithms for spread spectrum networks with multiple relays and the amplify and forward cooperation strategy. A joint constrained optimization framework that considers the allocation of power levels across the relays subject to individual and global power constraints and the design of linear receivers for interference suppression is proposed. Constrained minimum mean-squared error (MMSE) expressions for the parameter vectors that determine the optimal power levels across the relays and the parameters of the linear receivers are derived. In order to solve the proposed optimization problems efficiently, stochastic gradient (SG) algorithms for adaptive joint iterative power allocation, and receiver and channel parameter estimation are developed. The results of simulations show that the proposed algorithms obtain significant gains in performance and capacity over existing cooperative and non-cooperative schemes." ] }
1301.5912
1680985513
This chapter presents joint interference suppression and power allocation algorithms for DS-CDMA and MIMO networks with multiple hops and amplify-and-forward (AF) and decode-and-forward (DF) protocols. A scheme for the joint allocation of power levels across the relays and linear interference suppression is proposed. We also consider another strategy for joint interference suppression and relay selection that maximizes the diversity available in the system. Simulations show that the proposed cross-layer optimization algorithms obtain significant gains in capacity and performance over existing schemes.
Most of these resource allocation and interference mitigation strategies incur a high computational cost to compute the power allocation and require a significant amount of signalling, which decreases the spectral efficiency of cooperative networks. This problem is central to ad-hoc and sensor networks @cite_38 that employ spread spectrum systems and require multiple hops to communicate with nodes far from the source node. It is also of paramount importance in cooperative cellular networks.
{ "cite_N": [ "@cite_38" ], "mid": [ "2054629881" ], "abstract": [ "This paper investigates the benefit of adaptive modulation based on channel state information (CSI) in direct-sequence code-division multiple-access (DS CDMA) multihop packet radio networks. By exploiting varying channel conditions, adaptive modulation can be used in ad hoc networks to provide upper layers with higher capacity links over which to relay traffic. Using the spl alpha -stable interference model, the distribution of the signal-to-interference ratio (SIR) is obtained for a slotted system of randomly, uniformly distributed nodes using multilevel coherent modulation schemes. Performance is evaluated in terms of the information efficiency, which is a new progress-related measure for multihop networks. Three types of adaptivity are analyzed, differing in the level of CSI available: 1) full knowledge of the SIR at the receiver; 2) knowledge of only the signal attenuation due to fading; and 3) knowledge of only the slow fading component of the signal attenuation. The effect of imperfect channel information is also investigated. Sample results are given for interference-limited networks experiencing fourth-power path loss with distance, Ricean fading, and lognormal shadowing." ] }
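Several of the works above build on linear MMSE multiuser detection to suppress multiple-access interference in a synchronous DS-CDMA uplink. The following is a minimal sketch of that idea, not any specific scheme from the cited papers: the spreading gain, number of users, amplitudes, and noise variance are all assumed values chosen for illustration. It computes the MMSE filter for one user and compares its output SINR against a plain matched filter.

```python
import numpy as np

# Illustrative linear MMSE receiver for a synchronous DS-CDMA uplink.
# All parameters below are assumptions for the sketch, not values from
# any cited paper.
rng = np.random.default_rng(0)

N, K = 16, 4                 # spreading gain, number of users (assumed)
sigma2 = 0.1                 # noise variance (assumed)
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # spreading codes
A = np.diag([1.0, 0.8, 0.8, 0.6])                      # received amplitudes

# Covariance of the received chip vector r = S A b + n
R = S @ A @ A @ S.T + sigma2 * np.eye(N)

# MMSE filter for user 0 (optimal linear SINR up to a scaling factor)
w = np.linalg.solve(R, S[:, 0]) * A[0, 0]

def output_sinr(f):
    """Output SINR of linear filter f for user 0."""
    signal = (A[0, 0] * (f @ S[:, 0])) ** 2
    interference = sum((A[k, k] * (f @ S[:, k])) ** 2 for k in range(1, K))
    noise = sigma2 * (f @ f)
    return signal / (interference + noise)

sinr_mmse = output_sinr(w)
sinr_mf = output_sinr(S[:, 0])   # matched filter baseline

print(f"MMSE SINR: {sinr_mmse:.2f}, matched-filter SINR: {sinr_mf:.2f}")
```

Since the MMSE filter maximizes the output SINR among all linear receivers, `sinr_mmse` should never fall below `sinr_mf`; the gap widens as the cross-correlations between spreading codes grow, which is precisely the regime the decorrelating and MMSE-based schemes in the abstracts above target.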