aid: string (9-15 chars)
mid: string (7-10 chars)
abstract: string (78-2.56k chars)
related_work: string (92-1.77k chars)
ref_abstract: dict
1208.0180
2949381561
In this work, we study the fundamental naming and counting problems (and some variations) in networks that are anonymous, unknown, and possibly dynamic. In counting, nodes must determine the size of the network n and in naming they must end up with unique identities. By anonymous we mean that all nodes begin from identical states apart possibly from a unique leader node and by unknown that nodes have no a priori knowledge of the network (apart from some minimal knowledge when necessary) including ignorance of n. Network dynamicity is modeled by the 1-interval connectivity model, in which communication is synchronous and a worst-case adversary chooses the edges of every round subject to the condition that each instance is connected. We first focus on static networks with broadcast where we prove that, without a leader, counting is impossible to solve and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. We also show that a unique leader suffices in order to solve counting in linear time. Then we focus on dynamic networks with broadcast. We conjecture that dynamicity renders nontrivial computation impossible. In view of this, we let the nodes know an upper bound on the maximum degree that will ever appear and show that in this case the nodes can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. Interestingly, this natural variation is proved to be computationally equivalent to a full-knowledge model, in which unique names exist and the size of the network is known.
Distributed systems with worst-case dynamicity were first studied in @cite_19 . Their outstanding novelty was to assume a communication network that may change arbitrarily from time to time, subject to the condition that each instance of the network is connected. They studied asynchronous communication and allowed nodes to detect local neighborhood changes. They studied the flooding and routing problems in this setting and, among others, provided a uniform protocol for flooding that terminates in @math rounds using @math bit storage and message overhead, where @math is the maximum time it takes to transmit a message.
{ "cite_N": [ "@cite_19" ], "mid": [ "2120957127" ], "abstract": [ "We investigate to what extent flooding and routing is possible if the graph is allowed to change unpredictably at each time step. We study what minimal requirements are necessary so that a node may correctly flood or route a message in a network whose links may change arbitrarily at any given point, subject to the condition that the underlying graph is connected. We look at algorithmic constraints such as limited storage, no knowledge of an upper bound on the number of nodes, and no usage of identifiers. We look at flooding as well as routing to some existing specified destination and give algorithms." ] }
1208.0180
2949381561
Computation under worst-case dynamicity was further and extensively studied in a series of works by Kuhn in the synchronous case. In @cite_1 , among others, counting (in which nodes must determine the size of the network) and all-to-all token dissemination (in which @math different pieces of information, called tokens, are handed out to the @math nodes of the network, each node being assigned one token, and all nodes must collect all @math tokens) were solved in @math rounds using @math bits per message. Several variants of coordinated consensus in 1-interval connected networks were studied in @cite_7 . Requiring continuous connectivity has been supported by the findings of @cite_8 , which proposed a connectivity service for mobile robot swarms that encapsulates an arbitrary motion planner and can refine any plan to preserve connectivity while ensuring progress.
{ "cite_N": [ "@cite_1", "@cite_7", "@cite_8" ], "mid": [ "2120741723", "2135254481", "1590585877" ], "abstract": [ "In this paper we investigate distributed computation in dynamic networks in which the network topology changes from round to round. We consider a worst-case model in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. The model captures mobile networks and wireless networks, in which mobility and interference render communication unpredictable. In contrast to much of the existing work on dynamic networks, we do not assume that the network eventually stops changing; we require correctness and termination even in networks that change continually. We introduce a stability property called T -interval connectivity (for T >= 1), which stipulates that for every T consecutive rounds there exists a stable connected spanning subgraph. For T = 1 this means that the graph is connected in every round, but changes arbitrarily between rounds. We show that in 1-interval connected graphs it is possible for nodes to determine the size of the network and compute any com- putable function of their initial inputs in O(n2) rounds using messages of size O(log n + d), where d is the size of the input to a single node. Further, if the graph is T-interval connected for T > 1, the computation can be sped up by a factor of T, and any function can be computed in O(n + n2 T) rounds using messages of size O(log n + d). We also give two lower bounds on the token dissemination problem, which requires the nodes to disseminate k pieces of information to all the nodes in the network. The T-interval connected dynamic graph model is a novel model, which we believe opens new avenues for research in the theory of distributed computing in wireless, mobile and dynamic networks.", "We study several variants of coordinated consensus in dynamic networks. 
We assume a synchronous model, where the communication graph for each round is chosen by a worst-case adversary. The network topology is always connected, but can change completely from one round to the next. The model captures mobile and wireless networks, where communication can be unpredictable. In this setting we study the fundamental problems of eventual, simultaneous, and Δ-coordinated consensus, as well as their relationship to other distributed problems, such as determining the size of the network. We show that in the absence of a good initial upper bound on the size of the network, eventual consensus is as hard as computing deterministic functions of the input, e.g., the minimum or maximum of inputs to the nodes. We also give an algorithm for computing such functions that is optimal in every execution. Next, we show that simultaneous consensus can never be achieved in less than n - 1 rounds in any execution, where n is the size of the network; consequently, simultaneous consensus is as hard as computing an upper bound on the number of nodes in the network. For Δ-coordinated consensus, we show that if the ratio between nodes with input 0 and input 1 is bounded away from 1, it is possible to decide in time n-Θ(√ nΔ), where Δ bounds the time from the first decision until all nodes decide. If the dynamic graph has diameter D, the time to decide is min O(nD Δ),n-Ω(nΔ D) , even if D is not known in advance. Finally, we show that (a) there is a dynamic graph such that for every input, no node can decide before time n-O(Δ0.28n0.72); and (b) for any diameter D = O(Δ), there is an execution with diameter D where no node can decide before time Ω(nD Δ). 
To our knowledge, our work constitutes the first study of Δ-coordinated consensus in general graphs.", "Designing robust algorithms for mobile agents with reliable communication is difficult due to the distributed nature of computation, in mobile ad hoc networks (MANETs) the matter is exacerbated by the need to ensure connectivity. Existing distributed algorithms provide coordination but typically assume connectivity is ensured by other means. We present a connectivity service that encapsulates an arbitrary motion planner and can refine any plan to preserve connectivity (the graph of agents remains connected) and ensure progress (the agents advance towards their goal). The service is realized by a distributed algorithm that is modular in that it makes no assumptions of the motion-planning mechanism except the ability for an agent to query its position and intended goal position, local in that it uses 1-hop broadcast to communicate with nearby agents but doesn't need any network routing infrastructure, and oblivious in that it does not depend on previous computations. We prove the progress of the algorithm in one round is at least Ω(min(d, r)), where d is the minimum distance between an agent and its target and r is the communication radius. We characterize the worst case configuration and show that when d ≥ r this bound is tight and the algorithm is optimal, since no algorithm can guarantee greater progress. Finally we show all agents get Ɛ-close to their targets within O(D0 r+n2 Ɛ) rounds where n is the number of agents and D0 is the sum of the initial distances to the targets." ] }
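The token-forwarding approach analyzed in @cite_1 can be sketched with a toy simulation (an illustrative sketch under simplifying assumptions, not the cited algorithm: the adversary is replaced by a random spanning tree per round, and each node broadcasts a uniformly random token it knows rather than following the paper's schedule):

```python
import random

def random_connected_graph(n, rng):
    """Random spanning tree: the minimum a 1-interval adversary must provide."""
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = set()
    for i in range(1, n):
        u, v = nodes[i], nodes[rng.randrange(i)]
        edges.add((min(u, v), max(u, v)))
    return edges

def disseminate(n, seed=1):
    """Rounds until all n tokens reach all nodes, one token per broadcast."""
    rng = random.Random(seed)
    known = [{i} for i in range(n)]   # node i starts with token i only
    rounds = 0
    while any(len(k) < n for k in known):
        edges = random_connected_graph(n, rng)
        sent = [rng.choice(sorted(known[i])) for i in range(n)]
        for u, v in edges:
            known[u].add(sent[v])
            known[v].add(sent[u])
        rounds += 1
    return rounds

print(disseminate(12))
```

The key constraint, matching the model, is that a node forwards a single token per round even though it may know many, which is what drives the quadratic round complexity.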
1208.0180
2949381561
Some recent works @cite_23 @cite_10 present information spreading algorithms in worst-case dynamic networks based on network coding. A setting in which nodes constantly join and leave has very recently been considered in @cite_21 . For an excellent introduction to distributed computation under worst-case dynamicity see @cite_25 . Two very thorough surveys on dynamic networks are @cite_24 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_24", "@cite_23", "@cite_10", "@cite_25" ], "mid": [ "1746637951", "", "1496893056", "2952030388", "2098329996", "1970700632" ], "abstract": [ "The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems – delay-tolerant networks, opportunistic-mobility networks and social networks – obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are the components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms and results found in the literature into a unified framework, which we call time-varying graphs TVGs. Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are atemporal as in the majority of existing studies or temporal. Finally, we briefly discuss the introduction of randomness in TVGs.", "", "In this paper we will present various models and techniques for communication in dynamic networks. Dynamic networks are networks of dynamically changing bandwidth or topology. Situations in which dynamic networks occur are, for example: faulty networks (links go up and down), the Internet (the bandwidth of connections may vary), and wireless networks (mobile units move around). 
We investigate the problem of how to ensure connectivity, how to route, and how to perform admission control in these networks. Some of these problems have already been partly solved, but many problems are still wide open. The aim of this paper is to give an overview of recent results in this area, to identify some of the most interesting open problems and to suggest models and techniques that allow us to study them.", "We give a new technique to analyze the stopping time of gossip protocols that are based on random linear network coding (RLNC). Our analysis drastically simplifies, extends and strengthens previous results. We analyze RLNC gossip in a general framework for network and communication models that encompasses and unifies the models used previously in this context. We show, in most settings for the first time, that it converges with high probability in the information-theoretically optimal time. Most stopping times are of the form O(k + T) where k is the number of messages to be distributed and T is the time it takes to disseminate one message. This means RLNC gossip achieves \"perfect pipelining\". Our analysis directly extends to highly dynamic networks in which the topology can change completely at any time. This remains true even if the network dynamics are controlled by a fully adaptive adversary that knows the complete network state. Virtually nothing besides simple O(kT) sequential flooding protocols was previously known for such a setting. While RLNC gossip works in this wide variety of networks its analysis remains the same and extremely simple. This contrasts with more complex proofs that were put forward to give less strong results for various special cases.", "We use network coding to improve the speed of distributed computation in the dynamic network model of Kuhn, Lynch and Oshman [STOC '10]. In this model an adversary adaptively chooses a new network topology in every round, making even basic distributed computations challenging. 
show that n nodes, each starting with a d-bit token, can broadcast them to all nodes in time O(n2) using b-bit messages, where b > d + log n. Their algorithms take the natural approach of token forwarding: in every round each node broadcasts some particular token it knows. They prove matching Ω(n2) lower bounds for a natural class of token forwarding algorithms and an Ω(n log n) lower bound that applies to all token-forwarding algorithms. We use network coding, transmitting random linear combinations of tokens, to break both lower bounds. Our algorithm's performance is quadratic in the message size b, broadcasting the n tokens in roughly d b2 * n2 rounds. For b = d = Θ(log n) our algorithms use O(n2 log n) rounds, breaking the first lower bound, while for larger message sizes we obtain linear-time algorithms. We also consider networks that change only every T rounds, and achieve an additional factor T2 speedup. This contrasts with related lower and upper bounds of implying that for natural token-forwarding algorithms a speedup of T, but not more, can be obtained. Lastly, we give a general way to derandomize random linear network coding, that also leads to new deterministic information dissemination algorithms.", "" ] }
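The random linear network coding idea behind these gossip protocols can be sketched over GF(2): each node broadcasts a random XOR of the coded packets it has collected, and decoding succeeds once a node's collection has full rank. This is an illustrative sketch, not the cited algorithms; the topology is a random tree per round and all function names are our own.

```python
import random

def random_connected_graph(n, rng):
    """Random spanning tree, redrawn each round (worst-case dynamicity stand-in)."""
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = set()
    for i in range(1, n):
        u, v = nodes[i], nodes[rng.randrange(i)]
        edges.add((min(u, v), max(u, v)))
    return edges

def add_to_basis(basis, vec):
    """GF(2) elimination: basis maps pivot bit -> row (int). True if rank grows."""
    while vec:
        lead = vec.bit_length() - 1
        if lead not in basis:
            basis[lead] = vec
            return True
        vec ^= basis[lead]
    return False

def random_combo(basis, rng):
    """Random GF(2) linear combination of the rows a node has collected."""
    v = 0
    for row in basis.values():
        if rng.random() < 0.5:
            v ^= row
    return v

def rlnc_broadcast(n, k, seed=2):
    """Rounds until all n nodes span all k coefficient vectors (source = node 0)."""
    rng = random.Random(seed)
    bases = [dict() for _ in range(n)]
    for j in range(k):                # the source starts with the identity basis
        add_to_basis(bases[0], 1 << j)
    rounds = 0
    while any(len(b) < k for b in bases):
        edges = random_connected_graph(n, rng)
        sent = [random_combo(bases[i], rng) for i in range(n)]
        for u, v in edges:
            add_to_basis(bases[u], sent[v])
            add_to_basis(bases[v], sent[u])
        rounds += 1
    return rounds

print(rlnc_broadcast(8, 5))
```

The point the cited works formalize is that a random combination crosses any cut with useful (rank-increasing) information with constant probability, regardless of how the adversary rewires the graph.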
1207.7103
2950182664
While a natural fit for modeling and understanding mobile networks, time-varying graphs remain poorly understood. Indeed, many of the usual concepts of static graphs have no obvious counterpart in time-varying ones. In this paper, we introduce the notion of temporal reachability graphs. A (tau,delta)-reachability graph is a time-varying directed graph derived from an existing connectivity graph. An edge exists from one node to another in the reachability graph at time t if there exists a journey (i.e., a spatiotemporal path) in the connectivity graph from the first node to the second, leaving after t, with a positive edge traversal time tau, and arriving within a maximum delay delta. We make three contributions. First, we develop the theoretical framework around temporal reachability graphs. Second, we harness our theoretical findings to propose an algorithm for their efficient computation. Finally, we demonstrate the analytic power of the temporal reachability graph concept by applying it to synthetic and real-life datasets. On top of defining clear upper bounds on communication capabilities, reachability graphs highlight asymmetric communication opportunities and offloading potential.
Adaptations of traditional static graph distance metrics and algorithms to time-varying graphs have yielded many different concepts. For example, Orda et al. propose a shortest path algorithm for TVGs based on different waiting policies (unrestricted, forbidden, and source waiting) @cite_10 . Our work corresponds to the unrestricted policy, in which a message may wait for an unlimited amount of time anywhere along its path through the TVG. Bui-Xuan et al. have proposed efficient algorithms for calculating shortest (in number of hops), fastest (in path traversal time), and foremost (i.e., earliest arrival) paths in TVGs @cite_1 . All these algorithms are designed to compute the shortest paths to all destinations from a source and a fixed starting time. Our algorithm, in contrast, computes the reachability graph by estimating the shortest paths for all possible starting times.
{ "cite_N": [ "@cite_1", "@cite_10" ], "mid": [ "1984196269", "2089338760" ], "abstract": [ "New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks, that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the network topology are hard to be effectively captured in a classical graph model. In this paper, we use and extend a recently proposed graph theoretic model, which helps capture the evolving characteristic of such networks, in order to propose and formally analyze least cost journey (the analog of paths in usual graphs) in a class of dynamic networks, where the changes in the topology can be predicted in advance. Cost measures investigated here are hop count (shortest journeys), arrival date (foremost journeys), and time span (fastest journeys).", "In this paper the shortest-path problem in networks in which the delay (or weight) of the edges changes with time according to arbitrary functions is considered. Algorithms for finding the shortest path and minimum delay under various waiting constraints are presented and the properties of the derived path are investigated. It is shown that if departure time from the source node is unrestricted, then a shortest path can be found that is simple and achieves a delay as short as the most unrestricted path. In the case of restricted transit, it is shown that there exist cases in which the minimum delay is finite, but the path that achieves it is infinite." ] }
1207.7103
2950182664
Several approaches to reachability in time-varying graphs exist. For strictly positive edge traversal times, a simple heuristic consists in dividing time into successive slots of length @math and keeping only edges that are persistently present during each slot @cite_7 . This provides a good lower-bound approximation for small values of @math (i.e., less than @math ), whereas our approach can handle arbitrary edge traversal times. From a given starting time @math , reachability among all pairs of nodes can be calculated by iterating over all edge UP/DOWN events @cite_24 . This calculation can then be repeated for a sample of starting times @cite_28 . This approach yields static reachability graphs for a discrete sequence of starting times, whereas the temporal reachability graphs defined in this paper calculate reachability in continuous time.
{ "cite_N": [ "@cite_24", "@cite_28", "@cite_7" ], "mid": [ "1986909918", "1981599289", "" ], "abstract": [ "The analysis of social and technological networks has attracted a lot of attention as social networking applications and mobile sensing devices have given us a wealth of real data. Classic studies looked at analysing static or aggregated networks, i.e., networks that do not change over time or built as the results of aggregation of information over a certain period of time. Given the soaring collections of measurements related to very large, real network traces, researchers are quickly starting to realise that connections are inherently varying over time and exhibit more dimensionality than static analysis can capture. In this paper we propose new temporal distance metrics to quantify and compare the speed (delay) of information diffusion processes taking into account the evolution of a network from a global view. We show how these metrics are able to capture the temporal characteristics of time-varying graphs, such as delay, duration and time order of contacts (interactions), compared to the metrics used in the past on static graphs. We also characterise network reachability with the concepts of in- and out-components. Then, we generalise them with a global perspective by defining temporal connected components. As a proof of concept we apply these techniques to two classes of time-varying networks, namely connectivity of mobile devices and interactions on an online social network.", "We use real-world contact sequences, time-ordered lists of contacts from one person to another, to study how fast information or disease can spread across network of contacts. Specifically we measure the reachability time the average shortest time for a series of contacts to spread information between a reachable pair of vertices (a pair where a chain of contacts exists leading from one person to the other) and the reachability ratio the fraction of reachable vertex pairs. 
These measures are studied using conditional uniform graph tests. We conclude, among other things, that the network reachability depends much on a core where the path lengths are short and communication frequent, that clustering of the contacts of an edge in time tends to decrease the reachability, and that the order of the contacts really does make sense for dynamical spreading processes.", "" ] }
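The (tau, delta)-reachability edge test defined in this paper's abstract can be sketched directly from the definition: an edge from u to v exists at time t if some journey leaves u after t and arrives at v within delay delta, each hop taking time tau. A minimal sketch over discrete contacts, with invented contact data; the paper's actual algorithm works in continuous time.

```python
def reachability_edge(contacts, u, v, t, tau, delta):
    """(tau, delta)-reachability: is there a journey from u leaving after t
    that reaches v by t + delta, with per-hop traversal time tau?"""
    arrival = {u: t}
    for s, a, b in sorted(contacts):
        if s < t or s + tau > t + delta:
            continue  # contact unusable: too early, or cannot arrive in time
        if arrival.get(a, float("inf")) <= s:
            arrival[b] = min(arrival.get(b, float("inf")), s + tau)
        if arrival.get(b, float("inf")) <= s:
            arrival[a] = min(arrival.get(a, float("inf")), s + tau)
    return arrival.get(v, float("inf")) <= t + delta

contacts = [(0, "a", "b"), (2, "b", "c"), (9, "c", "d")]
print(reachability_edge(contacts, "a", "c", 0, 1, 4))  # journey a->b->c arrives at 3
print(reachability_edge(contacts, "a", "d", 0, 1, 4))  # the contact at time 9 is too late
```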
1207.7103
2950182664
Chaintreau et al., in their work on the diameter of opportunistic networks, calculate a structure for each pair of nodes @cite_17 . This structure can tell, for any pair of nodes @math and any time @math , the earliest arrival time of a message leaving @math for @math at time @math . A reachability graph could easily be derived from these structures, but would unfortunately only cover the @math case. Our approach is more general, since it can also handle non-zero edge traversal times.
{ "cite_N": [ "@cite_17" ], "mid": [ "2164988386" ], "abstract": [ "Portable devices have more data storage and increasing communication capabilities everyday. In addition to classic infrastructure based communication, these devices can exploit human mobility and opportunistic contacts to communicate. We analyze the characteristics of such opportunistic forwarding paths. We establish that opportunistic mobile networks in general are characterized by a small diameter, a destination device is reachable using only a small number of relays under tight delay constraint. This property is first demonstrated analytically on a family of mobile networks which follow a random graph process. We then establish a similar result empirically with four data sets capturing human mobility, using a new methodology to efficiently compute all the paths that impact the diameter of an opportunistic mobile networks. We complete our analysis of network diameter by studying the impact of intensity of contact rate and contact duration. This work is, to our knowledge, the first validation that the so called \"small world\" phenomenon applies very generally to opportunistic networking between mobile nodes." ] }
1207.7298
2952415748
In this paper, we characterize the throughput of a broadcast network with n receivers using rateless codes with block size K. We assume that the underlying channel is a Markov modulated erasure channel that is i.i.d. across users, but can be correlated in time. We characterize the system throughput asymptotically in n. Specifically, we explicitly show how the throughput behaves for different values of the coding block size K as a function of n, as n approaches infinity. For finite values of K and n, under the more restrictive assumption of Gilbert-Elliott channels, we are able to provide a lower bound on the maximum achievable throughput. Using simulations we show the tightness of the bound with respect to system parameters n and K, and find that its performance is significantly better than the previously known lower bounds.
Among the works that investigate throughput over erasure channels, @cite_10 , @cite_8 , @cite_3 and @cite_0 are the most relevant to this work. In @cite_3 , the authors investigate the asymptotic throughput as a function of @math and @math and also show that the asymptotic throughput is non-zero only if K at least scales with @math . However, they only consider the channel correlation model with @math and use a completely different proof technique. Moreover, no explicit expression for the asymptotic throughput is provided. In @cite_10 and @cite_8 , two lower bounds on the maximum achievable rate @math are provided. However, their bounds do not converge to the asymptotic throughput when @math approaches infinity. Moreover, our bound is shown to be better in a variety of simulation settings with finite @math and @math , as will be shown in . In @cite_0 , the authors consider the case where instantaneous feedback is provided by every user after the transmission of each decoded packet, while we only assume that feedback is provided after the entire coding block has been decoded.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_3", "@cite_8" ], "mid": [ "2097285691", "2134109936", "2951615571", "2018161086" ], "abstract": [ "We consider the throughput-delay tradeoff in network coded transmission over erasure broadcast channels. Interested in minimizing decoding delay, we formulate the problem of instantly decodable network coding as an integer linear program and propose algorithms to solve it heuristically. In particular, we investigate channels with memory and propose algorithms that can exploit channel erasure dependence to increase throughput and decrease delay.", "In this work we compare scheduling and coding strategies for a source node serving multiple multicast flows in a network. The coding strategy we consider is a form of random coding proposed in [6] and involves coding across flows, as treated in [2]. We show that there are configurations for which the coding strategy outperforms any scheduling strategy that uses channel state information.", "In an unreliable single-hop broadcast network setting, we investigate the throughput and decoding-delay performance of random linear network coding as a function of the coding window size and the network size. Our model consists of a source transmitting packets of a single flow to a set of @math users over independent erasure channels. The source performs random linear network coding (RLNC) over @math (coding window size) packets and broadcasts them to the users. We note that the broadcast throughput of RLNC must vanish with increasing @math , for any fixed @math Hence, in contrast to other works in the literature, we investigate how the coding window size @math must scale for increasing @math . Our analysis reveals that the coding window size of @math represents a phase transition rate, below which the throughput converges to zero, and above which it converges to the broadcast capacity. 
Further, we characterize the asymptotic distribution of decoding delay and provide approximate expressions for the mean and variance of decoding delay for the scaling regime of @math . These asymptotic expressions reveal the impact of channel correlations on the throughput and delay performance of RLNC. We also show how our analysis can be extended to other rateless block coding schemes such as the LT codes. Finally, we comment on the extension of our results to the cases of dependent channels across users and an asymmetric channel model.", "This paper compares scheduling and coding strategies for a multicast version of a classic downlink problem. We consider scheduling strategies where, in each time slot, a scheduler observes the lengths of all queues and the connectivities of all links and can transmit the head-of-the-line packet from a single queue. We juxtapose this to a coding strategy that is simply a form of classical random linear coding. We show that there are configurations for which the stable throughput region of the scheduling strategy is a strict subset of the corresponding throughput region of the coding strategy. This analysis is performed for both time-invariant and time-varying channels. The analysis is also performed both with and without accounting for the impact on throughput of including coding overhead symbols in each encoded packet. Additionally, we compare coding strategies that only code within individual queues against a coding strategy that codes across separate queues. The strategy that codes across queues simply sends packets from all queues to all receivers. As a result, this strategy sends many packets to unnecessary recipients. We show, surprisingly, that there are cases where the strategy that codes across queues can achieve the same throughput region achievable by coding within individual queues." ] }
1207.6936
1610340100
This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical analysis of Young and Daly in the presence of a fault prediction system, which is characterized by its recall and its precision, and which provides either exact or window-based time predictions. We succeed in deriving the optimal value of the checkpointing period (thereby minimizing the waste of resource usage due to checkpoint overhead) in all scenarios. These results allow to analytically assess the key parameters that impact the performance of fault predictors at very large scale. In addition, the results of this analytical evaluation are nicely corroborated by a comprehensive set of simulations, thereby demonstrating the validity of the model and the accuracy of the results.
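As a point of reference for the analysis this paper extends, the classical Young approximation (no fault predictor) can be sketched in a few lines. The first-order waste model below is the standard simplification, not the paper's predictor-aware formula with recall and precision.

```python
import math

def young_period(C, mtbf):
    """Classical Young approximation of the optimal checkpoint period.

    C: checkpoint cost (seconds); mtbf: platform mean time between
    failures (seconds). Valid in the regime C << mtbf.
    """
    return math.sqrt(2.0 * C * mtbf)

def waste(T, C, mtbf):
    """First-order fraction of resources lost: checkpoint overhead C/T
    plus, on average, half a period of lost work per failure."""
    return C / T + T / (2.0 * mtbf)

C, mtbf = 600.0, 86400.0             # 10-minute checkpoints, 1-day MTBF
T_opt = young_period(C, mtbf)        # about 10182 seconds
print(T_opt, waste(T_opt, C, mtbf))  # minimum waste, about 11.8%
```

Setting the derivative of the waste to zero, -C/T^2 + 1/(2*mtbf) = 0, gives exactly T_opt = sqrt(2*C*mtbf), which is the value the paper's predictor-aware analysis generalizes.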
Considerable research has been conducted on fault prediction using different models (system log analysis @cite_13 , event-driven approach @cite_9 @cite_13 @cite_0 , support vector machines @cite_6 @cite_11 , nearest neighbors @cite_6 ). In this section we give a brief overview of the results obtained by predictors. We focus on their results rather than on their methods of prediction.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_13", "@cite_11" ], "mid": [ "", "2094924503", "2040352153", "2141676918", "130470821" ], "abstract": [ "", "Frequent failures are becoming a serious concern to the community of high-end computing, especially when the applications and the underlying systems rapidly grow in size and complexity. In order to develop effective fault-tolerant strategies, there is a critical need to predict failure events. To this end, we have collected detailed event logs from IBM BlueGene L, which has 128 K processors, and is currently the fastest supercomputer in the world. In this study, we first show how the event records can be converted into a data set that is appropriate for running classification techniques. Then we apply classifiers on the data, including RIPPER (a rule-based classifier), Support Vector Machines (SVMs), a traditional Nearest Neighbor method, and a customized Nearest Neighbor method. We show that the customized nearest neighbor approach can outperform RIPPER and SVMs in terms of both coverage and precision. The results suggest that the customized nearest neighbor approach can be used to alleviate the impact of failures.", "Analyzing, understanding and predicting failure is of paramount importance to achieve effective fault management. While various fault prediction methods have been studied in the past, many of them are not practical for use in real systems. In particular, they fail to address two crucial issues: one is to provide location information (i.e., the components where the failure is expected to occur on) and the other is to provide sufficient lead time (i.e., the time interval preceding the time of failure occurrence). In this paper, we first refine the widely-used metrics for evaluating prediction accuracy by including location as well as lead time. We, then, present a practical failure prediction mechanism for IBM Blue Gene systems. 
A Genetic Algorithm based method is exploited, which takes into consideration the location and the lead time for failure prediction. We demonstrate the effectiveness of this mechanism by means of real failure logs and job logs collected from the IBM Blue Gene P system at Argonne National Laboratory. Our experiments show that the presented method can significantly improve fault management (e.g., to reduce service unit loss by up to 52.4%) by incorporating location and lead time information in the prediction.", "To facilitate proactive fault management in large-scale systems such as IBM Blue Gene P, online failure prediction is of paramount importance. While many techniques have been presented for online failure prediction, questions arise regarding two commonly used approaches: period-based and event-driven. Which one has better accuracy? What is the best observation window (i.e., the time interval used to collect evidence before making a prediction)? How does the lead time (i.e., the time interval from the prediction to the failure occurrence) impact prediction accuracy? To answer these questions, we analyze and compare period-based and event-driven prediction approaches via a Bayesian prediction model. We evaluate these prediction approaches, under a variety of testing parameters, by means of RAS logs collected from a production supercomputer at Argonne National Laboratory. Experimental results show that the period-based Bayesian model and the event-driven Bayesian model can achieve up to 65.0% and 83.8% prediction accuracy, respectively. Furthermore, our sensitivity study indicates that the event-driven approach seems more suitable for proactive fault management in large-scale systems like Blue Gene P.", "Mitigating the impact of computer failure is possible if accurate failure predictions are provided. Resources, applications, and services can be scheduled around predicted failure and limit the impact.
Such strategies are especially important for multi-computer systems, such as compute clusters, that experience a higher rate of failure due to the large number of components. However, providing accurate predictions with sufficient lead time remains a challenging problem. This paper describes a new spectrum-kernel Support Vector Machine (SVM) approach to predict failure events based on system log files. These files contain messages that represent a change of system state. While a single message in the file may not be sufficient for predicting failure, a sequence or pattern of messages may be. The approach described in this paper uses a sliding window (sub-sequence) of messages to predict the likelihood of failure. A frequency representation of the observed message sub-sequences is then used as input to the SVM. The SVM then associates the messages to a class of failed or non-failed system. Experimental results using actual system log files from a Linux-based compute cluster indicate the proposed spectrum-kernel SVM approach has promise and can predict hard disk failure with an accuracy of 73% two days in advance." ] }
1207.6936
1610340100
This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical analysis of Young and Daly in the presence of a fault prediction system, which is characterized by its recall and its precision, and which provides either exact or window-based time predictions. We succeed in deriving the optimal value of the checkpointing period (thereby minimizing the waste of resource usage due to checkpoint overhead) in all scenarios. These results allow to analytically assess the key parameters that impact the performance of fault predictors at very large scale. In addition, the results of this analytical evaluation are nicely corroborated by a comprehensive set of simulations, thereby demonstrating the validity of the model and the accuracy of the results.
The authors of @cite_0 introduce the lead time, that is, the time between the prediction and the actual fault. This time should be sufficient to take proactive actions. They are also able to give the location of the fault. While this has a negative impact on the precision (see the low value of in Table ), they state that it has a positive impact on the checkpointing time (from 1500 seconds down to 120 seconds). The authors of @cite_13 also consider a lead time, and introduce a prediction window during which the predicted fault should happen. The authors of @cite_6 study the impact of different prediction techniques with different prediction window sizes. They also consider a lead time, but do not state its value. These two latter studies motivate the work of , even though @cite_13 does not provide the size of their prediction window.
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_6" ], "mid": [ "2040352153", "2141676918", "2094924503" ], "abstract": [ "Analyzing, understanding and predicting failure is of paramount importance to achieve effective fault management. While various fault prediction methods have been studied in the past, many of them are not practical for use in real systems. In particular, they fail to address two crucial issues: one is to provide location information (i.e., the components where the failure is expected to occur on) and the other is to provide sufficient lead time (i.e., the time interval preceding the time of failure occurrence). In this paper, we first refine the widely-used metrics for evaluating prediction accuracy by including location as well as lead time. We, then, present a practical failure prediction mechanism for IBM Blue Gene systems. A Genetic Algorithm based method is exploited, which takes into consideration the location and the lead time for failure prediction. We demonstrate the effectiveness of this mechanism by means of real failure logs and job logs collected from the IBM Blue Gene P system at Argonne National Laboratory. Our experiments show that the presented method can significantly improve fault management (e.g., to reduce service unit loss by up to 52.4%) by incorporating location and lead time information in the prediction.", "To facilitate proactive fault management in large-scale systems such as IBM Blue Gene P, online failure prediction is of paramount importance. While many techniques have been presented for online failure prediction, questions arise regarding two commonly used approaches: period-based and event-driven. Which one has better accuracy? What is the best observation window (i.e., the time interval used to collect evidence before making a prediction)? How does the lead time (i.e., the time interval from the prediction to the failure occurrence) impact prediction accuracy?
To answer these questions, we analyze and compare period-based and event-driven prediction approaches via a Bayesian prediction model. We evaluate these prediction approaches, under a variety of testing parameters, by means of RAS logs collected from a production supercomputer at Argonne National Laboratory. Experimental results show that the period-based Bayesian model and the event-driven Bayesian model can achieve up to 65.0% and 83.8% prediction accuracy, respectively. Furthermore, our sensitivity study indicates that the event-driven approach seems more suitable for proactive fault management in large-scale systems like Blue Gene P.", "Frequent failures are becoming a serious concern to the community of high-end computing, especially when the applications and the underlying systems rapidly grow in size and complexity. In order to develop effective fault-tolerant strategies, there is a critical need to predict failure events. To this end, we have collected detailed event logs from IBM BlueGene L, which has 128 K processors, and is currently the fastest supercomputer in the world. In this study, we first show how the event records can be converted into a data set that is appropriate for running classification techniques. Then we apply classifiers on the data, including RIPPER (a rule-based classifier), Support Vector Machines (SVMs), a traditional Nearest Neighbor method, and a customized Nearest Neighbor method. We show that the customized nearest neighbor approach can outperform RIPPER and SVMs in terms of both coverage and precision. The results suggest that the customized nearest neighbor approach can be used to alleviate the impact of failures." ] }
1207.6936
1610340100
This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical analysis of Young and Daly in the presence of a fault prediction system, which is characterized by its recall and its precision, and which provides either exact or window-based time predictions. We succeed in deriving the optimal value of the checkpointing period (thereby minimizing the waste of resource usage due to checkpoint overhead) in all scenarios. These results allow to analytically assess the key parameters that impact the performance of fault predictors at very large scale. In addition, the results of this analytical evaluation are nicely corroborated by a comprehensive set of simulations, thereby demonstrating the validity of the model and the accuracy of the results.
While many studies on fault prediction focus on the conception of the predictor, most of them consider that the proactive action should simply be a checkpoint or a migration right in time before the fault. However, the authors of @cite_10 consider the mathematical problem of determining when and how to migrate. In order to be able to use migration, they state that at every time, 2 -based heuristic. Thanks to their algorithm, they were able to save 30% compared to a heuristic that does not take the reliability into account, with a precision and recall of 70%. Finally, to the best of our knowledge, this work is the first to focus on the mathematical aspect of fault prediction, and to provide a model and a detailed analysis of the waste due to all three types of events (true and false predictions, and unpredicted failures).
{ "cite_N": [ "@cite_10" ], "mid": [ "2148508494" ], "abstract": [ "In large-scale networked computing systems, component failures become norms instead of exceptions. Failure prediction is a crucial technique for self-managing resource burdens. Failure events in coalition systems exhibit strong correlations in time and space domain. In this paper, we develop a spherical covariance model with an adjustable timescale parameter to quantify the temporal correlation and a stochastic model to describe spatial correlation. We further utilize the information of application allocation to discover more correlations among failure instances. We cluster failure events based on their correlations and predict their future occurrences. We implemented a failure prediction framework, called PREdictor of Failure Events Correlated Temporal-Spatially (hPREFECTs), which explores correlations among failures and forecasts the time-between-failure of future instances. We evaluate the performance of hPREFECTs in both offline prediction of failure by using the Los Alamos HPC traces and online prediction in an institute-wide clusters coalition environment. Experimental results show the system achieves more than 76 accuracy in offline prediction and more than 70 accuracy in online prediction during the time from May 2006 to April 2007." ] }
1207.6745
1594413522
Abstract: In this work we show that, using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia. 1. Introduction: The classical statistical pattern recognition setting involves (X, Y), (X_1, Y_1), ..., (X_n, Y_n) i.i.d. F_{X,Y}, where the X_i : Omega -> R^d are observed feature vectors and the Y_i : Omega -> {0,1} are observed class labels on some probability space. We define D = {(X_i, Y_i)} as the training set. The goal is to learn a classifier h(· ; D) : R^d -> {0,1} such that the probability of error P[h(X; D) != Y | D] approaches Bayes optimal as n -> infinity for all distributions
The latent space approach is introduced in @cite_7 . Generally, one posits that the adjacency of two vertices is determined by a Bernoulli trial with parameter depending only on the latent positions associated with each vertex, and edges are independent conditioned on the latent positions of the vertices.
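A minimal sketch of this latent-position mechanism, in the random-dot-product flavour used by the paper above: latent vectors are drawn i.i.d. (here from a two-point mixture, an illustrative choice), edges are independent Bernoulli trials whose parameter is the inner product of the endpoints' positions, and the scaled top eigenvectors of the adjacency matrix recover the positions up to an orthogonal rotation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 2

# Latent positions: two-point mixture, chosen so every inner product
# lies in [0, 1] and can serve as an edge probability.
X = np.vstack([np.tile([0.7, 0.1], (n // 2, 1)),
               np.tile([0.1, 0.7], (n // 2, 1))])

# Conditioned on the latent positions, edges are independent Bernoulli
# trials with P[i ~ j] = <X_i, X_j>.
P = X @ X.T
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T                              # symmetric, hollow adjacency

# Adjacency spectral embedding: scaled top-d eigenvectors of A
# estimate the latent positions up to rotation.
vals, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order
top = np.argsort(vals)[-d:]
Xhat = vecs[:, top] * np.sqrt(vals[top])

# The Gram matrix is rotation-invariant, so compare Xhat Xhat^T with P.
err = np.abs(Xhat @ Xhat.T - P).mean()
print(err)
```

Because the Gram matrix is identifiable while the positions themselves are only defined up to rotation, the comparison is made on Xhat Xhat^T rather than on Xhat directly.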
{ "cite_N": [ "@cite_7" ], "mid": [ "2066459332" ], "abstract": [ "Network models are widely used to represent relational information among interacting units. In studies of social networks, recent emphasis has been placed on random graph models where the nodes usually represent individual social actors and the edges represent the presence of a specified relation between actors. We develop a class of models where the probability of a relation between actors depends on the positions of individuals in an unobserved “social space.” We make inference for the social space within maximum likelihood and Bayesian frameworks, and propose Markov chain Monte Carlo procedures for making inference on latent positions and the effects of observed covariates. We present analyses of three standard datasets from the social networks literature, and compare the method to an alternative stochastic blockmodeling approach. In addition to improving on model fit for these datasets, our method provides a visual and interpretable model-based spatial representation of social relationships and improv..." ] }
1207.6745
1594413522
Abstract: In this work we show that, using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia. 1. Introduction: The classical statistical pattern recognition setting involves (X, Y), (X_1, Y_1), ..., (X_n, Y_n) i.i.d. F_{X,Y}, where the X_i : Omega -> R^d are observed feature vectors and the Y_i : Omega -> {0,1} are observed class labels on some probability space. We define D = {(X_i, Y_i)} as the training set. The goal is to learn a classifier h(· ; D) : R^d -> {0,1} such that the probability of error P[h(X; D) != Y | D] approaches Bayes optimal as n -> infinity for all distributions
If we suppose that the latent positions are i.i.d. from some distribution, then the latent space approach is closely related to the theory of exchangeable random graphs . For exchangeable graphs, we have a (measurable) link function @math and each vertex is associated with a latent i.i.d. uniform @math random variable denoted @math . Conditioned on the @math , the adjacency of vertices @math and @math is determined by a Bernoulli trial with parameter @math . For a treatment of exchangeable graphs and estimation using the method of moments, see @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2038949443" ], "abstract": [ "Probability models on graphs are becoming increasingly important in many applications, but statistical tools for fitting such models are not yet well developed. Here we propose a general method of moments approach that can be used to fit a large class of probability models through empirical counts of certain patterns in a graph. We establish some general asymptotic properties of empirical graph moments and prove consistency of the estimates as the graph size grows for all ranges of the average degree including @math . Additional results are obtained for the important special case of degree distributions." ] }
1207.6745
1594413522
Abstract: In this work we show that, using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia. 1. Introduction: The classical statistical pattern recognition setting involves (X, Y), (X_1, Y_1), ..., (X_n, Y_n) i.i.d. F_{X,Y}, where the X_i : Omega -> R^d are observed feature vectors and the Y_i : Omega -> {0,1} are observed class labels on some probability space. We define D = {(X_i, Y_i)} as the training set. The goal is to learn a classifier h(· ; D) : R^d -> {0,1} such that the probability of error P[h(X; D) != Y | D] approaches Bayes optimal as n -> infinity for all distributions
Many latent space approaches seek to generalize the stochastic blockmodel to allow for variation within blocks. For example, the mixed membership model of @cite_10 posits that a vertex could have partial membership in multiple blocks. In @cite_13 , latent vectors are presumed to be drawn from a mixture of multivariate normal distributions with the link function depending on the distance between the latent vectors. They use Bayesian techniques to estimate the latent vectors.
{ "cite_N": [ "@cite_13", "@cite_10" ], "mid": [ "2096091969", "2107107106" ], "abstract": [ "Network models are widely used to represent relations between interacting units or actors. Network data often exhibit transitivity, meaning that two actors that have ties to a third actor are more likely to be tied than actors that do not, homophily by attributes of the actors or dyads, and clustering. Interest often focuses on finding clusters of actors or ties, and the number of groups in the data is typically unknown. We propose a new model, the \"latent position cluster model\", under which the probability of a tie between two actors depends on the distance between them in an unobserved Euclidean 'social space', and the actors' locations in the latent social space arise from a mixture of distributions, each corresponding to a cluster. We propose two estimation methods: a two-stage maximum likelihood method and a fully Bayesian method that uses Markov chain Monte Carlo sampling. The former is quicker and simpler, but the latter performs better. We also propose a Bayesian way of determining the number of clusters that are present by using approximate conditional Bayes factors. Our model represents transitivity, homophily by attributes and clustering simultaneously and does not require the number of clusters to be known. The model makes it easy to simulate realistic networks with clustering, which are potentially useful as inputs to models of more complex systems of which the network is part, such as epidemic models of infectious disease. We apply the model to two networks of social relations. A free software package in the R statistical language, latentnet, is available to analyse data by using the model. Copyright 2007 Royal Statistical Society.", "Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. 
These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks." ] }
1207.6269
2950254041
Community detection has arisen as one of the most relevant topics in the field of graph data mining due to its importance in many fields such as biology, social networks or network traffic analysis. The metrics proposed to shape communities are generic and follow two approaches: maximizing the internal density of such communities or reducing the connectivity of the internal vertices with those outside the community. However, these metrics take the edges as a set and do not consider the internal layout of the edges in the community. We define a set of properties oriented to social networks that ensure that communities are cohesive, structured and well defined. Then, we propose the Weighted Community Clustering (WCC), which is a community metric based on triangles. We prove that analyzing communities by triangles gives communities that fulfill the listed set of properties, in contrast to previous metrics. Finally, we experimentally show that WCC correctly captures the concept of community in social networks using real and synthetic datasets, and compare statistically some of the most relevant community detection algorithms in the state of the art.
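The triangle-based signal that WCC builds on can be illustrated with a small sketch. This is only the counting primitive, not the full WCC formula: for a vertex and a candidate community, count the triangles the vertex closes whose other two endpoints both lie inside the community.

```python
from itertools import combinations

def triangles_within(adj, vertex, community):
    """Triangles through `vertex` whose other two endpoints both lie in
    `community`.  adj: dict mapping vertex -> set of neighbours."""
    inside = adj[vertex] & community
    return sum(1 for u, w in combinations(inside, 2) if w in adj[u])

# Toy graph: a 4-clique {0,1,2,3} with a pendant vertex 4 attached to 3.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
adj = {}
for u, w in edges:
    adj.setdefault(u, set()).add(w)
    adj.setdefault(w, set()).add(u)

community = {0, 1, 2, 3}
print(triangles_within(adj, 0, community))  # 3: every neighbour pair is an edge
print(triangles_within(adj, 4, community))  # 0: the pendant closes no triangle
```

A clique member closes many triangles inside its community while a loosely attached vertex closes none, which is exactly the cohesion distinction that edge-set metrics such as density miss.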
There are basically two types of metrics to evaluate the quality of a community. First, those that focus on the internal density of the community. The most widely used metric in this category is modularity, which was proposed in @cite_0 . Modularity measures the internal connectivity of the community (omitting the external connectivity) compared to an Erdős–Rényi graph model. It has become very popular in the literature, and many algorithms are based on maximizing it, applying several optimization procedures: agglomerative greedy @cite_15 , simulated annealing @cite_21 , or multistep approaches @cite_3 .
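For concreteness, the quantity these algorithms maximize can be computed directly from the standard per-community form of modularity, Q = sum over communities c of ( e_c / m - (d_c / 2m)^2 ), where e_c is the number of intra-community edges and d_c the total degree of c. The toy graph below is illustrative.

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q = sum_c ( e_c/m - (d_c/(2m))^2 ).

    edges: list of undirected edges (u, w); communities: list of
    disjoint vertex sets covering the graph.
    """
    m = len(edges)
    degree = {}
    for u, w in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[w] = degree.get(w, 0) + 1
    Q = 0.0
    for com in communities:
        e_c = sum(1 for u, w in edges if u in com and w in com)
        d_c = sum(degree[v] for v in com)
        Q += e_c / m - (d_c / (2.0 * m)) ** 2
    return Q

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # ~0.357: natural split
print(modularity(edges, [{0, 1, 2, 3, 4, 5}]))    # 0.0: single community
```

The natural two-triangle partition scores 5/14, while the trivial one-community partition scores exactly zero, matching the intuition that modularity rewards internal density relative to a random-graph baseline.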
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_21", "@cite_3" ], "mid": [ "2095293504", "2047940964", "1982322675", "2131681506" ], "abstract": [ "We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.", "The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). 
As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.", "We present an analysis of communality structure in networks based on the application of simulated annealing techniques. In this case we use as “cost function” the already introduced modularity Q (1), which is based on the relative number of links within a commune against the number of links that would correspond in case the links were distributed randomly. We compare the results of our approach against other methodologies based on betweenness analysis and show that in all cases a better community structure can be attained.", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks." ] }
1207.6269
2950254041
Community detection has arisen as one of the most relevant topics in the field of graph data mining due to its importance in many fields such as biology, social networks or network traffic analysis. The metrics proposed to shape communities are generic and follow two approaches: maximizing the internal density of such communities or reducing the connectivity of the internal vertices with those outside the community. However, these metrics take the edges as a set and do not consider the internal layout of the edges in the community. We define a set of properties oriented to social networks that ensure that communities are cohesive, structured and well defined. Then, we propose the Weighted Community Clustering (WCC), which is a community metric based on triangles. We prove that analyzing communities by triangles gives communities that fulfill the listed set of properties, in contrast to previous metrics. Finally, we experimentally show that WCC correctly captures the concept of community in social networks using real and synthetic datasets, and compare statistically some of the most relevant community detection algorithms in the state of the art.
However, it has been reported that modularity has resolution limits @cite_7 @cite_8 . Communities detected by modularity depend on the total graph size, and thus for large graphs, small well-defined communities are never found. This means that maximizing the modularity leads to partitions whose communities are far from intuitive. This is illustrated by an example in Figure .
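The resolution limit can be reproduced with the classic ring-of-cliques construction used in @cite_7 : K cliques of size s arranged in a ring, with one edge between consecutive cliques. Closed-form modularities (derived from the per-community definition, so no graph needs to be built) show that once K exceeds a threshold, merging adjacent cliques scores higher than the intuitive one-community-per-clique partition.

```python
def ring_of_cliques_Q(K, s, merged):
    """Modularity of a ring of K size-s cliques (one ring edge between
    consecutive cliques), for two partitions: one community per clique
    (merged=False) or adjacent cliques merged in pairs (merged=True).

    With e = s(s-1)/2 internal edges per clique and m = K(e+1) edges in
    total:  Q_single = e/(e+1) - 1/K,
            Q_paired = (2e+1)/(2(e+1)) - 2/K   (K assumed even).
    """
    e = s * (s - 1) // 2
    if not merged:
        return e / (e + 1) - 1.0 / K
    return (2 * e + 1) / (2.0 * (e + 1)) - 2.0 / K

# For triangles (s=3) the crossover is at K = s(s-1) + 2 = 8: below it
# the intuitive partition wins, above it merging pairs scores higher.
s = 3
print(ring_of_cliques_Q(6, s, False), ring_of_cliques_Q(6, s, True))
print(ring_of_cliques_Q(30, s, False), ring_of_cliques_Q(30, s, True))
```

With 6 cliques the per-clique partition has higher Q; with 30 cliques the merged partition wins even though each clique is unambiguously a community, which is exactly the size-dependence described above.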
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2128366083", "2061099285" ], "abstract": [ "Detecting community structure is fundamental for uncovering the links between structure and function in complex networks and for practical applications in many disciplines such as biology and sociology. A popular method now widely used relies on the optimization of a quantity called modularity, which is a quality index for a partition of a network into communities. We find that modularity optimization may fail to identify modules smaller than a scale which depends on the total size of the network and on the degree of interconnectedness of the modules, even in cases where modules are unambiguously defined. This finding is confirmed through several examples, both in artificial and in real social, biological, and technological networks, where we show that modularity optimization indeed does not resolve a large number of modules. A check of the modules obtained through modularity optimization is thus necessary, and we provide here key elements for the assessment of the reliability of this community detection method.", "Although widely used in practice, the behavior and accuracy of the popular module identification technique called modularity maximization is not well understood in practical contexts. Here, we present a broad characterization of its performance in such situations. First, we revisit and clarify the resolution limit phenomenon for modularity maximization. Second, we show that the modularity function Q exhibits extreme degeneracies: it typically admits an exponential number of distinct high-scoring solutions and typically lacks a clear global maximum. Third, we derive the limiting behavior of the maximum modularity Q(max) for one model of infinitely modular networks, showing that it depends strongly both on the size of the network and on the number of modules it contains. 
Finally, using three real-world metabolic networks as examples, we show that the degenerate solutions can fundamentally disagree on many, but not all, partition properties such as the composition of the largest modules and the distribution of module sizes. These results imply that the output of any modularity maximization procedure should be interpreted cautiously in scientific contexts. They also explain why many heuristics are often successful at finding high-scoring partitions in practice and why different heuristics can disagree on the modular structure of the same network. We conclude by discussing avenues for mitigating some of these behaviors, such as combining information from many degenerate solutions or using generative models." ] }
1207.6269
2950254041
Community detection has arisen as one of the most relevant topics in the field of graph data mining due to its importance in many fields such as biology, social networks or network traffic analysis. The metrics proposed to shape communities are generic and follow two approaches: maximizing the internal density of such communities or reducing the connectivity of the internal vertices with those outside the community. However, these metrics take the edges as a set and do not consider the internal layout of the edges in the community. We define a set of properties oriented to social networks that ensure that communities are cohesive, structured and well defined. Then, we propose the Weighted Community Clustering (WCC), which is a community metric based on triangles. We prove that analyzing communities by triangles gives communities that fulfill the listed set of properties, in contrast to previous metrics. Finally, we experimentally show that WCC correctly captures the concept of community in social networks using real and synthetic datasets, and compare statistically some of the most relevant community detection algorithms in the state of the art.
The second type of metrics consists of those that focus on reducing the number of edges connecting communities. @cite_23 introduce conductance, the ratio between the number of edges going outside the community and the total number of edges among members of the community. However, conductance suffers from the fact that, for any graph, the partition with a unique community containing all the vertices of the graph obtains the best conductance, making its direct optimization not viable. A recent survey @cite_19 of community metrics discusses the performance of many metrics on real networks: the cut ratio @cite_24 , the normalized cut @cite_18 , the Maximum-ODF (Out Degree Fraction), the Average-ODF and the Flake-ODF @cite_10 . This survey showed that, among all these metrics, conductance is the one that best captures the concept of community. Furthermore, its results reveal that the quality of communities decreases significantly for those of size greater than around 100 elements.
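A short sketch makes the degeneracy noted above concrete. This uses the common cut-over-volume variant of conductance (function name and data invented for the example):

```python
# Conductance phi(S) = cut(S, V \ S) / vol(S), where cut counts edges
# crossing the boundary of S and vol(S) sums the degrees of vertices in S.
def conductance(edges, S):
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    vol = sum((u in S) + (v in S) for u, v in edges)
    return cut / vol if vol else 0.0

# Two triangles joined by a bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(conductance(edges, {0, 1, 2}))  # 1/7 ≈ 0.143: one cut edge, volume 7
print(conductance(edges, range(6)))   # 0.0: the whole-graph "community" wins
```

The second call illustrates exactly the problem stated above: taking the entire vertex set as one community yields zero cut edges and hence the best possible conductance, so the metric cannot be optimized directly.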
{ "cite_N": [ "@cite_18", "@cite_24", "@cite_19", "@cite_23", "@cite_10" ], "mid": [ "2121947440", "2127048411", "2111002549", "2034331023", "1984374364" ], "abstract": [ "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. 
We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.", "Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that \"look like\" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior.", "We motivate and develop a natural bicriteria measure for assessing the quality of a clustering that avoids the drawbacks of existing measures. 
A simple recursive heuristic is shown to have poly-logarithmic worst-case guarantees under the new measure. The main result of the article is the analysis of a popular spectral algorithm. One variant of spectral clustering turns out to have effective worst-case guarantees; another finds a \"good\" clustering, if one exists.", "We define a community on the web as a set of sites that have more links (in either direction) to members of the community than to non-members. Members of such a community can be efficiently identified in a maximum flow minimum cut framework, where the source is composed of known members, and the sink consists of well-known non-members. A focused crawler that crawls to a fixed depth can approximate community membership by augmenting the graph induced by the crawl with links to a virtual sink node. The effectiveness of the approximation algorithm is demonstrated with several crawl results that identify hubs, authorities, web rings, and other link topologies that are useful but not easily categorized. Applications of our approach include focused crawlers and search engines, automatic population of portal categories, and improved filtering." ] }
1207.6246
2951159514
Given a large edge-weighted network @math with @math terminal vertices, we wish to compress it and store, using little memory, the value of the minimum cut (or equivalently, maximum flow) between every bipartition of terminals. One appealing methodology to implement a compression of @math is to construct a mimicking network: a small network @math with the same @math terminals, in which the minimum cut value between every bipartition of terminals is the same as in @math . This notion was introduced by Hagerup, Katajainen, Nishimura, and Ragde [JCSS '98], who proved that such @math of size at most @math always exists. Obviously, by having access to the smaller network @math , certain computations involving cuts can be carried out much more efficiently. We provide several new bounds, which together narrow the previously known gap from doubly-exponential to only singly-exponential, both for planar and for general graphs. Our first and main result is that every @math -terminal planar network admits a mimicking network @math of size @math , which is moreover a minor of @math . On the other hand, some planar networks @math require @math . For general networks, we show that certain bipartite graphs only admit mimicking networks of size @math , and moreover, every data structure that stores the minimum cut value between all bipartitions of the terminals must use @math machine words.
Graph compression can be interpreted quite broadly, and indeed it was studied extensively in the past, with many results known for different graphical features (the properties we wish to preserve). For instance, in the context of preserving the graph distances, concepts such as spanners @cite_0 and probabilistic embedding into trees @cite_10 @cite_1 have developed into a rich and productive area, and variations that involve a subset of terminal vertices were studied more recently; see e.g. @cite_9 @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_1", "@cite_0", "@cite_10" ], "mid": [ "1521196338", "2031536548", "2114493937", "", "1981859328" ], "abstract": [ "We introduce the following notion of compressing an undirected graph G with (nonnegative) edge-lengths and terminal vertices R⊆V(G). A distance-preserving minor is a minor G′ (of G) with possibly different edge-lengths, such that R⊆V(G′) and the shortest-path distance between every pair of terminals is exactly the same in G and in G′. We ask: what is the smallest f*(k) such that every graph G with k=|R| terminals admits a distance-preserving minor G′ with at most f*(k) vertices? Simple analysis shows that f*(k) ≤ O(k^4). Our main result proves that f*(k) ≥ Ω(k^2), significantly improving over the trivial f*(k) ≥ k. Our lower bound holds even for planar graphs G, in contrast to graphs G of constant treewidth, for which we prove that O(k) vertices suffice.", "We introduce and study the notions of pairwise and sourcewise preservers. Given an undirected N-vertex graph G = (V,E) and a set P of pairs of vertices, let G' = (V,H), H ⊆ E, be called a pairwise preserver of G with respect to P if for every pair u,w ∈ P, distG'(u,w) = distG(u,w). For a set S ⊆ V of sources, a pairwise preserver of G with respect to the set of all pairs P = (S choose 2) of sources is called a sourcewise preserver of G with respect to S. We prove that for every undirected possibly weighted N-vertex graph G and every set P of |P| = O(N^{1/2}) pairs of vertices of G, there exists a linear-size pairwise preserver of G with respect to P. Consequently, for every subset S ⊆ V of |S| = O(N^{1/4}) sources, there exists a linear-size sourcewise preserver of G with respect to S. On the negative side we show that neither of the two exponents (1/2 and 1/4) can be improved even when the attention is restricted to unweighted graphs. Our lower bounds involve constructions of dense convexly independent sets of vectors with small Euclidean norms. 
We believe that the link between the areas of discrete geometry and spanners that we establish is of independent interest and might be useful in the study of other problems in the area of low-distortion embeddings.", "This paper provides a novel technique for the analysis of randomized algorithms for optimization problems on metric spaces, by relating the randomized performance ratio for any, metric space to the randomized performance ratio for a set of \"simple\" metric spaces. We define a notion of a set of metric spaces that probabilistically-approximates another metric space. We prove that any metric space can be probabilistically-approximated by hierarchically well-separated trees (HST) with a polylogarithmic distortion. These metric spaces are \"simple\" as being: (1) tree metrics; (2) natural for applying a divide-and-conquer algorithmic approach. The technique presented is of particular interest in the context of on-line computation. A large number of on-line algorithmic problems, including metrical task systems, server problems, distributed paging, and dynamic storage rearrangement are defined in terms of some metric space. Typically for these problems, there are linear lower bounds on the competitive ratio of deterministic algorithms. Although randomization against an oblivious adversary has the potential of overcoming these high ratios, very little progress has been made in the analysis. We demonstrate the use of our technique by obtaining substantially improved results for two different on-line problems.", "", "This paper investigates a zero-sum game played on a weighted connected graph @math between two players, the tree player and the edge player. At each play, the tree player chooses a spanning tree @math and the edge player chooses an edge @math . 
The payoff to the edge player is @math , defined as follows: If @math lies in the tree @math then @math ; if @math does not lie in the tree then @math , where @math is the weight of edge @math and @math is the weight of the unique cycle formed when edge @math is added to the tree @math . The main result is that the value of the game on any @math -vertex graph is bounded above by @math . It is conjectured that the value of the game is @math . The game arises in connection with the @math -server problem on a road network; i.e., a metric space that can be represented as a multigraph @math in which each edge @math represents a road of length @math . It is shown that, if the value of the game on @math is @math , then there is a randomized strategy that achieves a competitive ratio of @math against any oblivious adversary. Thus, on any @math -vertex road network, there is a randomized algorithm for the @math -server problem that is @math competitive against oblivious adversaries. At the heart of the analysis of the game is an algorithm that provides an approximate solution for the simple network design problem. Specifically, for any @math -vertex weighted, connected multigraph, the algorithm constructs a spanning tree @math such that the average, over all edges @math , of @math is less than or equal to @math . This result has potential application to the design of communication networks. It also improves substantially known estimates concerning the existence of a sparse basis for the cycle space of a graph." ] }
1207.6630
2282893565
A fundamental problem in the delay and backlog analysis across multi-hop paths in wireless networks is how to account for the random properties of the wireless channel. Since the usual statistical models for radio signals in a propagation environment do not lend themselves easily to a description of the available service rate on a wireless link, the performance analysis of wireless networks has resorted to higher-layer abstractions, e.g., using Markov chain models. In this work, we propose a network calculus that can incorporate common statistical models of fading channels and obtain statistical bounds on delay and backlog across multiple nodes. We conduct the analysis in a transfer domain, which we refer to as the 'SNR domain', where the service process at a link is characterized by the instantaneous signal-to-noise ratio at the receiver. We discover that, in the transfer domain, the network model is governed by a dioid algebra, which we refer to as (min,x)-algebra. Using this algebra we derive the desired delay and backlog bounds. An application of the analysis is demonstrated for a simple multi-hop network with Rayleigh fading channels and for a network with cross traffic.
Analytical approaches for network-layer performance analysis of wireless networks include queueing theory, effective bandwidth and, more recently, network calculus. Since the service processes corresponding to the channel capacity of common fading channel models, such as the Rician, Rayleigh, or Nakagami- @math models, require taking the logarithm of their distributions, researchers often turn to higher-layer abstractions to model fading channels, which lend themselves more easily to an analysis. A widely used abstraction is the two-state channel model developed by Gilbert @cite_29 and Elliott @cite_28 , and subsequent extensions to a finite-state Markov channel (FSMC) @cite_33 . Markov channel models are well suited to express the time correlation of fading channel samples. We refer to @cite_3 for a survey of the development and applications of FSMC models. @cite_37 evaluated the accuracy of first-order Markov channel models of fading channels, where the next channel sample depends only on the current state of the Markov process, and of higher-order processes that can capture memory extending further back in the process history. The authors found that a first-order Markov model is a good approximation of the fading channel, and that using higher-order Markov processes does not significantly improve the accuracy of the model.
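The two-state abstraction discussed above can be illustrated with a minimal Gilbert-Elliott-style simulation. The function name and parameter values below are made up for the sketch and are not taken from the cited works:

```python
import random

def gilbert_elliott(n, p, P, h, seed=0):
    """Sample n bit-error indicators from a two-state Markov channel.
    State G is error-free; state B delivers a bit correctly only with
    probability h. p = Pr[G -> B], P = Pr[B -> G]."""
    rng = random.Random(seed)
    state, errors = 'G', []
    for _ in range(n):
        if state == 'G':
            errors.append(0)
            if rng.random() < p:
                state = 'B'
        else:
            errors.append(0 if rng.random() < h else 1)
            if rng.random() < P:
                state = 'G'
    return errors

# Long-run error rate is roughly (1-h) * p/(p+P); errors cluster into
# bursts during sojourns in state B, capturing channel memory.
errs = gilbert_elliott(1000, p=0.1, P=0.3, h=0.2)
print(sum(errs) / len(errs))
```

The burstiness produced by the B-state sojourns is exactly the time correlation that memoryless (i.i.d.) error models cannot capture, which is why the FSMC abstraction is attractive.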
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_28", "@cite_29", "@cite_3" ], "mid": [ "", "2164599584", "2153623795", "2063142364", "2119683335" ], "abstract": [ "", "The authors first study the behavior of a finite-state channel where a binary symmetric channel is associated with each state and Markov transitions between states are assumed. Such a channel is referred to as a finite-state Markov channel (FSMC). By partitioning the range of the received signal-to-noise ratio into a finite number of intervals, FSMC models can be constructed for Rayleigh fading channels. A theoretical approach is conducted to show the usefulness of FSMCs compared to that of two-state Gilbert-Elliott channels. The crossover probabilities of the binary symmetric channels associated with its states are calculated. The authors use the second-order statistics of the received SNR to approximate the Markov transition probabilities. The validity and accuracy of the model are confirmed by the state equilibrium equations and computer simulation. >", "The error structure on communication channels used for data transmission may be so complex as to preclude the feasibility of accurately predicting the performance of given codes when employed on these channels. Use of an approximate error rate as an estimate of performance allows the complex statistics of errors to be reduced to a manageable table of parameters and used in an economical evaluation of large collections of error detecting codes. Exemplary evaluations of error detecting codes on the switched telephone network are included in this paper. On channels which may be represented by Gilbert's model of a burst-noise channel, the probabilities of error or of retransmission may be calculated without approximations for both error correcting and error detecting codes", "A model of a burst-noise binary channel uses a Markov chain with two states G and B. In state G, transmission is error-free. 
In state B, the channel has only probability h of transmitting a digit correctly. For suitably small values of the probabilities p, P of the G-to-B and B-to-G transitions, the model simulates burst-noise channels. Probability formulas relate the parameters p, P, h to easily measured statistics and provide run distributions for comparison with experimental measurements. The capacity C of the model channel exceeds the capacity C(sym. bin.) of a memoryless symmetric binary channel with the same error probability. However, the difference is slight for some values of h, p, P; then, time-division encoding schemes may be fairly efficient.", "This article's goal is to provide an in-depth understanding of the principles of FSMC modeling of fading channels with its applications in wireless communication systems. While the emphasis is on frequency nonselective or flat-fading channels, this understanding will be useful for future generalizations of FSMC models for frequency-selective fading channels. The target audience of this article includes both theory- and practice-oriented researchers who would like to design accurate channel models for evaluating the performance of wireless communication systems in the physical or media access control layers, or those who would like to develop more efficient and reliable transceivers that take advantage of the inherent memory in fading channels. Both FSMC models and flat-fading channels will be formally introduced. FSMC models are particularly suitable to represent and estimate the relatively fast flat-fading channel gain in each subcarrier." ] }
1207.6630
2282893565
A fundamental problem in the delay and backlog analysis across multi-hop paths in wireless networks is how to account for the random properties of the wireless channel. Since the usual statistical models for radio signals in a propagation environment do not lend themselves easily to a description of the available service rate on a wireless link, the performance analysis of wireless networks has resorted to higher-layer abstractions, e.g., using Markov chain models. In this work, we propose a network calculus that can incorporate common statistical models of fading channels and obtain statistical bounds on delay and backlog across multiple nodes. We conduct the analysis in a transfer domain, which we refer to as the 'SNR domain', where the service process at a link is characterized by the instantaneous signal-to-noise ratio at the receiver. We discover that, in the transfer domain, the network model is governed by a dioid algebra, which we refer to as (min,x)-algebra. Using this algebra we derive the desired delay and backlog bounds. An application of the analysis is demonstrated for a simple multi-hop network with Rayleigh fading channels and for a network with cross traffic.
The queue-based channel (QBC) @cite_22 is an alternative model for fading channels: it models a binary additive noise channel with memory based on a finite queue. Here, a queue of size @math contains the last @math noise symbols, and the noise process is an @math -order Markov chain. The model was found to provide a better approximation to Rayleigh and Rician slow-fading channels than the Gilbert-Elliott model @cite_18 . An extension of the QBC model, called the Weighted QBC @cite_18 , permits queue cells (i.e., channel samples) to contribute to the noise process with different weights.
{ "cite_N": [ "@cite_18", "@cite_22" ], "mid": [ "2098592039", "2118047086" ], "abstract": [ "A new channel model for binary additive noise communication channel with memory, called weighted queue-based channel (WQBC), is introduced. The proposed WQBC generalizes the conventional queue-based channel (QBC) such that each queue cell has a different contribution to the noise process, i.e. the queue cells are selected with different probabilities. Suitably selecting the modeling function, the generalization introduced by the WQBC does not increase the number of modelling parameters required compared to the QBC. The statistical and information-theoretical properties of the new model are derived. The WQBC and the QBC are compared in terms of capacity and the accuracy in modeling a family of hard decision frequency-shift keying demodulated correlated Rayleigh and Rician fading channels. It is observed that the WQBC requires a much smaller Markovian memory than the QBC to achieve the same capacity, and provides a very good approximation of the fading channels as the QBC for a wide range of channel conditions.", "A model for a binary additive noise communication channel with memory is introduced. The channel noise process, which is generated according to a ball sampling mechanism involving a queue of finite length M, is a stationary ergodic Mth-order Markov source. The channel properties are analyzed and several of its statistical and information-theoretical quantities (e.g., block transition distribution, autocorrelation function (ACF), capacity, and error exponent) are derived in either closed or easily computable form in terms of its four parameters. The capacity of the queue-based channel (QBC) is also analytically and numerically compared for a variety of channel conditions with the capacity of other binary models, such as the well-known Gilbert-Elliott channel (GEC), the Fritchman channel, and the finite-memory contagion channel. 
We also investigate the modeling of the traditional GEC using this QBC model. The QBC parameters are estimated by minimizing the Kullback-Leibler divergence rate between the probability of noise sequences generated by the GEC and the QBC, while maintaining identical bit-error rates (BER) and correlation coefficients. The accuracy of fitting the GEC via the QBC is evaluated in terms of ACF, channel capacity, and error exponent. Numerical results indicate that the QBC provides a good approximation of the GEC for various channel conditions; it thus offers an interesting alternative to the GEC while remaining mathematically tractable." ] }
1207.6630
2282893565
A fundamental problem in the delay and backlog analysis across multi-hop paths in wireless networks is how to account for the random properties of the wireless channel. Since the usual statistical models for radio signals in a propagation environment do not lend themselves easily to a description of the available service rate on a wireless link, the performance analysis of wireless networks has resorted to higher-layer abstractions, e.g., using Markov chain models. In this work, we propose a network calculus that can incorporate common statistical models of fading channels and obtain statistical bounds on delay and backlog across multiple nodes. We conduct the analysis in a transfer domain, which we refer to as the 'SNR domain', where the service process at a link is characterized by the instantaneous signal-to-noise ratio at the receiver. We discover that, in the transfer domain, the network model is governed by a dioid algebra, which we refer to as (min,x)-algebra. Using this algebra we derive the desired delay and backlog bounds. An application of the analysis is demonstrated for a simple multi-hop network with Rayleigh fading channels and for a network with cross traffic.
A collection of recent works applies stochastic network calculus methods @cite_40 to wireless networks with fading channels. Stochastic network calculus is closely related to effective bandwidth theory, in that it develops bounds on performance metrics under assumptions also found in the effective bandwidth literature; unlike that literature, however, it seeks non-asymptotic bounds. An attractive element of a network calculus analysis is that a single-node analysis can sometimes be extended to a tandem of nodes using the convolution operation seen in the introduction.
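The convolution alluded to here can be illustrated in the simpler (min,+) setting (the paper under discussion works in a (min,x) transfer domain instead). The sketch below, with made-up rate-latency parameters, composes two per-node service curves into an end-to-end curve on an integer time grid:

```python
# (S1 conv S2)(t) = min over 0 <= s <= t of S1(s) + S2(t-s):
# the (min,+) convolution used to compose service curves in network calculus.
def min_plus_conv(S1, S2, T):
    return [min(S1[s] + S2[t - s] for s in range(t + 1)) for t in range(T)]

def rate_latency(R, L, T):
    # beta_{R,L}(t) = max(0, R*(t-L)), a standard service-curve shape.
    return [max(0, R * (t - L)) for t in range(T)]

# Composing two rate-latency servers yields a rate-latency curve with
# rate min(R1, R2) and latency L1 + L2.
T = 12
end_to_end = min_plus_conv(rate_latency(3, 2, T), rate_latency(2, 1, T), T)
print(end_to_end == rate_latency(2, 3, T))  # True
```

This closure property (the tandem of two rate-latency servers is again rate-latency) is what makes extending a single-node analysis to a multi-node path tractable.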
{ "cite_N": [ "@cite_40" ], "mid": [ "1589801689" ], "abstract": [ "Network calculus, a theory dealing with queuing systems found in computer networks, focuses on performance guarantees. The development of an information theory for stochastic service-guarantee analysis has been identified as a grand challenge for future networking research. Towards that end, stochastic network calculus, the probabilistic version or generalization of the (deterministic) Network Calculus, has been recognized by researchers as a crucial step. Stochastic Network Calculus presents a comprehensive treatment for the state-of-the-art in stochastic service-guarantee analysis research and provides basic introductory material on the subject, as well as discusses the most recent research in the area. This helpful volume summarizes results for stochastic network calculus, which can be employed when designing computer networks to provide stochastic service guarantees. Features and Topics: Provides a solid introductory chapter, providing useful background knowledge Reviews fundamental concepts and results of deterministic network calculus Includes end-of-chapter problems, as well as summaries and bibliographic comments Defines traffic models and server models for stochastic network calculus Summarizes the basic properties of stochastic network calculus under different combinations of traffic and server models Highlights independent case analysis Discusses stochastic service guarantees under different scheduling disciplines Presents applications to admission control and traffic conformance study using the analysis results Offers an overall summary and some open research challenges for further study of the topic Key Topics: Queuing systems Performance analysis and guarantees Independent case analysis Traffic and server models Analysis of scheduling disciplines Generalized processor sharing Open research challenges Researchers and graduates in the area of performance evaluation of computer 
communication networks will benefit substantially from this comprehensive and easy-to-follow volume. Professionals will also find it a worthwhile reference text. Professor Yuming Jiang at the Norwegian University of Science and Technology (NTNU) has lectured using the material presented in this text since 2006. Dr Yong Liu works at the Optical Network Laboratory, National University of Singapore, where he researches QoS for optical communication networks and Metro Ethernet networks." ] }
1207.6475
1971142667
We consider a scenario in which leaders are required to recruit teams of followers. Each leader cannot recruit all followers, but interaction is constrained according to a bipartite network. The objective for each leader is to reach a state of local stability in which it controls a team whose size is equal to a given constraint. We focus on distributed strategies, in which agents have only local information of the network topology and propose a distributed algorithm in which leaders and followers act according to simple local rules. The performance of the algorithm is analyzed with respect to the convergence to a stable solution. Our results are as follows. For any network, the proposed algorithm is shown to converge to an approximate stable solution in polynomial time, namely the leaders quickly form teams in which the total number of additional followers required to satisfy all team size constraints is an arbitrarily small fraction of the entire population. In contrast, for general graphs there can be an exponential time gap between convergence to an approximate solution and to a stable solution.
We distinguish ourselves from all the papers mentioned above, as we propose a fully distributed algorithm for group formation on arbitrary networks in which agents act according to simple local rules and perform very limited computation, and we derive performance guarantees in the form of theorems. For an exhaustive overview of distributed algorithms in multi-agent systems, the interested reader is referred to the books by Lynch @cite_8 and by @cite_20 and the references therein, while the survey by Horling and Lesser @cite_19 offers an overview of three decades of research on organizational paradigms such as team and coalition formation.
{ "cite_N": [ "@cite_19", "@cite_20", "@cite_8" ], "mid": [ "2039048406", "1788292158", "" ], "abstract": [ "Many researchers have demonstrated that the organizational design employed by an agent system can have a significant, quantitative effect on its performance characteristics. A range of organizational strategies have emerged from this line of research, each with different strengths and weaknesses. In this article we present a survey of the major organizational paradigms used in multi-agent systems. These include hierarchies, holarchies, coalitions, teams, congregations, societies, federations, markets, and matrix organizations. We will provide a description of each, discuss their advantages and disadvantages, and provide examples of how they may be instantiated and maintained. This summary will facilitate the comparative evaluation of organizational styles, allowing designers to first recognize the spectrum of possibilities, and then guiding the selection of an appropriate organizational design for a particular domain and environment.", "This self-contained introduction to the distributed control of robotic networks offers a distinctive blend of computer science and control theory. The book presents a broad set of tools for understanding coordination algorithms, determining their correctness, and assessing their complexity; and it analyzes various cooperative strategies for tasks such as consensus, rendezvous, connectivity maintenance, deployment, and boundary estimation. The unifying theme is a formal model for robotic networks that explicitly incorporates their communication, sensing, control, and processing capabilities--a model that in turn leads to a common formal language to describe and analyze coordination algorithms.Written for first- and second-year graduate students in control and robotics, the book will also be useful to researchers in control theory, robotics, distributed algorithms, and automata theory. 
The book provides explanations of the basic concepts and main results, as well as numerous examples and exercises.Self-contained exposition of graph-theoretic concepts, distributed algorithms, and complexity measures for processor networks with fixed interconnection topology and for robotic networks with position-dependent interconnection topology Detailed treatment of averaging and consensus algorithms interpreted as linear iterations on synchronous networks Introduction of geometric notions such as partitions, proximity graphs, and multicenter functions Detailed treatment of motion coordination algorithms for deployment, rendezvous, connectivity maintenance, and boundary estimation", "" ] }
1207.6475
1971142667
We consider a scenario in which leaders are required to recruit teams of followers. Each leader cannot recruit all followers, but interaction is constrained according to a bipartite network. The objective for each leader is to reach a state of local stability in which it controls a team whose size is equal to a given constraint. We focus on distributed strategies, in which agents have only local information of the network topology and propose a distributed algorithm in which leaders and followers act according to simple local rules. The performance of the algorithm is analyzed with respect to the convergence to a stable solution. Our results are as follows. For any network, the proposed algorithm is shown to converge to an approximate stable solution in polynomial time, namely the leaders quickly form teams in which the total number of additional followers required to satisfy all team size constraints is an arbitrarily small fraction of the entire population. In contrast, for general graphs there can be an exponential time gap between convergence to an approximate solution and to a stable solution.
A more recent line of research aims to study how humans connected over a network solve tasks in a distributed fashion @cite_33 @cite_27 @cite_37 @cite_25 @cite_39 @cite_0. In the work of @cite_39, human subjects positioned at the vertices of a virtual network were shown to be able to collectively reach a coloring of the network, given only local information about their neighbors. Similar papers further investigated human coordination in the case of coloring @cite_27 @cite_37 @cite_0 and consensus @cite_37 @cite_25, with the main goal of characterizing how performance is affected by the network's structure. Using experimental data of maximum matching games performed by human subjects in a laboratory setting, @cite_33 proposed a simple algorithmic model of human coordination that allows complexity analysis and prediction.
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_39", "@cite_0", "@cite_27", "@cite_25" ], "mid": [ "", "2103319610", "2044225472", "2014134214", "2095627347", "2056204623" ], "abstract": [ "", "We argue that algorithmic modeling is a powerful approach to understanding the collective dynamics of human behavior. We consider the task of pairing up individuals connected over a network, according to the following model: each individual is able to propose to match with and accept a proposal from a neighbor in the network; if a matched individual proposes to another neighbor or accepts another proposal, the current match will be broken; individuals can only observe whether their neighbors are currently matched but have no knowledge of the network topology or the status of other individuals; and all individuals have the common goal of maximizing the total number of matches. By examining the experimental data, we identify a behavioral principle called prudence, develop an algorithmic model, analyze its properties mathematically and by simulations, and validate the model with human subject experiments for various network sizes and topologies. Our results include i) a -approximate maximum matching is obtained in logarithmic time in the network size for bounded degree networks; ii) for any constant , a -approximate maximum matching is obtained in polynomial time, while obtaining a maximum matching can require an exponential time; and iii) convergence to a maximum matching is slower on preferential attachment networks than on small-world networks. These results allow us to predict that while humans can find a “good quality” matching quickly, they may be unable to find a maximum matching in feasible time. We show that the human subjects largely abide by prudence, and their collective behavior is closely tracked by the above predictions.", "Theoretical work suggests that structural properties of naturally occurring networks are important in shaping behavior and dynamics. 
However, the relationships between structure and behavior are difficult to establish through empirical studies, because the networks in such studies are typically fixed. We studied networks of human subjects attempting to solve the graph or network coloring problem, which models settings in which it is desirable to distinguish one9s behavior from that of one9s network neighbors. Networks generated by preferential attachment made solving the coloring problem more difficult than did networks based on cyclical structures, and “small worlds” networks were easier still. We also showed that providing more information can have opposite effects on performance, depending on network structure.", "Networks can affect a group's ability to solve a coordination problem. We utilize laboratory experiments to study the conditions under which groups of subjects can solve coordination games. We investigate a variety of different network structures, and we also investigate coordination games with symmetric and asymmetric payoffs. Our results show that network connections facilitate coordination in both symmetric and asymmetric games. Most significantly, we find that increases in the number of network connections encourage coordination even when payoffs are highly asymmetric. These results shed light on the conditions that may facilitate coordination in real-world networks.", "A growing literature on human networks suggests that the way we are connected influences both individual and group outcomes. Recent experimental studies in the social and computer sciences have claimed that higher network connectivity helps individuals solve coordination problems. However, this is not always the case, especially when we consider complex coordination tasks; we demonstrate that networks can have both constraining edges that inhibit collective action and redundant edges that encourage it. 
We show that the constraints imposed by additional edges can impede coordination even though these edges also increase communication. By contrast, edges that do not impose additional constraints facilitate coordination, as described in previous work. We explain why the negative effect of constraint trumps the positive effect of communication by analyzing coordination games as a special case of widely-studied constraint satisfaction problems. The results help us to understand the importance of problem complexity and network connections, and how different types of connections can influence real-world coordination.", "Many distributed collective decision-making processes must balance diverse individual preferences with a desire for collective unity. We report here on an extensive session of behavioral experiments on biased voting in networks of individuals. In each of 81 experiments, 36 human subjects arranged in a virtual network were financially motivated to reach global consensus to one of two opposing choices. No payments were made unless the entire population reached a unanimous decision within 1 min, but different subjects were paid more for consensus to one choice or the other, and subjects could view only the current choices of their network neighbors, thus creating tensions between private incentives and preferences, global unity, and network structure. Along with analyses of how collective and individual performance vary with network structure and incentives generally, we find that there are well-studied network topologies in which the minority preference consistently wins globally; that the presence of “extremist” individuals, or the awareness of opposing incentives, reliably improve collective performance; and that certain behavioral characteristics of individual subjects, such as “stubbornness,” are strongly correlated with earnings." ] }
1207.6475
1971142667
We consider a scenario in which leaders are required to recruit teams of followers. Each leader cannot recruit all followers, but interaction is constrained according to a bipartite network. The objective for each leader is to reach a state of local stability in which it controls a team whose size is equal to a given constraint. We focus on distributed strategies, in which agents have only local information of the network topology and propose a distributed algorithm in which leaders and followers act according to simple local rules. The performance of the algorithm is analyzed with respect to the convergence to a stable solution. Our results are as follows. For any network, the proposed algorithm is shown to converge to an approximate stable solution in polynomial time, namely the leaders quickly form teams in which the total number of additional followers required to satisfy all team size constraints is an arbitrarily small fraction of the entire population. In contrast, for general graphs there can be an exponential time gap between convergence to an approximate solution and to a stable solution.
Finally, also related to our work is the research on social exchange networks @cite_31 @cite_9, which considers a networked scenario in which each edge is associated with an economic value, nodes have to come to an agreement on how to share these values, and each agent can only finalize a single mutual exchange with a single neighbor. Recently, @cite_35 proposed a distributed algorithm that reaches approximate stability in linear time. However, we consider a different setup, since we allow leaders to build teams of multiple followers.
{ "cite_N": [ "@cite_9", "@cite_31", "@cite_35" ], "mid": [ "2083438425", "1979862606", "2963808832" ], "abstract": [ "The study of bargaining has a long history, but many basic settings are still rich with unresolved questions. In particular, consider a set of agents who engage in bargaining with one another,but instead of pairs of agents interacting in isolation,agents have the opportunity to choose whom they want to negotiate with, along the edges of a graph representing social-network relations. The area of network exchange theory in sociology has developed a large body of experimental evidence for the way in which people behave in such network-constrained bargaining situations, and it is a challenging problem to develop models that are both mathematically tractable and in general agreement with the results of these experiments. We analyze a natural theoretical model arising in network exchange theory, which can be viewed as a direct extension of the well-known Nash bargaining solution to the case of multiple agents interacting on a graph. While this generalized Nash bargaining solution is surprisingly effective at picking up even subtle differences in bargaining power that have been observed experimentally on small examples, it has remained an open question to characterize the values taken by this solution on general graphs, or to find an efficient means to compute it. Here we resolve these questions, characterizing the possible values of this bargaining solution, and giving an efficient algorithm to compute the set of possible values. Our result exploits connections to the structure of matchings in graphs, including decomposition theorems for graphs with perfect matchings, and also involves the development of new techniques. 
In particular, the values we are seeking turn out to correspond to a novel combinatorially defined point in the interior of a fractional relaxation of the matching problem.", "This paper presents a theoretical analysis of the structural determinants of power in exchange networks, along with research findings from laboratory experiments and a computer simulation of bargaining in network structures. Two theoretical traditions are dealth with: (1) point centrality in graph-theoretic representations of structure, as an approach to power distributions; and (2) power dependence principles applied to exchange networks. Measures of centrality available in the literature have the advantage of being easily applied to large and complex networks. In contrast, power dependence concepts were conceived for use in microsociology and are found to be cumbersome in the analysis of complex networks. But despite the relative difficulty of applying power-dependence theory to network structures, that approach generates hypotheses about power distributions which are confirmed at nearly every point in a laboratory experiment with five-person networks and at every point in a computer simulation of netwo...", "" ] }
1207.6037
2949680968
Social (or folksonomic) tagging has become a very popular way to describe content within Web 2.0 websites. Unlike taxonomies, which overimpose a hierarchical categorisation of content, folksonomies enable end-users to freely create and choose the categories (in this case, tags) that best describe some content. However, as tags are informally defined, continually changing, and ungoverned, social tagging has often been criticised for lowering, rather than increasing, the efficiency of searching, due to the number of synonyms, homonyms, polysemy, as well as the heterogeneity of users and the noise they introduce. To address this issue, a variety of approaches have been proposed that recommend users what tags to use, both when labelling and when looking for resources. As we illustrate in this paper, real world folksonomies are characterized by power law distributions of tags, over which commonly used similarity metrics, including the Jaccard coefficient and the cosine similarity, fail to compute. We thus propose a novel metric, specifically developed to capture similarity in large-scale folksonomies, that is based on a mutual reinforcement principle: that is, two tags are deemed similar if they have been associated to similar resources, and vice-versa two resources are deemed similar if they have been labelled by similar tags. We offer an efficient realisation of this similarity metric, and assess its quality experimentally, by comparing it against cosine similarity, on three large-scale datasets, namely Bibsonomy, MovieLens and CiteULike.
@cite_6 @cite_3 followed a different approach instead: they presented a formal model, which converts a folksonomy into an undirected weighted graph, and coupled it with a new search algorithm, namely "FolkRank", based on the seminal "PageRank" @cite_1. They applied this algorithm to Delicious, and showed how it can be used as a tag recommender system. Other extensions of recommender systems to folksonomy structures have been explored @cite_7 @cite_17; some of these have been assessed against one of the datasets we adopted in this study, namely BibSonomy @cite_15 @cite_13.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_3", "@cite_6", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "97061813", "2066636486", "1876696094", "1549014378", "34018906", "", "2090041477" ], "abstract": [ "Content organization over the Internet went through several interesting phases of evolution: from structured directories to unstructured Web search engines and more recently, to tagging as a way for aggregating information, a step towards the semantic web vision. Tagging allows ranking and data organization to directly utilize inputs from end users, enabling machine processing of Web content. Since tags are created by individual users in a free form, one important problem facing tagging is to identify most appropriate tags, while eliminating noise and spam. For this purpose, we define a set of general criteria for a good tagging system. These criteria include high coverage of multiple facets to ensure good recall, least effort to reduce the cost involved in browsing, and high popularity to ensure tag quality. We propose a collaborative tag suggestion algorithm using these criteria to spot high-quality tags. The proposed algorithm employs a goodness measure for tags derived from collective user authorities to combat spam. The goodness measure is iteratively adjusted by a reward-penalty algorithm, which also incorporates other sources of tags, e.g., content-based auto-generated tags. Our experiments based on My Web 2.0 show that the algorithm is effective.", "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http: google.stanford.edu . To engineer a search engine is a challenging task. 
Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.", "Social bookmark tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. These systems provide currently relatively few structure. We discuss in this paper, how association rule mining can be adopted to analyze and structure folksonomies, and how the results can be used for ontology learning and supporting emergent semantics. We demonstrate our approach on a large scale dataset stemming from an online system.", "Social bookmark tools are rapidly emerging on the Web. In such systems users are setting up lightweight conceptual structures called folksonomies. The reason for their immediate success is the fact that no specific skills are needed for participating. At the moment, however, the information retrieval support is limited. 
We present a formal model and a new search algorithm for folksonomies, called FolkRank, that exploits the structure of the folksonomy. The proposed algorithm is also applied to find communities within the folksonomy and is used to structure search results. All findings are demonstrated on a large scale dataset.", "Auch erschienen in: Moor, Aldo de u.a. (Hrsg.): Proceedings of the First Conceptual Structures Tool Interoperability Workshop at the 14th International Conference on Conceptual Structures. Aalborg : Universitetsforlag, 2006. S. 87-102", "", "In this paper, we look at the \"social tag prediction\" problem. Given a set of objects, and a set of tags applied to those objects by users, can we predict whether a given tag could should be applied to a particular object? We investigated this question using one of the largest crawls of the social bookmarking system del.icio.us gathered to date. For URLs in del.icio.us, we predicted tags based on page text, anchor text, surrounding hosts, and other tags applied to the URL. We found an entropy-based metric which captures the generality of a particular tag and informs an analysis of how well that tag can be predicted. We also found that tag-based association rules can produce very high-precision predictions as well as giving deeper understanding into the relationships between tags. Our results have implications for both the study of tagging systems as potential information retrieval tools, and for the design of such systems." ] }
1207.6037
2949680968
Social (or folksonomic) tagging has become a very popular way to describe content within Web 2.0 websites. Unlike taxonomies, which overimpose a hierarchical categorisation of content, folksonomies enable end-users to freely create and choose the categories (in this case, tags) that best describe some content. However, as tags are informally defined, continually changing, and ungoverned, social tagging has often been criticised for lowering, rather than increasing, the efficiency of searching, due to the number of synonyms, homonyms, polysemy, as well as the heterogeneity of users and the noise they introduce. To address this issue, a variety of approaches have been proposed that recommend users what tags to use, both when labelling and when looking for resources. As we illustrate in this paper, real world folksonomies are characterized by power law distributions of tags, over which commonly used similarity metrics, including the Jaccard coefficient and the cosine similarity, fail to compute. We thus propose a novel metric, specifically developed to capture similarity in large-scale folksonomies, that is based on a mutual reinforcement principle: that is, two tags are deemed similar if they have been associated to similar resources, and vice-versa two resources are deemed similar if they have been labelled by similar tags. We offer an efficient realisation of this similarity metric, and assess its quality experimentally, by comparing it against cosine similarity, on three large-scale datasets, namely Bibsonomy, MovieLens and CiteULike.
Similarity measures have often been evaluated on different datasets, making it difficult to assess their relative advantages and disadvantages in different domains. Furthermore, they have often been applied to manipulated datasets, making the comparison even more difficult. Indeed, in order to compare them critically, an evaluation framework has recently been proposed @cite_18, with the aim of providing support to systematically compare several tag similarity measures, using data from Delicious @cite_19. This work contributes to the assessment of the suitability of similarity measures to scenarios characterized by power-law distributions of tags and non-independence of data, showing how traditional measures like cosine similarity fail to work, and proposing instead an alternative, iterative measure that provides good accuracy.
{ "cite_N": [ "@cite_19", "@cite_18" ], "mid": [ "1505460822", "2152019382" ], "abstract": [ "Collaborative tagging systems have nowadays become important data sources for populating semantic web applications. For tasks like synonym detection and discovery of concept hierarchies, many researchers introduced measures of tag similarity. Even though most of these measures appear very natural, their design often seems to be rather ad hoc, and the underlying assumptions on the notion of similarity are not made explicit. A more systematic characterization and validation of tag similarity in terms of formal representations of knowledge is still lacking. Here we address this issue and analyze several measures of tag similarity: Each measure is computed on data from the social bookmarking system del.icio.us and a semantic grounding is provided by mapping pairs of similar tags in the folksonomy to pairs of synsets in Wordnet, where we use validated measures of semantic distance to characterize the semantic relation between the mapped tags. This exposes important features of the investigated similarity measures and indicates which ones are better suited in the context of a given semantic application.", "Social bookmarking systems are becoming increasingly important data sources for bootstrapping and maintaining Semantic Web applications. Their emergent information structures have become known as folksonomies. A key question for harvesting semantics from these systems is how to extend and adapt traditional notions of similarity to folksonomies, and which measures are best suited for applications such as community detection, navigation support, semantic search, user profiling and ontology learning. Here we build an evaluation framework to compare various general folksonomy-based similarity measures, which are derived from several established information-theoretic, statistical, and practical measures. 
Our framework deals generally and symmetrically with users, tags, and resources. For evaluation purposes we focus on similarity between tags and between resources and consider different methods to aggregate annotations across users. After comparing the ability of several tag similarity measures to predict user-created tag relations, we provide an external grounding by user-validated semantic proxies based on WordNet and the Open Directory Project. We also investigate the issue of scalability. We find that mutual information with distributional micro-aggregation across users yields the highest accuracy, but is not scalable; per-user projection with collaborative aggregation provides the best scalable approach via incremental computations. The results are consistent across resource and tag similarity." ] }
1207.5466
2952413440
In order to generate synthetic basket data sets for better benchmark testing, it is important to integrate characteristics from real-life databases into the synthetic basket data sets. The characteristics that could be used for this purpose include the frequent itemsets and association rules. The problem of generating synthetic basket data sets from frequent itemsets is generally referred to as inverse frequent itemset mining. In this paper, we show that the problem of approximate inverse frequent itemset mining is NP -complete. Then we propose and analyze an approximate algorithm for approximate inverse frequent itemset mining, and discuss privacy issues related to the synthetic basket data set. In particular, we propose an approximate algorithm to determine the privacy leakage in a synthetic basket data set.
Privacy preserving data mining has been a very active research topic in the last few years. Two general approaches have emerged within the privacy preserving data mining framework: data perturbation and distributed secure multi-party computation. As this paper focuses on data perturbation for a single site, we will not discuss the multi-party computation based approach for distributed cases (see @cite_27 for a recent survey).
{ "cite_N": [ "@cite_27" ], "mid": [ "2001336960" ], "abstract": [ "Research in secure distributed computation, which was done as part of a larger body of research in the theory of cryptography, has achieved remarkable results. It was shown that non-trusting parties can jointly compute functions of their different inputs while ensuring that no party learns anything but the defined output of the function. These results were shown using generic constructions that can be applied to any function that has an efficient representation as a circuit. We describe these results, discuss their efficiency, and demonstrate their relevance to privacy preserving computation of data mining algorithms. We also show examples of secure computation of data mining algorithms that use these generic constructions." ] }
1207.5466
2952413440
In order to generate synthetic basket data sets for better benchmark testing, it is important to integrate characteristics from real-life databases into the synthetic basket data sets. The characteristics that could be used for this purpose include the frequent itemsets and association rules. The problem of generating synthetic basket data sets from frequent itemsets is generally referred to as inverse frequent itemset mining. In this paper, we show that the problem of approximate inverse frequent itemset mining is NP -complete. Then we propose and analyze an approximate algorithm for approximate inverse frequent itemset mining, and discuss privacy issues related to the synthetic basket data set. In particular, we propose an approximate algorithm to determine the privacy leakage in a synthetic basket data set.
A general framework for privacy preserving database application testing has been proposed, which generates synthetic data sets based on some a-priori knowledge about the production databases @cite_28. General a-priori knowledge such as statistics and rules can also be taken as constraints on the underlying data records. The problem investigated in this paper can be thought of as a simplified problem, where the data set is a binary one and the constraints are the frequencies of given frequent itemsets. However, the techniques developed in @cite_28 are infeasible here, as the number of items is much larger than the number of attributes in general data sets.
{ "cite_N": [ "@cite_28" ], "mid": [ "1591060234" ], "abstract": [ "Synthetic data plays an important role in software testing. In this paper, we initiate the study of synthetic data generation models for the purpose of application software performance testing. In particular, we will discuss models for protecting privacy in synthetic data generations. Within this model, we investigate the feasibility and techniques for privacy preserving synthetic database generation that can be used for database application performance testing. The methodologies that we will present will be useful for general privacy preserving software performance testing." ] }
1207.4958
2949073324
Itemset mining has been an active area of research due to its successful application in various data mining scenarios including finding association rules. Though most of the past work has been on finding frequent itemsets, infrequent itemset mining has demonstrated its utility in web mining, bioinformatics and other fields. In this paper, we propose a new algorithm based on the pattern-growth paradigm to find minimally infrequent itemsets. A minimally infrequent itemset has no subset which is also infrequent. We also introduce the novel concept of residual trees. We further utilize the residual trees to mine multiple level minimum support itemsets where different thresholds are used for finding frequent itemsets for different lengths of the itemset. Finally, we analyze the behavior of our algorithm with respect to different parameters and show through experiments that it outperforms the competing ones.
@cite_0 introduced a novel algorithm, the FP-growth method, for mining frequent itemsets. The FP-growth method is a depth-first search algorithm. A data structure called the FP-tree is used for storing the frequency information of itemsets in the original transaction database in a compressed form. Only two database scans are needed and no candidate generation is required, which makes the FP-growth method much faster than Apriori. @cite_11 introduced a novel technique that greatly reduces the need to traverse FP-trees. In this paper, we use a variation of the FP-tree for mining the MIIs.
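The two database scans and prefix-tree compression can be sketched in a few lines of Python. This is a simplified illustration of FP-tree construction only, with names of our own choosing; the header-table links and the recursive mining phase of FP-growth are omitted.

```python
from collections import defaultdict

class FPNode:
    """One FP-tree node: an item label, a count, and child links."""
    def __init__(self, item, parent=None):
        self.item = item
        self.count = 0
        self.parent = parent
        self.children = {}

def build_fp_tree(transactions, min_support):
    """Build an FP-tree in two passes over the transaction database.

    Pass 1 counts item frequencies; pass 2 inserts each transaction,
    stripped of infrequent items and sorted by descending frequency,
    into a shared prefix tree, so common prefixes are stored once.
    """
    counts = defaultdict(int)
    for t in transactions:                      # pass 1: item counts
        for item in t:
            counts[item] += 1
    frequent = {i for i, c in counts.items() if c >= min_support}

    root = FPNode(None)
    for t in transactions:                      # pass 2: tree insertion
        items = sorted((i for i in set(t) if i in frequent),
                       key=lambda i: (-counts[i], i))
        node = root
        for item in items:
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, parent=node)
                node.children[item] = child
            child.count += 1
            node = child
    return root, counts
```

The compression is visible in the tree itself: transactions sharing a prefix such as a→b contribute to a single branch whose counts accumulate, which is why only two scans of the database are ever needed.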
{ "cite_N": [ "@cite_0", "@cite_11" ], "mid": [ "2064853889", "2151953639" ], "abstract": [ "Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and or long patterns. In this study, we propose a novel frequent pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our FP-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent pattern mining methods.", "Efficient algorithms for mining frequent itemsets are crucial for mining association rules as well as for many other data mining tasks. Methods for mining frequent itemsets have been implemented using a prefix-tree structure, known as an FP-tree, for storing compressed information about frequent itemsets. 
Numerous experimental results have demonstrated that these algorithms perform extremely well. In this paper, we present a novel FP-array technique that greatly reduces the need to traverse FP-trees, thus obtaining significantly improved performance for FP-tree-based algorithms. Our technique works especially well for sparse data sets. Furthermore, we present new algorithms for mining all, maximal, and closed frequent itemsets. Our algorithms use the FP-tree data structure in combination with the FP-array technique efficiently and incorporate various optimization techniques. We also present experimental results comparing our methods with existing algorithms. The results show that our methods are the fastest for many cases. Even though the algorithms consume much memory when the data sets are sparse, they are still the fastest ones when the minimum support is low. Moreover, they are always among the fastest algorithms and consume less memory than other methods when the data sets are dense." ] }
1207.4958
2949073324
Itemset mining has been an active area of research due to its successful application in various data mining scenarios including finding association rules. Though most of the past work has been on finding frequent itemsets, infrequent itemset mining has demonstrated its utility in web mining, bioinformatics and other fields. In this paper, we propose a new algorithm based on the pattern-growth paradigm to find minimally infrequent itemsets. A minimally infrequent itemset has no subset which is also infrequent. We also introduce the novel concept of residual trees. We further utilize the residual trees to mine multiple level minimum support itemsets where different thresholds are used for finding frequent itemsets for different lengths of the itemset. Finally, we analyze the behavior of our algorithm with respect to different parameters and show through experiments that it outperforms the competing ones.
To the best of our knowledge there has been only one other work that discusses the mining of MIIs. @cite_9 proposed an algorithm based upon the SUDA2 algorithm developed for finding minimally unique itemsets (minimal itemsets of frequency one) @cite_10 @cite_6 . The authors also showed that the minimal infrequent itemset problem is NP-complete @cite_9 .
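The definition being mined can be pinned down with a brute-force reference implementation. This is an exponential-time illustration of the *definition* only, not the algorithm of @cite_9 : an itemset is minimally infrequent when it is below the support threshold while every proper subset is frequent.

```python
from itertools import combinations

def support(itemset, transactions):
    """Number of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if set(itemset) <= set(t))

def minimally_infrequent(transactions, min_support):
    """Brute-force reference: return the itemsets whose support is below
    min_support but all of whose proper subsets are frequent.
    Exponential in the number of items; for illustration only."""
    items = sorted({i for t in transactions for i in t})
    result = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            if support(cand, transactions) >= min_support:
                continue                     # frequent, hence not infrequent
            # minimality: every proper subset must be frequent
            if all(support(sub, transactions) >= min_support
                   for r in range(1, k)
                   for sub in combinations(cand, r)):
                result.append(frozenset(cand))
    return result
```

For instance, with transactions {ab, ac, bc, abc} and a threshold of 2, every singleton and pair is frequent but abc occurs only once, so {a, b, c} is the unique minimally infrequent itemset.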
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_6" ], "mid": [ "111044289", "1975967982", "" ], "abstract": [ "A new algorithm for minimal infrequent itemset mining is presented. Potential applications of finding infrequent itemsets include statistical disclosure risk assessment, bioinformatics, and fraud detection. This is the first algorithm designed specifically for finding these rare itemsets. Many itemset properties used implicitly in the algorithm are proved. The problem is shown to be NP-complete. Experimental results are then presented.", "A new algorithm, SUDA2, is presented which finds minimally unique itemsets i.e., minimal itemsets of frequency one. These itemsets, referred to as Minimal Sample Uniques (MSUs), are important for statistical agencies who wish to estimate the risk of disclosure of their datasets. SUDA2 is a recursive algorithm which uses new observations about the properties of MSUs to prune and traverse the search space. Experimental comparisons with previous work demonstrate that SUDA2 is several orders of magnitude faster, enabling datasets of significantly more columns to be addressed. The ability of SUDA2 to identify the boundaries of the search space for MSUs is clearly demonstrated.", "" ] }
1207.4958
2949073324
Itemset mining has been an active area of research due to its successful application in various data mining scenarios including finding association rules. Though most of the past work has been on finding frequent itemsets, infrequent itemset mining has demonstrated its utility in web mining, bioinformatics and other fields. In this paper, we propose a new algorithm based on the pattern-growth paradigm to find minimally infrequent itemsets. A minimally infrequent itemset has no subset which is also infrequent. We also introduce the novel concept of residual trees. We further utilize the residual trees to mine multiple level minimum support itemsets where different thresholds are used for finding frequent itemsets for different lengths of the itemset. Finally, we analyze the behavior of our algorithm with respect to different parameters and show through experiments that it outperforms the competing ones.
@cite_5 proposed the MLMS model for constraining the number of frequent and infrequent itemsets generated, together with a candidate generation-and-test based algorithm. The downward closure property is absent in the MLMS model, and thus the algorithm checks the supports of all itemsets occurring at least once in the transaction database in order to find the frequent itemsets. Generally, the support thresholds are chosen separately for itemsets of different lengths, subject to the constraint @math . @cite_2 extended the algorithm of @cite_5 to include an interestingness parameter while mining frequent and infrequent itemsets.
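The failure of downward closure under length-dependent thresholds can be seen in a small brute-force sketch. This is our own illustration of the MLMS classification, not the Apriori_MLMS algorithm of @cite_5 ; `thresholds[k-1]` plays the role of the minimum support ms(k) for length-k itemsets.

```python
from itertools import combinations

def mlms_classify(transactions, thresholds):
    """Classify every itemset occurring at least once as frequent or
    infrequent under length-dependent minimum supports (MLMS model).
    Downward closure need not hold, so every occurring itemset is
    checked explicitly. thresholds[k-1] is ms(k); lengths beyond the
    list reuse the last threshold."""
    occurring = set()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, len(items) + 1):
            occurring.update(combinations(items, k))
    counts = dict.fromkeys(occurring, 0)
    for t in transactions:
        ts = set(t)
        for c in occurring:
            if set(c) <= ts:
                counts[c] += 1
    frequent, infrequent = set(), set()
    for c, n in counts.items():
        ms = thresholds[min(len(c), len(thresholds)) - 1]
        (frequent if n >= ms else infrequent).add(c)
    return frequent, infrequent
```

With transactions {ab, ab, ac} and ms(1) = 3, ms(2) = 2, the pair ('a','b') comes out frequent even though its subset ('b',) is infrequent, so Apriori-style pruning by subset frequency is unsound in this model.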
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "2143945853", "1520720062" ], "abstract": [ "When we study positive and negative association rules simultaneously, infrequent itemsets become very important because there are many valued negative association rules in them. However, how to discover infrequent itemsets is still an open problem. In this paper, we propose a multiple level minimum supports (MLMS) model to constrain infrequent itemsets and frequent itemsets by giving deferent minimum supports to itemsets with deferent length. We compare the MLMS model with the existing models. We also design an algorithm Apriori_MLMS to discover simultaneously both frequent and infrequent itemsets based on MLMS model. The experimental results and comparisons show the validity of the algorithm.", "MLMS (Multiple Level Minimum Supports) model which uses multiple level minimum supports to discover infrequent itemsets and frequent itemsets simultaneously is proposed in our previous work. The reason to discover infrequent itemsets is that there are many valued negative association rules in them. However, some of the itemsets discovered by the MLMS model are not interesting and ought to be pruned. In one of Xindong Wu's papers [1], a pruning strategy (we call it Wu's pruning strategy here) is used to prune uninteresting itemsets. But the pruning strategy is only applied to single minimum support. In this paper, we modify the Wu's pruning strategy to adapt to the MLMS model to prune uninteresting itemsets and we call the MLMS model with the modified Wu's pruning strategy IMLMS (Interesting MLMS) model. Based on the IMLMS model, we design an algorithm to discover simultaneously both interesting frequent itemsets and interesting infrequent itemsets. The experimental results show the validity of the model." ] }
1207.5226
2950692016
Functional dependencies (FDs) specify the intended data semantics while violations of FDs indicate deviation from these semantics. In this paper, we study a data cleaning problem in which the FDs may not be completely correct, e.g., due to data evolution or incomplete knowledge of the data semantics. We argue that the notion of relative trust is a crucial aspect of this problem: if the FDs are outdated, we should modify them to fit the data, but if we suspect that there are problems with the data, we should modify the data to fit the FDs. In practice, it is usually unclear how much to trust the data versus the FDs. To address this problem, we propose an algorithm for generating non-redundant solutions (i.e., simultaneous modifications of the data and the FDs) corresponding to various levels of relative trust. This can help users determine the best way to modify their data and or FDs to achieve consistency.
The idea of modifying a supplied set of FDs to better fit the data was also discussed in @cite_6 . The goal of that work was to generate a small set of Conditional Functional Dependencies (CFDs) by modifying the embedded FD. Modifying the data and relative trust were not discussed.
{ "cite_N": [ "@cite_6" ], "mid": [ "2023052779" ], "abstract": [ "We present Data Auditor, a tool for exploring data quality and data semantics. Given a rule or an integrity constraint and a target relation, Data Auditor computes pattern tableaux, which concisely summarize subsets of the relation that (mostly) satisfy or (mostly) fail the constraint. This paper describes 1) the architecture and user interface of Data Auditor, 2) the supported constraints for testing data consistency and completeness, 3) the heuristics used by Data Auditor to \"tune\" a given constraint or its associated parameters for better fit with the data, and 4) several demonstration scenarios. using real data sets." ] }
1207.5226
2950692016
Functional dependencies (FDs) specify the intended data semantics while violations of FDs indicate deviation from these semantics. In this paper, we study a data cleaning problem in which the FDs may not be completely correct, e.g., due to data evolution or incomplete knowledge of the data semantics. We argue that the notion of relative trust is a crucial aspect of this problem: if the FDs are outdated, we should modify them to fit the data, but if we suspect that there are problems with the data, we should modify the data to fit the FDs. In practice, it is usually unclear how much to trust the data versus the FDs. To address this problem, we propose an algorithm for generating non-redundant solutions (i.e., simultaneous modifications of the data and the FDs) corresponding to various levels of relative trust. This can help users determine the best way to modify their data and or FDs to achieve consistency.
The problem of cleaning the data in order to satisfy a fixed set of FDs has been studied in, e.g., @cite_12 @cite_0 @cite_2 @cite_1 . In our context, these solutions may be classified as having a fixed threshold @math of @math tuples. Thus, existing techniques for FD discovery are not applicable to our problem.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_12", "@cite_2" ], "mid": [ "2167333415", "2137775416", "", "2047745978" ], "abstract": [ "Data integrated from multiple sources may contain inconsistencies that violate integrity constraints. The constraint repair problem attempts to find \"low cost\" changes that, when applied, will cause the constraints to be satisfied. While in most previous work repair cost is stated in terms of tuple insertions and deletions, we follow recent work to define a database repair as a set of value modifications. In this context, we introduce a novel cost framework that allows for the application of techniques from record-linkage to the search for good repairs. We prove that finding minimal-cost repairs in this model is NP-complete in the size of the database, and introduce an approach to heuristic repair-construction based on equivalence classes of attribute values. Following this approach, we define two greedy algorithms. While these simple algorithms take time cubic in the size of the database, we develop optimizations inspired by algorithms for duplicate-record detection that greatly improve scalability. We evaluate our framework and algorithms on synthetic and real data, and show that our proposed optimizations greatly improve performance at little or no cost in repair quality.", "Two central criteria for data quality are consistency and accuracy. Inconsistencies and errors in a database often emerge as violations of integrity constraints. Given a dirty database D, one needs automated methods to make it consistent, i.e., find a repair D' that satisfies the constraints and \"minimally\" differs from D. Equally important is to ensure that the automatically-generated repair D' is accurate, or makes sense, i.e., D' differs from the \"correct\" data within a predefined bound. This paper studies effective methods for improving both data consistency and accuracy. 
We employ a class of conditional functional dependencies (CFDs) proposed in [6] to specify the consistency of the data, which are able to capture inconsistencies and errors beyond what their traditional counterparts can catch. To improve the consistency of the data, we propose two algorithms: one for automatically computing a repair D' that satisfies a given set of CFDs, and the other for incrementally finding a repair in response to updates to a clean database. We show that both problems are intractable. Although our algorithms are necessarily heuristic, we experimentally verify that the methods are effective and efficient. Moreover, we develop a statistical method that guarantees that the repairs found by the algorithms are accurate above a predefined rate without incurring excessive user interaction.", "", "We study the problem of repairing an inconsistent database that violates a set of functional dependencies by making the smallest possible value modifications. For an inconsistent database, we define an optimum repair as a database that satisfies the functional dependencies, and minimizes, among all repairs, a distance measure that depends on the number of corrections made in the database and the weights of tuples modified. We show that like other versions of the repair problem, checking the existence of a repair within a certain distance of a database is NP-complete. We also show that finding a constant-factor approximation for the optimum repair for any set of functional dependencies is NP-hard. Furthermore, there is a small constant and a set of functional dependencies, for which finding an approximate solution for the optimum repair within the factor of that constant is also NP-hard. Then we present an approximation algorithm that for a fixed set of functional dependencies and an arbitrary input inconsistent database, produces a repair whose distance to the database is within a constant factor of the optimum repair distance. 
We finally show how the approximation algorithm can be used in data cleaning using a recent extension to functional dependencies, called conditional functional dependencies." ] }
1207.4286
2761697135
Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons.
The problem of designing transfer functions for numeric domains is as old as the field of abstract interpretation itself @cite_76 , and even the technique of using primed and unprimed variables to capture and abstract the semantics of instructions and functions dates back to the thesis work of Halbwachs @cite_16 . However, even for a fixed abstract domain, there are typically many ways of designing and implementing transfer functions. Cousot and Halbwachs [Sect. 4.2.1], for example, discussed several ways to realise a transfer function for assignments such as @math in the polyhedral domain; abstracting integer division @math is an interesting study in itself @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_76", "@cite_16" ], "mid": [ "2167310095", "2043100293", "2584348005" ], "abstract": [ "Verification is usually performed on a high-level view of the software, either specification or program source code. However in certain circumstances verification is more relevant when performed at the machine code level. This paper focuses on automatic test data generation from a standalone executable. Low-level analysis is much more difficult than high-level analysis since even the control-flow graph is not available and bit-level instructions have to be modelled faithfully. We show how \"path-based\" structural test data generation can be adapted from structured language to machine code, using both state-of-the-art technologies and innovative techniques. Our results have been implemented in a tool named OSMOSE and encouraging experiments have been conducted.", "A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). 
Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).", "In [13], a new size-change principle was proposed to verify termination of functional programs automatically. We extend this principle in order to prove termination and innermost termination of arbitrary term rewrite systems (TRSs). Moreover, we compare this approach with existing techniques for termination analysis of TRSs (such as recursive path orderings or dependency pairs). It turns out that the size-change principle on its own fails for many examples that can be handled by standard techniques for rewriting, but there are also TRSs where it succeeds whereas existing rewriting techniques fail. In order to benefit from their respective advantages, we show how to combine the size-change principle with classical orderings and with dependency pairs. In this way, we obtain a new approach for automated termination proofs of TRSs which is more powerful than previous approaches." ] }
1207.4286
2761697135
Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons.
Transfer functions can always be found for domains of finite height using the method of @cite_34 , provided one is prepared to pay the cost of repeatedly calling a decision procedure or a theorem prover, possibly many times on each application of a transformer. This motivates applying a decision procedure in order to compute a best transformer offline, prior to the actual analysis @cite_51 @cite_66 , so as to both simplify and speed up its application.
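The offline view of a best transformer can be illustrated on a toy sign domain. This is a hedged sketch under strong simplifications: the concrete universe is a tiny finite set enumerated exhaustively, where the cited constructions instead query a decision procedure over symbolic states; the domain names and tabulation are our own.

```python
# Sign domain over a small finite concrete universe, for illustration.
CONC = set(range(-4, 5))
GAMMA = {
    'bot':  set(),
    'neg':  {x for x in CONC if x < 0},
    'zero': {0},
    'pos':  {x for x in CONC if x > 0},
    'top':  set(CONC),
}

def alpha(values):
    """Most precise abstract sign covering a set of concrete values."""
    for a in ('bot', 'neg', 'zero', 'pos', 'top'):
        if values <= GAMMA[a]:
            return a

def best_transformer(f):
    """Tabulate the best abstract transformer of a concrete unary op f,
    computed offline as alpha(f(gamma(a))) for each abstract value a --
    the brute-force analogue of the decision-procedure constructions.
    Results falling outside the modelled universe are dropped."""
    out = {}
    for a, conc in GAMMA.items():
        image = {f(x) for x in conc}
        out[a] = alpha(image & CONC)   # join of all concrete outcomes
    return out
```

Once tabulated, applying the transformer during the analysis is a dictionary lookup, which is exactly the simplification and speed-up that offline computation buys.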
{ "cite_N": [ "@cite_34", "@cite_51", "@cite_66" ], "mid": [ "", "1485150864", "2095997776" ], "abstract": [ "", "Traditionally, transfer functions have been manually designed for each operation in a program. Recently, however, there has been growing interest in computing transfer functions, motivated by the desire to reason about sequences of operations that constitute basic blocks. This paper focuses on deriving transfer functions for intervals -- possibly the most widely used numeric domain--and shows how they can be computed from Boolean formulae which are derived through bit-blasting. This approach is entirely automatic, avoids complicated elimination algorithms, and provides a systematic way of handling wrap-arounds (integer overflows and underflows) which arise in machine arithmetic.", "Verification is usually performed on a high-level view of the software, either specification or program source code. However, in certain circumstances verification is more relevant when performed at the machine-code level. This paper focuses on automatic test data generation from a stand-alone executable. Low-level analysis is much more difficult than high-level analysis since even the control-flow graph is not available and bit-level instructions have to be modelled faithfully. The paper shows how ‘path-based’ structural test data generation can be adapted from structured language to machine code, using both state-of-the-art technologies and innovative techniques. The results have been implemented in a tool named OSMOSE and encouraging experiments have been conducted. Copyright © 2010 John Wiley & Sons, Ltd. (This paper is an extended version of results presented at ICST 2008 1.)" ] }
1207.4286
2761697135
Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons.
Transfer functions for low-level code have been synthesised for intervals using BDDs @cite_45 by applying interval subdivision where the extrema representing the interval are themselves represented as bit-vectors @cite_54 . If @math is a unary operation on an unsigned byte, then its abstract transformer @math on @math can be defined recursively. If @math then @math whereas if @math then @math where @math and @math . Binary operations can likewise be decomposed by repeatedly dividing squares into their quadrants. The 8-bit inputs, @math and @math , can be represented as 8-bit vectors, as can the 8-bit outputs, so as to represent @math with a BDD. This permits caching to be applied when @math is computed, which reduces the time needed to compute a best transformer to approximately 24 hours for each 8-bit operation. It is difficult to see how this approach can be extended to blocks that involve many variables without a step-change in BDD performance.
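A minimal sketch of the subdivision idea for intervals follows. It is our own simplification of the scheme described above: a plain dict stands in for the BDD-level caching, halving an interval replaces the quadrant decomposition used for binary operations, and the op is an arbitrary Python function on unsigned bytes.

```python
def interval_transformer(f, lo, hi):
    """Best interval transformer of a unary op f on unsigned bytes over
    [lo, hi], by recursive subdivision: a singleton is evaluated
    concretely (masked to 8 bits), a larger interval is split in half
    and the two result intervals are joined. Memoised so shared
    subintervals are computed once."""
    cache = {}

    def go(a, b):
        if (a, b) in cache:
            return cache[(a, b)]
        if a == b:                       # base case: one concrete input
            v = f(a) & 0xFF
            res = (v, v)
        else:                            # split and join the halves
            m = (a + b) // 2
            l1, h1 = go(a, m)
            l2, h2 = go(m + 1, b)
            res = (min(l1, l2), max(h1, h2))
        cache[(a, b)] = res
        return res

    return go(lo, hi)
```

Wrap-around is captured precisely: for f(x) = 2x mod 256 on [100, 130] the result is [0, 254], since 2·128 wraps to 0 while 2·127 = 254.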
{ "cite_N": [ "@cite_54", "@cite_45" ], "mid": [ "2125076295", "2080267935" ], "abstract": [ "Embedded software must meet conflicting requirements such as be-ing highly reliable, running on resource-constrained platforms, and being developed rapidly. Static program analysis can help meet all of these goals. People developing analyzers for embedded object code face a difficult problem: writing an abstract version of each instruction in the target architecture(s). This is currently done by hand, resulting in abstract operations that are both buggy and im-precise. We have developed Hoist: a novel system that solves these problems by automatically constructing abstract operations using a microprocessor (or simulator) as its own specification. With almost no input from a human, Hoist generates a collection of C func-tions that are ready to be linked into an abstract interpreter. We demonstrate that Hoist generates abstract operations that are cor-rect, having been extensively tested, sufficiently fast, and substan-tially more precise than manually written abstract operations. Hoist is currently limited to eight-bit machines due to costs exponential in the word size of the target architecture. It is essential to be able to analyze software running on these small processors: they are important and ubiquitous, with many embedded and safety-critical systems being based on them.", "In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. 
Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach." ] }
1207.4286
2761697135
Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons.
The question of how to construct a best abstract transformer has also been considered in the context of Markov decision processes (MDPs), for which the first abstract interpretation framework has recently been developed @cite_42 . The framework affords the calculation of both lower and upper bounds on reachability probabilities, which is novel. The work focuses on predicate abstraction @cite_73 , which has had some success with large MDPs, and seeks to answer the question of, for a given set of predicates, what is the most precise abstract program that is still a correct abstraction. More generally, the work illustrates that the question of how to compute the best abstract transformer is pertinent even in a probabilistic setting.
{ "cite_N": [ "@cite_42", "@cite_73" ], "mid": [ "2098245493", "1497571013" ], "abstract": [ "This paper investigates relative precision and optimality of analyses for concurrent probabilistic systems. Aiming at the problem at the heart of probabilistic model checking – computing the probability of reaching a particular set of states – we leverage the theory of abstract interpretation. With a focus on predicate abstraction, we develop the first abstract-interpretation framework for Markov decision processes which admits to compute both lower and upper bounds on reachability probabilities. Further, we describe how to compute and approximate such abstractions using abstraction refinement and give experimental results.", "In this paper, we propose a method for the automatic construction of an abstract state graph of an arbitrary system using the Pvs theorem prover." ] }
1207.4286
2761697135
Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons.
The classical approach to handling overflows is to follow the application of a transfer function with overflow and underflow checks: program variables are considered to be unbounded for the purposes of applying the transfer function, but then their finite sizes are taken into account and range tests and, if necessary, range adjustments are applied to model any wrapping. This approach has been implemented in the analyzer @cite_61 @cite_38 . However, for convex polyhedra, it is also possible to revise the concretisation map to reflect truncation so as to remove the range tests from most abstract operations @cite_49 @cite_48 . Another choice is to deploy congruence relations @cite_26 @cite_6 where the modulus is a power of two so as to reflect the wrapping in the abstract domain itself @cite_77 . This approach can be applied to find both relationships between different words @cite_77 and the bits that constitute words @cite_83 @cite_40 @cite_66 (the relative precision of these two approaches has recently been compared @cite_57 ). Bit-level models have been combined with range inference @cite_53 @cite_39 , though neither of these works addresses relational abstraction or transfer function synthesis.
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_26", "@cite_48", "@cite_53", "@cite_6", "@cite_39", "@cite_57", "@cite_77", "@cite_40", "@cite_83", "@cite_49", "@cite_66" ], "mid": [ "2170736936", "", "", "1584710274", "2003412429", "2244843980", "", "", "1991504773", "1824807610", "1844582961", "", "2095997776" ], "abstract": [ "We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety critical embedded software.The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization (Sect. 3 and 7), the symbolic manipulation of expressions to improve the precision of abstract transfer functions (Sect. 6.3), the octagon (Sect. 6.2.2), ellipsoid (Sect. 6.2.3), and decision tree (Sect. 6.2.4) abstract domains, all with sound handling of rounding errors in oating point computations, widening strategies (with thresholds: Sect. 7.1.2, delayed: Sect. 7.1.3) and the automatic determination of the parameters (parametrized packing: Sect. 7.2).", "", "", "Variables in programs are usually confined to a fixed number of bits and results that require more bits are truncated. Due to the use of 32-bit and 64-bit variables, inadvertent overflows are rare. However, a sound static analysis must reason about overflowing calculations and conversions between unsigned and signed integers; the latter remaining a common source of subtle programming errors. 
Rather than polluting an analysis with the low-level details of modelling two's complement wrapping behaviour, this paper presents a computationally light-weight solution based on polyhedral analysis which eliminates the need to check for wrapping when evaluating most (particularly linear) assignments.", "Symbolic decision trees are not the only way to correlate the relationship between flags and numeric variables. Boolean formulae can also represent such relationships where the integer variables are modelled with bit-vectors of propositional variables. Boolean formulae can be composed to express the semantics of a block and program state, but they are hardly tractable, hence the need to compute their abstractions. This paper shows how incremental SAT can be applied to derive range and set abstractions for bit-vectors that are constrained by Boolean formulae.", "We present several new static analysis frameworks applying to rational numbers, and more precisely, designed for discovering congruence properties satisfied by rational (or real) variables of programs. Two of them deal with additive congruence properties and generalize linear equation analysis [12] and congruence analyses on integer numbers [8, 9]. The others are based on multiplicative congruence properties in the set of positive rational numbers. Among other potential applications, we exemplify the interest of all these analyses for optimizing the representation of rational or real valued variables.", "", "", "We consider integer arithmetic modulo a power of 2 as providedby mainstream programming languages like Java or standardimplementations of C. The difficulty here is that, for w> 1, the ring Zm of integers modulom = 2w has zero divisors and thus cannotbe embedded into a field. Not withstanding that, we present intra-and interprocedural algorithms for inferring for every programpoint u affine relations between program variables valid atu. 
If conditional branching is replaced with nondeterministic branching, our algorithms are not only sound but also complete in that they detect all valid affine relations in a natural class of programs. Moreover, they run in time linear in the program size and polynomial in the number of program variables and can be implemented by using the same modular integer arithmetic as the target language to be analyzed. We also indicate how our analysis can be extended to deal with equality guards, even in an interprocedural setting.", "This paper proposes a new approach for deriving invariants that are systems of congruence equations where the modulo is a power of 2. The technique is an amalgam of SAT-solving, where a propositional formula is used to encode the semantics of a basic block, and abstraction, where the solutions to the formula are systematically combined and summarised as a system of congruence equations. The resulting technique is more precise than existing congruence analyses since a single optimal transfer function is derived for a basic block as a whole.", "Bitwise instructions, loops and indirect data access pose difficult challenges to the verification of microcontroller programs. In particular, it is necessary to show that an indirect write does not mutate registers, which are indirectly addressable. To prove this property, among others, this paper presents a relational binary-code semantics and details how this can be used to compute program invariants in terms of bit-level congruences. Moreover, it demonstrates how congruences can be combined with intervals to derive accurate ranges, as well as information about strided indirect memory accesses.", "", "Verification is usually performed on a high-level view of the software, either specification or program source code. However, in certain circumstances verification is more relevant when performed at the machine-code level. This paper focuses on automatic test data generation from a stand-alone executable. 
Low-level analysis is much more difficult than high-level analysis since even the control-flow graph is not available and bit-level instructions have to be modelled faithfully. The paper shows how ‘path-based’ structural test data generation can be adapted from structured language to machine code, using both state-of-the-art technologies and innovative techniques. The results have been implemented in a tool named OSMOSE and encouraging experiments have been conducted. Copyright © 2010 John Wiley & Sons, Ltd. (This paper is an extended version of results presented at ICST 2008 1.)" ] }
1207.4854
2950437111
We study the behavior of the posterior distribution in high-dimensional Bayesian Gaussian linear regression models having @math , with @math the number of predictors and @math the sample size. Our focus is on obtaining quantitative finite sample bounds ensuring sufficient posterior probability assigned in neighborhoods of the true regression coefficient vector, @math , with high probability. We assume that @math is approximately @math -sparse and obtain universal bounds, which provide insight into the role of the prior in controlling concentration of the posterior. Based on these finite sample bounds, we examine the implied asymptotic contraction rates for several examples showing that sparsely-structured and heavy-tail shrinkage priors exhibit rapid contraction rates. We also demonstrate that a stronger result holds for the Uniform-Gaussian prior [2], under which a binary vector of indicators ( @math ) is drawn from the uniform distribution on the set of binary sequences with exactly @math ones, and then each @math if @math and @math if @math . These types of finite sample bounds provide guidelines for designing and evaluating priors for high-dimensional problems.
In relation to a more Bayesian approach, the replica method has also been used to obtain results concerning the behavior of MAP estimators in the work of Rangan, Fletcher, and Goyal @cite_51 . Because a rigorous theory for the replica method has not yet been established, making the results of this work rigorous will require a leap forward in technology.
{ "cite_N": [ "@cite_51" ], "mid": [ "2090842051" ], "abstract": [ "The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an @math -dimensional vector “decouples” as @math scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdu. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability." ] }
1207.4854
2950437111
We study the behavior of the posterior distribution in high-dimensional Bayesian Gaussian linear regression models having @math , with @math the number of predictors and @math the sample size. Our focus is on obtaining quantitative finite sample bounds ensuring sufficient posterior probability assigned in neighborhoods of the true regression coefficient vector, @math , with high probability. We assume that @math is approximately @math -sparse and obtain universal bounds, which provide insight into the role of the prior in controlling concentration of the posterior. Based on these finite sample bounds, we examine the implied asymptotic contraction rates for several examples showing that sparsely-structured and heavy-tail shrinkage priors exhibit rapid contraction rates. We also demonstrate that a stronger result holds for the Uniform-Gaussian prior [2], under which a binary vector of indicators ( @math ) is drawn from the uniform distribution on the set of binary sequences with exactly @math ones, and then each @math if @math and @math if @math . These types of finite sample bounds provide guidelines for designing and evaluating priors for high-dimensional problems.
Along the lines of priors promoting sparsity, a strong theory for the normal means problem has been developed in @cite_44 under the assumption that @math . Their asymptotic theory relies upon comparison with a minimax framework. To imitate this theory, the most obvious approach would leverage the framework in @cite_12 , but this would only provide asymptotic guarantees given the current state of that theory. As such, we leave the investigation of this approach to the future.
{ "cite_N": [ "@cite_44", "@cite_12" ], "mid": [ "2147426468", "2175784154" ], "abstract": [ "We consider full Bayesian inference in the multivariate normal mean model in the situation that the mean vector is sparse. The prior distribution on the vector of means is constructed hierarchically by first choosing a collection of nonzero means and next a prior on the nonzero values.We consider the posterior distribution in the frequentist set-up that the observations are generated according to a fixed mean vector, and are interested in the posterior distribution of the number of nonzero components and the contraction of the posterior distribution to the true mean vector. We find various combinations of priors on the number of nonzero coefficients and on these coefficients that give desirable performance. We also find priors that give suboptimal convergence, for instance, Gaussian priors on the nonzero coefficients.We illustrate the results by simulations. © 2012 Institute of Mathematical Statistics.", "‘Approximate message passing’ algorithms proved to be extremely effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper we provide the first rigorous foundation to state evolution. We prove that indeed it holds asymptotically in the large system limit for sensing matrices with iid gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs." ] }
1207.4854
2950437111
We study the behavior of the posterior distribution in high-dimensional Bayesian Gaussian linear regression models having @math , with @math the number of predictors and @math the sample size. Our focus is on obtaining quantitative finite sample bounds ensuring sufficient posterior probability assigned in neighborhoods of the true regression coefficient vector, @math , with high probability. We assume that @math is approximately @math -sparse and obtain universal bounds, which provide insight into the role of the prior in controlling concentration of the posterior. Based on these finite sample bounds, we examine the implied asymptotic contraction rates for several examples showing that sparsely-structured and heavy-tail shrinkage priors exhibit rapid contraction rates. We also demonstrate that a stronger result holds for the Uniform-Gaussian prior [2], under which a binary vector of indicators ( @math ) is drawn from the uniform distribution on the set of binary sequences with exactly @math ones, and then each @math if @math and @math if @math . These types of finite sample bounds provide guidelines for designing and evaluating priors for high-dimensional problems.
Another closely related area of research involves the construction of hypothesis tests and confidence intervals based upon the LASSO @cite_0 @cite_18 @cite_29 . These methods are very recent, and we anticipate that they may provide a way forward for a sharper analysis of Bayesian model selection. The techniques and principles of those works are quite different from those used in this paper, so we leave this problem as a path of further inquiry.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_29" ], "mid": [ "2949901676", "2042542290", "2078411132" ], "abstract": [ "Fitting high-dimensional statistical models often requires the use of non-linear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely challenging to quantify the associated with a certain parameter estimate. Concretely, no commonly accepted procedure exists for computing classical measures of uncertainty and statistical significance as confidence intervals or @math -values for these models. We consider here high-dimensional linear regression problem, and propose an efficient algorithm for constructing confidence intervals and @math -values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that a certain parameter is vanishing, our method has nearly optimal power. Our approach is based on constructing a de-biased' version of regularized M-estimators. The new construction improves over recent work in the field in that it does not assume a special structure on the design matrix. We test our method on synthetic data and a high-throughput genomic data set about riboflavin production rate.", "We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters @math is comparable to or exceeds the sample size @math , a successful approach uses an @math -penalized least squares estimator, known as Lasso. Unfortunately, unlike for linear estimators (e.g., ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Very recently, a line of work javanmard2013hypothesis, confidenceJM, GBR-hypothesis has addressed this problem by constructing a debiased version of the Lasso estimator. 
In this paper, we study this approach for random design model, under the assumption that a good estimator exists for the precision matrix of the design. Our analysis improves over the state of the art in that it establishes nearly optimal testing power if the sample size @math asymptotically dominates @math , with @math being the sparsity level (number of non-zero coefficients). Earlier work obtains provable guarantees only for much larger sample size, namely it requires @math to asymptotically dominate @math . In particular, for random designs with a sparse precision matrix we show that an estimator thereof having the required properties can be computed efficiently. Finally, we evaluate this approach on synthetic data and compare it with earlier proposals.", "We propose a general method for constructing confidence intervals and statistical tests for single or low-dimensional components of a large parameter vector in a high-dimensional model. It can be easily adjusted for multiplicity taking dependence among tests into account. For linear models, our method is essentially the same as in Zhang and Zhang [J. R. Stat. Soc. Ser. B Stat. Methodol. 76 (2014) 217-242]: we analyze its asymptotic properties and establish its asymptotic optimality in terms of semiparametric efficiency. Our method naturally extends to generalized linear models with convex loss functions. We develop the corresponding theory which includes a careful analysis for Gaussian, sub-Gaussian and bounded correlated designs." ] }
1207.4678
2953130791
We observe that the technique of Markov contraction can be used to establish measure concentration for a broad class of non-contracting chains. In particular, geometric ergodicity provides a simple and versatile framework. This leads to a short, elementary proof of a general concentration inequality for Markov and hidden Markov chains (HMM), which supercedes some of the known results and easily extends to other processes such as Markov trees. As applications, we give a Dvoretzky-Kiefer-Wolfowitz-type inequality and a uniform Chernoff bound. All of our bounds are dimension-free and hold for countably infinite state spaces.
In parallel to the work on concentration of measure results for Markov chains @cite_10 @cite_1 @cite_34 @cite_7 @cite_2 @cite_3 , a body of independent results on Chernoff-type bounds for these processes grew. The papers @cite_21 @cite_25 @cite_29 @cite_15 @cite_4 played a founding role, and various extensions and refinements followed @cite_11 @cite_16 . In a remarkable recent development @cite_8 , optimal Chernoff-Hoeffding bounds are obtained based on the mixing time at a constant threshold. Concentration of Lipschitz functions of mixing sequences, with applications to the Kolmogorov-Smirnov statistic, was considered in @cite_32 . The paper @cite_14 examines the concentration of empirical distributions for non-independent sequences satisfying Poincaré or log-Sobolev inequalities.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_8", "@cite_29", "@cite_21", "@cite_1", "@cite_32", "@cite_3", "@cite_16", "@cite_2", "@cite_15", "@cite_34", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2165651967", "1985380836", "2009754940", "2101363765", "2139964991", "1980322272", "2950313068", "", "1972615020", "2124036135", "1998123258", "2001351653", "2028204099", "2133755908", "2027143166", "2004255137" ], "abstract": [ "The concentration of empirical measures is studied for dependent data, whose joint distribution satisfies Poincaré-type or logarithmic Sobolev inequalities. The general concentration results are then applied to spectral empirical distribution functions associated with high-dimensional random matrices.", "This paper develops bounds on the distribution function of the empirical mean for irreducible finite-state Markov chains. One approach, explored by D. Gillman, reduces this problem to bounding the largest eigenvalue of a perturbation of the transition matrix for the Markov chain. By using estimates on eigenvalues given in Kato's book ''Perturbation Theory for Linear Operators'', we simplify the proof of D. Gillman and extend it to non-reversible finite-state Markov chains and continuous time. We also set out another method, directly applicable to some general ergodic Markov kernels having a spectral gap.", "The martingale method is used to establish concentration inequalities for a class of dependent random sequences on a countable state space, with the constants in the inequalities expressed in terms of certain mixing coefficients. Along the way, bounds are obtained on martingale differences associated with the random sequences, which may be of independent interest. As applications of the main result, concentration inequalities are also derived for inhomogeneous Markov chains and hidden Markov chains, and an extremal property associated with their martingale difference bounds is established. 
This work complements and generalizes certain concentration inequalities obtained by Marton and Samson, while also providing a different proof of some known results.", "We prove the first Chernoff-Hoeffding bounds for general nonreversible finite-state Markov chains based on the standard L_1 (variation distance) mixing-time of the chain. Specifically, consider an ergodic Markov chain M and a weight function f: [n] -> [0,1] on the state space [n] of M with mean mu = E_ v = delta mu t ], is at most exp(-Omega(delta^2 mu t / T)) for 0 < delta <= 1. In fact, the bounds hold even if the weight functions f_i's for i in [t] are distinct, provided that all of them have the same mean mu. We also obtain a simplified proof for the Chernoff-Hoeffding bounds based on the spectral expansion lambda of M, which is the square root of the second largest eigenvalue (in absolute value) of M tilde M , where tilde M is the time-reversal Markov chain of M. We show that the probability Pr [ |X - mu t| >= delta mu t ] is at most exp(-Omega(delta^2 (1-lambda) mu t)) for 0 < delta <= 1. Both of our results extend to continuous-time Markov chains, and to the case where the walk starts from an arbitrary distribution x, at a price of a multiplicative factor depending on the distribution x in the concentration bounds.", "We consider a finite random walk on a weighted graph G; we show that the fraction of time spent in a set of vertices A converges to the stationary probability @math with error probability exponentially small in the length of the random walk and the square of the size of the deviation from @math . The exponential bound is in terms of the expansion of G and improves previous results of [D. Aldous, Probab. Engrg. Inform. Sci., 1 (1987), pp. 33--46], [L. Lovasz and M. Simonovits, Random Structures Algorithms , 4 (1993), pp. 359--412], [M. Ajtai, J. Komlos, and E. Szemeredi, Deterministic simulation of logspace, in Proc. 19th ACM Symp. on Theory of Computing, 1987]. 
We show that taking the sample average from one trajectory gives a more efficient estimate of @math than the standard method of generating independent sample points from several trajectories. Using this more efficient sampling method, we improve the algorithms of Jerrum and Sinclair for approximating the number of perfect matchings in a dense graph and for approximating the partition function of a ferromagnetic Ising system, and we give an efficient algorithm to estimate the entropy of a random walk on an unweighted graph.", "", "Using the renewal approach we prove exponential inequalities for additive functionals and empirical processes of ergodic Markov chains, thus obtaining counterparts of inequalities for sums of independent random variables. The inequalities do not require functions of the chain to be bounded and moreover all the involved constants are given by explicit formulas whenever the usual drift condition holds, which may be of interest in practical applications e.g. to MCMC algorithms.", "", "We prove concentration inequalities for some classes of Markov chains and Φ-mixing processes, with constants independent of the size of the sample, that extend the inequalities for product measures of Talagrand. The method is based on information inequalities put forward by Marton in case of contracting Markov chains. Using a simple duality argument on entropy, our results also include the family of logarithmic Sobolev inequalities for convex functions. Applications to bounds on supremum of dependent empirical processes complete this work.", "We prove tail estimates for variables of the form ∑_i f(X_i), where (X_i)_i is a sequence of states drawn from a reversible Markov chain, or, equivalently, from a random walk on an undirected graph. The estimates are in terms of the range of the function f, its variance, and the spectrum of the graph. 
The purpose of our estimates is to determine the number of chain walk samples which are required for approximating the expectation of a distribution on vertices of a graph, especially an expander. The estimates must therefore provide information for a fixed number of samples (as in Gillman's [4]) rather than just asymptotic information. Our proofs are more elementary than other proofs in the literature, and our results are sharper. We obtain Bernstein- and Bennett-type inequalities, as well as an inequality for sub-Gaussian variables.", "There is a simple inequality by Pinsker between variational distance and informational divergence of probability measures defined on arbitrary probability spaces. We shall consider probability measures on sequences taken from countable alphabets, and derive, from Pinsker's inequality, bounds on the d-distance by informational divergence. Such bounds can be used to prove the concentration of measure phenomenon for some nonproduct distributions.", "", "We obtain moment and Gaussian bounds for general coordinate-wise Lipschitz functions evaluated along the sample path of a Markov chain. We treat Markov chains on general (possibly unbounded) state spaces via a coupling method. If the first moment of the coupling time exists, then we obtain a variance inequality. If a moment of order @math @math of the coupling time exists, then depending on the behavior of the stationary distribution, we obtain higher moment bounds. This immediately implies polynomial concentration inequalities. In the case that a moment of order @math is finite, uniformly in the starting point of the coupling, we obtain a Gaussian bound. 
We illustrate the general results with house of cards processes, in which both uniform and non-uniform behavior of moments of the coupling time can occur.", "We present a tail inequality for suprema of empirical processes generated by variables with finite @math norms and apply it to some geometrically ergodic Markov chains to derive similar estimates for empirical processes of such chains, generated by bounded functions. We also obtain a bounded difference inequality for symmetric statistics of such Markov chains.", "Bounds are given for an irreducible Markov chain on the probability that the time average of a functional on the state space exceeds its stationary expectation, without assuming reversibility. The bounds are in terms of the singular values of the discrete generator.", "We build optimal exponential bounds for the probabilities of large deviations of sums k=1 ^nf(X_k) where (X_k) is a finite reversible Markov chain and f is an arbitrary bounded function. These bounds depend only on the stationary mean E_ f, the end-points of the support of f, the sample size n and the second largest eigenvalue of the transition matrix." ] }
1207.4525
2951938755
The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm which leverages both the structural information from the relationship graph as well as flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world's largest knowledge bases with high precision. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches both in accuracy and efficiency.
The SiGMa algorithm is related to the collective entity resolution approach of Bhattacharya and Getoor @cite_1 , which proposed a greedy agglomerative clustering algorithm to cluster entities based on previous decisions. Their approach could handle constraints on the clustering, including a @math matching constraint in theory, though this was not implemented. A scalable solution for collective entity resolution was proposed recently in @cite_28 , which treats sophisticated machine learning approaches to entity resolution as black boxes (see references therein) but runs them on small neighborhoods and combines their output using a message-passing scheme. They do not, however, consider exploiting a @math matching constraint, as is the case for most entity resolution and record linkage work.
{ "cite_N": [ "@cite_28", "@cite_1" ], "mid": [ "1982287794", "2148019918" ], "abstract": [ "There have been several recent advancements in Machine Learning community on the Entity Matching (EM) problem. However, their lack of scalability has prevented them from being applied in practical settings on large real-life datasets. Towards this end, we propose a principled framework to scale any generic EM algorithm. Our technique consists of running multiple instances of the EM algorithm on small neighborhoods of the data and passing messages across neighborhoods to construct a global solution. We prove formal properties of our framework and experimentally demonstrate the effectiveness of our approach in scaling EM algorithms.", "Many databases contain uncertain and imprecise references to real-world entities. The absence of identifiers for the underlying entities often results in a database which contains multiple references to the same entity. This can lead not only to data redundancy, but also inaccuracies in query processing and knowledge extraction. These problems can be alleviated through the use of entity resolution. Entity resolution involves discovering the underlying entities and mapping each database reference to these entities. Traditionally, entities are resolved using pairwise similarity over the attributes of references. However, there is often additional relational information in the data. Specifically, references to different entities may cooccur. In these cases, collective entity resolution, in which entities for cooccurring references are determined jointly rather than independently, can improve entity resolution accuracy. We propose a novel relational clustering algorithm that uses both attribute and relational information for determining the underlying domain entities, and we give an efficient implementation. We investigate the impact that different relational similarity measures have on entity resolution quality. 
We evaluate our collective entity resolution algorithm on multiple real-world databases. We show that it improves entity resolution performance over both attribute-based baselines and over algorithms that consider relational information but do not resolve entities collectively. In addition, we perform detailed experiments on synthetically generated data to identify data characteristics that favor collective relational resolution over purely attribute-based algorithms." ] }
1207.4525
2951938755
The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm which leverages both the structural information from the relationship graph as well as flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world's largest knowledge bases with high precision. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches both in accuracy and efficiency.
The idea of propagating information on a relationship graph has been used in several other approaches to ontology matching @cite_22 @cite_11 , though none were scalable to the size of knowledge bases that we considered. An analogous 'fire propagation' algorithm has been used to align social network graphs in @cite_6 , though with a very different objective function (they define weights in each graph and want to align edges which have similar weights). The heuristic of propagating information on a relationship graph is related to a well-known heuristic for solving Constraint Satisfaction Problems known as constraint propagation @cite_3 . Ehrig and Staab @cite_4 mentioned several heuristics to reduce the number of candidates to consider in ontology alignment, including one similar to compatible-neighbors , though they tested their approach only on a few hundred instances. Finally, we mention that Peralta @cite_27 aligned the movie database MovieLens to IMDb through a combination of manual cleaning steps with some automation. SiGMa can be considered as an alternative which does not require manual intervention apart from specifying the score function to use.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_3", "@cite_6", "@cite_27", "@cite_11" ], "mid": [ "1969713547", "103919619", "", "2951118224", "", "1660209954" ], "abstract": [ "Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.", "Ontology matching is an important task to achieve interoperation between semantic web applications using different ontologies. Structural similarity plays a central role in ontology matching. However, the existing approaches rely heavily on lexical similarity, and they mix up lexical similarity with structural similarity. In this paper, we present a graph matching approach for ontologies, called GMO. It uses bipartite graphs to represent ontologies, and measures the structural similarity between graphs by a new measurement. Furthermore, GMO can take a set of matched pairs, which are typically previously found by other approaches, as external input in matching process. Our implementation and experimental results are given to demonstrate the effectiveness of the graph matching approach.", "", "This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. 
The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated annealing-based weighted graph matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction---the latter is required to achieve good performance on the portion of the test set not de-anonymized---for example by training the predictor on the de-anonymized portion of the test set, and combining probabilistic predictions from de-anonymization and link prediction.", "", "Ontology mapping is to find semantic correspondences between similar elements of different ontologies. It is critical to achieve semantic interoperability in the WWW. This paper proposes a new generic and scalable ontology mapping approach based on propagation theory, information retrieval technique and artificial intelligence model. The approach utilizes both linguistic and structural information, measures the similarity of different elements of ontologies in a vector space model, and deals with constraints using the interactive activation network. The results of pilot study, the PRIOR, are promising and scalable." ] }
1207.3532
1496716358
Massively parallel DNA sequencing technologies are revolutionizing genomics research. Billions of short reads generated at low costs can be assembled for reconstructing the whole genomes. Unfortunately, the large memory footprint of the existing de novo assembly algorithms makes it challenging to get the assembly done for higher eukaryotes like mammals. In this work, we investigate the memory issue of constructing de Bruijn graph, a core task in leading assembly algorithms, which often consumes several hundreds of gigabytes memory for large genomes. We propose a disk-based partition method, called Minimum Substring Partitioning (MSP), to complete the task using less than 10 gigabytes memory, without runtime slowdown. MSP breaks the short reads into multiple small disjoint partitions so that each partition can be loaded into memory, processed individually and later merged with others to form a de Bruijn graph. By leveraging the overlaps among the k-mers (substring of length k), MSP achieves astonishing compression ratio: The total size of partitions is reduced from @math to @math , where @math is the size of the short read database, and @math is the length of a @math -mer. Experimental results show that our method can build de Bruijn graphs using a commodity computer for any large-volume sequence dataset.
The de Bruijn graph construction problem is related to duplicate detection. Traditional duplicate detection algorithms perform a merge sort to find duplicates, e.g., Bitton and DeWitt @cite_0 . Teuhola and Wegner @cite_17 proposed an @math extra space, linear time algorithm to detect and delete duplicates from a dataset. Teuhola @cite_20 introduced an external duplicate deletion algorithm that makes extensive use of hashing. It was reported that hash-based approaches are much faster than sort-merge in most cases. Bucket sort @cite_30 can be adapted to these techniques; it works by partitioning an array into a number of buckets, each of which is then sorted individually. By replacing sorting with hashing, it can solve the duplicate detection problem as well. Duplicate detection has also been examined in different contexts, e.g., streams @cite_2 and text @cite_5 . A survey of general duplicate record detection solutions was given by Elmagarmid, Ipeirotis and Verykios @cite_18 .
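The hash-based, partitioned duplicate deletion idea described above can be sketched in a few lines. This is a minimal in-memory toy, not any of the cited external algorithms: in those, each bucket would be spilled to disk and deduplicated separately, and `num_buckets` is an arbitrary assumption of ours.

```python
from collections import defaultdict

def dedupe_hash_partition(records, num_buckets=16):
    """Delete duplicates by hashing records into buckets, then
    deduplicating each bucket independently. Duplicates always land
    in the same bucket, so buckets can be processed one at a time
    (out-of-core in a disk-based implementation)."""
    buckets = defaultdict(list)
    for r in records:
        buckets[hash(r) % num_buckets].append(r)
    result = []
    for bucket in buckets.values():
        seen = set()
        for r in bucket:
            if r not in seen:  # first occurrence within this bucket
                seen.add(r)
                result.append(r)
    return result
```

Replacing the per-bucket set with an in-place sort of each bucket would recover the sort-merge variant the paragraph compares against.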
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_0", "@cite_2", "@cite_5", "@cite_20", "@cite_17" ], "mid": [ "73629738", "2108991785", "2020191321", "2080330461", "2164456230", "", "2053127227" ], "abstract": [ "", "Often, in the real world, entities have two or more representations in databases. Duplicate records do not share a common key and or they contain errors that make duplicate matching a difficult task. Errors are introduced as the result of transcription errors, incomplete information, lack of standard formats, or any combination of these factors. In this paper, we present a thorough analysis of the literature on duplicate record detection. We cover similarity metrics that are commonly used to detect similar field entries, and we present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database. We also cover multiple techniques for improving the efficiency and scalability of approximate duplicate detection algorithms. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area", "The issue of duplicate elimination for large data files in which many occurrences of the same record may appear is addressed. A comprehensive cost analysis of the duplicate elimination operation is presented. This analysis is based on a combinatorial model developed for estimating the size of intermediate runs produced by a modified merge-sort procedure. The performance of this modified merge-sort procedure is demonstrated to be significantly superior to the standard duplicate elimination technique of sorting followed by a sequential pass to locate duplicate records. The results can also be used to provide critical input to a query optimizer in a relational database system.", "We consider the problem of finding duplicates in data streams. Duplicate detection in data streams is utilized in various applications including fraud detection. 
We develop a solution based on Bloom Filters [9], and discuss the space and time requirements for running the proposed algorithm in both the contexts of sliding, and landmark stream windows. We run a comprehensive set of experiments, using both real and synthetic click streams, to evaluate the performance of the proposed solution. The results demonstrate that the proposed solution yields extremely low error rates.", "The problem of identifying approximately duplicate records in databases is an essential step for data cleaning and data integration processes. Most existing approaches have relied on generic or manually tuned distance metrics for estimating the similarity of potential duplicates. In this paper, we present a framework for improving duplicate detection using trainable measures of textual similarity. We propose to employ learnable text distance functions for each database field, and show that such measures are capable of adapting to the specific notion of similarity that is appropriate for the field's domain. We present two learnable text similarity measures suitable for this task: an extended variant of learnable string edit distance, and a novel vector-space based measure that employs a Support Vector Machine (SVM) for training. Experimental results on a range of datasets show that our framework can improve duplicate detection accuracy over traditional techniques.", "", "The common method used to delete duplicates in a file is to sort the records. Duplicates may then be deleted either on-the-fly or in a second pass. Here, we present a new method based on hashing. Multiple passes are made over the file and detected duplicates move in place to the tail end of the file. The algorithm requires, on the average, only linear time and works with 0(1) extra space" ] }
1207.3532
1496716358
Massively parallel DNA sequencing technologies are revolutionizing genomics research. Billions of short reads generated at low costs can be assembled for reconstructing the whole genomes. Unfortunately, the large memory footprint of the existing de novo assembly algorithms makes it challenging to get the assembly done for higher eukaryotes like mammals. In this work, we investigate the memory issue of constructing de Bruijn graph, a core task in leading assembly algorithms, which often consumes several hundreds of gigabytes memory for large genomes. We propose a disk-based partition method, called Minimum Substring Partitioning (MSP), to complete the task using less than 10 gigabytes memory, without runtime slowdown. MSP breaks the short reads into multiple small disjoint partitions so that each partition can be loaded into memory, processed individually and later merged with others to form a de Bruijn graph. By leveraging the overlaps among the k-mers (substring of length k), MSP achieves astonishing compression ratio: The total size of partitions is reduced from @math to @math , where @math is the size of the short read database, and @math is the length of a @math -mer. Experimental results show that our method can build de Bruijn graphs using a commodity computer for any large-volume sequence dataset.
The concept of a minimum substring was introduced in @cite_25 for memory-efficient sequence comparison. Our work develops minimum-substring-based partitioning and its use in sequence assembly. We also theoretically analyze several important properties of minimum substring partitioning.
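The core idea can be sketched briefly: the minimum p-substring of a k-mer (its "minimizer") is the lexicographically smallest length-p substring, and because adjacent k-mers of a read overlap in k-1 characters they frequently share the same minimizer, so grouping k-mers by minimizer produces few, coherent partitions. This is an illustrative toy under our own naming, not the paper's implementation:

```python
def minimum_substring(kmer, p):
    """Lexicographically smallest length-p substring of a k-mer."""
    return min(kmer[i:i + p] for i in range(len(kmer) - p + 1))

def partition_kmers(read, k, p):
    """Group the k-mers of a read by their minimum p-substring.
    Overlapping k-mers often share a minimizer, which is what lets
    MSP compress the total partition size from O(kN) toward O(N)."""
    parts = {}
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        parts.setdefault(minimum_substring(kmer, p), []).append(kmer)
    return parts
```

For example, on the read `GATTACA` with k=4 and p=2, the four k-mers collapse into just two partitions, keyed by `AT` and `AC`.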
{ "cite_N": [ "@cite_25" ], "mid": [ "2144560237" ], "abstract": [ "Motivation: Comparison of nucleic acid and protein sequences is a fundamental tool of modern bioinformatics. A dominant method of such string matching is the 'seed-and-extend' approach, in which occurrences of short subsequences called 'seeds' are used to search for potentially longer matches in a large database of sequences. Each such potential match is then checked to see if it extends beyond the seed. To be effective, the seed-and-extend approach needs to catalogue seeds from virtually every substring in the database of search strings. Projects such as mammalian genome assemblies and large-scale protein matching, however, have such large sequence databases that the resulting list of seeds cannot be stored in RAM on a single computer. This significantly slows the matching process. Results: We present a simple and elegant method in which only a small fraction of seeds, called 'minimizers', needs to be stored. Using minimizers can speed up string-matching computations by a large factor while missing only a small fraction of the matches found using all seeds." ] }
1207.3682
1862586238
Two-sided matchings are an important theoretical tool used to model markets and social interactions. In many real life problems the utility of an agent is influenced not only by their own choices, but also by the choices that other agents make. Such an influence is called an externality. Whereas fully expressive representations of externalities in matchings require exponential space, in this paper we propose a compact model of externalities, in which the influence of a match on each agent is computed additively. In this framework, we analyze many-to-many and one-to-one matchings under neutral, optimistic, and pessimistic behaviour, and provide both computational hardness results and polynomial-time algorithms for computing stable outcomes.
Klaus @cite_14 consider pairwise and setwise stability, in both weak and strong variants, in many-to-many matching markets. Echenique @cite_1 study several solution concepts, such as the setwise-stable set, the core, and the bargaining set, in many-to-many matchings. These models consider neither externalities nor the boundedness of the agents, as they use exponential preference profiles. Externalities in the classical marriage problem were introduced by Sasaki and Toda @cite_10 , and in one-to-many models by Dutta and Masso @cite_16 , both for complete preference profiles. Hafalir @cite_8 studied externalities in marriage problems. Weighted preferences were introduced in matchings by Pini @cite_6 , in which the agents rank each other using numerical values. However, they study solution concepts different from ours, such as @math -stability and link-additive stability, and do not consider externalities.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_6", "@cite_16", "@cite_10" ], "mid": [ "2144354978", "2130543692", "2134207546", "2400632488", "2000982483", "1974473105" ], "abstract": [ "We consider several notions of setwise stability for many-to-many matching markets with contracts and provide an analysis of the relations between the resulting sets of stable allocations for general, substitutable, and strongly substitutable preferences. Apart from obtaining \"set inclusion results\" on all three domains, we introduce weak setwise stability as a new stability concept and prove that for substitutable preferences the set of pairwise stable matchings is nonempty and coincides with the set of weakly setwise stable matchings. For strongly substitutable preferences the set of pairwise stable matchings coincides with the set of setwise stable matchings.", "In many matching problems, it is natural to consider that agents may have preferences not only over the set of potential partners but also over what other matches occur. Once such externalities are considered, the set of stable matchings will depend on what agents believe will happen if they deviate. In this paper, we introduce endogenously generated beliefs (which depend on the preferences). We introduce a particular notion of endogenous beliefs, called sophisticated expectations, and show that with these beliefs, stable matchings always exist.", "We develop a theory of stability in many-to-many matching markets. We give conditions under which the setwise-stable set, a core-like concept, is nonempty and can be approached through an algorithm. The usual core may be empty. The setwise-stable set coincides with the pairwise-stable set and with the predictions of a non-cooperative bargaining model. The setwise-stable set possesses the conflict coincidence of interest properties from many-to-one, and one-to-one models. The theory parallels the standard theory of stability for many-to-one, and one-to-one, models. 
We provide results for a number of core-like solutions, besides the setwise-stable set.", "The stable marriage problem is a well-known problem of matching men to women so that no man and woman, who are not married to each other, both prefer each other. Such a problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or more generally to any two-sided market. In the classical stable marriage problem, both men and women express a strict preference order over the members of the other sex, in a qualitative way. Here we consider stable marriage problems with weighted preferences: each man (resp., woman) provides a score for each woman (resp., man). Such problems are more expressive than the classical stable marriage problems. Moreover, in some real-life situations it is more natural to express scores (to model, for example, profits or costs) rather than a qualitative preference ordering. In this context, we define new notions of stability and optimality, and we provide algorithms to find marriages which are stable and or optimal according to these notions. While expressivity greatly increases by adopting weighted preferences, we show that in most cases the desired solutions can be found by adapting existing algorithms for the classical stable marriage problem.", "In the standard two-sided matching models, agents on one side of the market (the institutions) can each be matched to a set of agents ( the individuals) on the other side of the market, and the individuals only have preferences defined over institutions ti which they can be matched. We explicitly study the consequences for stability when the composition of one's coworkers or colleagues can affect the preferences over institutions.", "Abstract In this paper, we develop a model of two-sided matching markets with externalities. 
A new concept of stability of matchings is proposed and it is shown to be the unique one that ensures the general existence. Moreover, it is demonstrated that our stability does not contradict Pareto optimality. Some extensions of the model are also discussed. Journal of Economic Literature Classification Numbers: C71, C78, D62." ] }
1207.2776
2121641100
In downlink multi-antenna systems with many users, the multiplexing gain is strictly limited by the number of transmit antennas N and the use of these antennas. Assuming that the total number of receive antennas at the multi-antenna users is much larger than N, the maximal multiplexing gain can be achieved with many different transmission reception strategies. For example, the excess number of receive antennas can be utilized to schedule users with effective channels that are near-orthogonal, for multi-stream multiplexing to users with well-conditioned channels, and or to enable interference-aware receive combining. In this paper, we try to answer the question if the N data streams should be divided among few users (many streams per user) or many users (few streams per user, enabling receive combining). Analytic results are derived to show how user selection, spatial correlation, heterogeneous user conditions, and imperfect channel acquisition (quantization or estimation errors) affect the performance when sending the maximal number of streams or one stream per scheduled user-the two extremes in data stream allocation. While contradicting observations on this topic have been reported in prior works, we show that selecting many users and allocating one stream per user (i.e., exploiting receive combining) is the best candidate under realistic conditions. This is explained by the provably stronger resilience towards spatial correlation and the larger benefit from multi-user diversity. This fundamental result has positive implications for the design of downlink systems as it reduces the hardware requirements at the user devices and simplifies the throughput optimization.
The sum-rate maximization problem is nonconvex and combinatorial @cite_20 , thus only suboptimal strategies are feasible in practice. Such low-complexity algorithms have been proposed in @cite_6 @cite_7 @cite_17 @cite_27 , among others, which successively allocate data streams to users in a greedy manner. Simulations have indicated that fewer than @math streams should be used when @math and @math are small, and that spatial correlation makes it beneficial to divide the streams among many users. Simulations in @cite_7 indicate that the probability of allocating more than one stream per user is small when @math grows large, but @cite_7 only considers users with homogeneous channel conditions, and all the aforementioned papers assume perfect CSI.
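The successive greedy idea can be illustrated with a generic sketch (our own simplification, not a reimplementation of any cited algorithm): repeatedly schedule the user whose channel vector has the largest component orthogonal to the span of the channels already scheduled, allocating one stream per selected user.

```python
import numpy as np

def greedy_user_selection(H, N):
    """Toy greedy scheduler: at each step, add the unscheduled user
    whose channel has the largest residual after projecting out the
    directions already in use, until N streams are allocated or no
    user adds a new spatial dimension."""
    selected, basis = [], []
    for _ in range(min(N, len(H))):
        best, best_gain, best_dir = None, 0.0, None
        for u, h in enumerate(H):
            if u in selected:
                continue
            r = np.asarray(h, dtype=float).copy()
            for b in basis:  # Gram-Schmidt step: remove used directions
                r -= (r @ b) * b
            gain = np.linalg.norm(r)
            if gain > best_gain:
                best, best_gain, best_dir = u, gain, r
        if best is None or best_gain < 1e-12:
            break  # remaining channels lie in the span of the basis
        selected.append(best)
        basis.append(best_dir / best_gain)
    return selected
```

With two transmit dimensions, at most two users are scheduled no matter how large N is, which mirrors the hard multiplexing-gain limit discussed in the abstract.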
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_27", "@cite_20", "@cite_17" ], "mid": [ "2123836500", "1581039813", "2122991517", "2144000019", "2117792619" ], "abstract": [ "We consider the MIMO broadcast channel (MIMO-BC) where an array equipped with M antennas transmits distinct information to K users, each equipped with N antennas. We propose a linear precoding technique, called multiuser eigenmode transmission (MET), based on the block diagonalization precoding technique. MET addresses the shortcomings of previous ZF-based beamformers by transmitting to each user on one or more eigenmodes chosen using a greedy algorithm. We consider both the typical sum-power constraint (SPC), and a per-antenna power constraint (PAPC) motivated by array architectures where antennas are powered by separate amplifiers and are either co-located or spatially separated. Numerical results show that the proposed MET technique outperforms previous linear techniques with both SPC and PAPC. Asymptotically as the number of user K increases without bound, we show that block diagonalization with receive antenna selection under PAPC and SPC are asymptotically optimal.", "Block diagonalization (BD) combined with coordinated transmitter-receiver processing and scheduling between users is a simple and straightforward linear method to utilize the available spatial degrees of freedom in downlink multi-user multiple-input multiple-output (MIMO) channel for increasing the system throughput. In this paper, an efficient scheduling algorithm is proposed to maximize the downlink spectral efficiency of the BD method for any number of users and antennas at both the transmitter and the receivers. The performance is compared to the spectral efficiency available with several other scheduling algorithms and to the sum-rate capacity. It is shown that the performance of the BD method with the proposed scheduling approaches the sum rate capacity at high signal-to-noise ratio (SNR) as the number of users increases. 
It is also noticed that due to the inherent noise amplification problem with linear BD method, the maximum spectral efficiency is often achieved by transmitting to less users beams than the spatial dimensions available, especially, at low SNR region with a high number of transmit antennas and with a low number of users", "Achieving the boundary of the capacity region in the multiple-input multiple-output (MIMO) broadcast channel requires the use of dirty paper coding (DPC). As practical nearly optimum implementations of DPC are computationally complex, purely linear approaches are often used instead. However, in this case, the problem of maximizing a weighted sum rate constitutes a nonconvex and, in most cases, also a combinatorial optimization problem. In this paper, we present two heuristic nearly optimum algorithms with reduced computational complexity. For this purpose, a lower bound for the weighted sum rate under linear zero-forcing constraints is used. Based on this bound, both greedy algorithms successively allocate data streams to users. In each step, the user is determined that is given an additional data stream such that the increase in weighted sum rate becomes maximum. Thereby, the data stream allocations and filters obtained in the previous steps are kept fixed and only the filter corresponding to the additional data stream is optimized. The first algorithm determines the receive and transmit filters directly in the downlink. The other algorithm operates in the dual uplink, from which the downlink transmit and receive filters can be obtained via the general rate duality leading to nonzero-forcing in the downlink. Simulation results reveal marginal performance losses compared to more complex algorithms.", "Consider a communication system whereby multiple users share a common frequency band and must choose their transmit power spectral densities dynamically in response to physical channel conditions. 
Due to co-channel interference, the achievable data rate of each user depends on not only the power spectral density of its own, but also those of others in the system. Given any channel condition and assuming Gaussian signaling, we consider the problem to jointly determine all users' power spectral densities so as to maximize a system-wide utility function (e.g., weighted sum-rate of all users), subject to individual power constraints. For the discretized version of this nonconvex problem, we characterize its computational complexity by establishing the NP-hardness under various practical settings, and identify subclasses of the problem that are solvable in polynomial time. Moreover, we consider the Lagrangian dual relaxation of this nonconvex problem. Using the Lyapunov theorem in functional analysis, we rigorously prove a result first discovered by Yu and Lui (2006) that there is a zero duality gap for the continuous (Lebesgue integral) formulation. Moreover, we show that the duality gap for the discrete formulation vanishes asymptotically as the size of discretization decreases to zero.", "A low-complexity multimode transmission technique for downlink multiuser multiple-input-multiple-output (MIMO) systems with block diagonalization (BD) is proposed. The proposed technique adaptively configures the number of data streams for each user by adjusting its number of active receive antenna and switching between single-stream beamforming and multistream spatial multiplexing, as a means to exploit the multimode switching diversity. We consider a highly loaded system where there are a large number of users, hence a subset of users need to be selected. Joint user and antenna selection has been proposed as a multiuser multimode switching technique, where the optimal subset of receive antennas and users are chosen to maximize the sum throughput. The brute-force search, however, is prohibitively complicated. 
In this paper, two low-complexity near-optimal user/antenna selection algorithms are developed. The first algorithm aims at maximizing a capacity lower bound, derived in terms of the sum Frobenius norm of the channel, while the second algorithm greedily maximizes the sum capacity. We analytically evaluate the complexity of the proposed algorithms and show that it is orders of magnitude lower than that of the exhaustive search. Simulation results demonstrate that the proposed algorithms achieve up to 98% of the sum throughput of the exhaustive search, for most system configurations, while the complexity is substantially reduced." ] }
1207.2776
2121641100
In downlink multi-antenna systems with many users, the multiplexing gain is strictly limited by the number of transmit antennas N and the use of these antennas. Assuming that the total number of receive antennas at the multi-antenna users is much larger than N, the maximal multiplexing gain can be achieved with many different transmission/reception strategies. For example, the excess number of receive antennas can be utilized to schedule users with effective channels that are near-orthogonal, for multi-stream multiplexing to users with well-conditioned channels, and/or to enable interference-aware receive combining. In this paper, we try to answer the question if the N data streams should be divided among few users (many streams per user) or many users (few streams per user, enabling receive combining). Analytic results are derived to show how user selection, spatial correlation, heterogeneous user conditions, and imperfect channel acquisition (quantization or estimation errors) affect the performance when sending the maximal number of streams or one stream per scheduled user, the two extremes in data stream allocation. While contradicting observations on this topic have been reported in prior works, we show that selecting many users and allocating one stream per user (i.e., exploiting receive combining) is the best candidate under realistic conditions. This is explained by the provably stronger resilience towards spatial correlation and the larger benefit from multi-user diversity. This fundamental result has positive implications for the design of downlink systems as it reduces the hardware requirements at the user devices and simplifies the throughput optimization.
The authors of @cite_38 claim that transmitting at most one stream per user is desirable when there are many users in the system. They justify this statement by using asymptotic results from @cite_36 where @math . This argument ignores some important issues: 1) asymptotic optimality can also be proven with multiple streams per user; the uplink analysis in @cite_39 shows that a non-zero (but bounded) number of users can use multiple streams, and the well-established uplink-downlink duality makes this result applicable also in our downlink scenario; 2) the performance at practical values of @math is unknown; and 3) the analysis implies an unbounded asymptotic multi-user diversity gain, which is a modeling artifact of fading channels @cite_8 .
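The unbounded multi-user diversity gain mentioned above can be made concrete with a small stdlib-only simulation. This is our own illustrative sketch (the function names are not from any cited paper): under i.i.d. Rayleigh fading, the strongest of K users has channel gain distributed as the maximum of K Exp(1) variables, whose mean is the K-th harmonic number H_K, which grows like ln K without bound.

```python
# Illustrative sketch (not from the paper): the asymptotic multi-user diversity
# gain of opportunistic scheduling is unbounded under i.i.d. Rayleigh fading,
# because the best of K users has expected channel gain H_K ~ ln K.
import math
import random

def expected_best_gain(num_users, trials=4000, seed=7):
    """Monte-Carlo estimate of E[max_k |h_k|^2] when each |h_k|^2 is Exp(1)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        acc += max(rng.expovariate(1.0) for _ in range(num_users))
    return acc / trials

def harmonic(k):
    """Exact value: E[max of k i.i.d. Exp(1)] equals the k-th harmonic number."""
    return sum(1.0 / i for i in range(1, k + 1))
```

Here harmonic(10) is about 2.93 while harmonic(1000) is about 7.49, so the modeled gain keeps growing with the user population, which is exactly the modeling artifact of idealized fading channels that @cite_8 cautions about.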
{ "cite_N": [ "@cite_36", "@cite_38", "@cite_8", "@cite_39" ], "mid": [ "2105854862", "2131262740", "2148454256", "2137607133" ], "abstract": [ "In this paper, a downlink communication system, in which a base station (BS) equipped with antennas communicates with users each equipped with receive antennas, is considered. An efficient suboptimum algorithm is proposed for selecting a set of users in order to maximize the sum-rate throughput of the system, in a Rayleigh-fading environment. For the asymptotic case when tends to infinity, the necessary and sufficient conditions in order to achieve the maximum sum-rate throughput, such that the difference between the achievable sum-rate and the maximum value approaches zero, is derived. The complexity of our algorithm is investigated in terms of the required amount of feedback from the users to the BS, as well as the number of searches required for selecting the users. It is shown that the proposed method is capable of achieving a large portion of the sum-rate capacity, with a very low complexity.", "We consider a MIMO broadcast channel where both the transmitter and receivers are equipped with multiple antennas. Channel state information at the transmitter (CSIT) is obtained through limited (i.e., finite-bandwidth) feedback from the receivers that index a set of precoding vectors contained in a predefined codebook. We propose a novel transceiver architecture based on zero-forcing beamforming and linear receiver combining. The receiver combining and quantization for CSIT feedback are jointly designed in order to maximize the expected SINR for each user. We provide an analytic characterization of the achievable throughput in the case of many users and show how additional receive antennas or higher multiuser diversity can reduce the required feedback rate to achieve a target throughput.We also propose a design methodology for generating codebooks tailored for arbitrary spatial correlation statistics. 
The resulting codebooks have a tree structure that can be utilized in time-correlated MIMO channels to significantly reduce feedback overhead. Simulation results show the effectiveness of the overall transceiver design strategy and codebook design methodology compared to prior techniques in a variety of correlation environments.", "This article originates from a panel with the above title, held at IEEE VTC Spring 2009, in which the authors took part. The enthusiastic response it received prompted us to discuss for a wider audience whether research at the physical layer (PHY) is still relevant to the field of wireless communications. Using cellular systems as the axis of our exposition, we exemplify areas where PHY research has indeed hit a performance wall and where any improvements are expected to be marginal. We then discuss whether the research directions taken in the past have always been the right choice and how lessons learned could influence future policy decisions. Several of the raised issues are subsequently discussed in greater details, e.g., the growing divergence between academia and industry. With this argumentation at hand, we identify areas that are either under-developed or likely to be of impact in coming years - hence corroborating the relevance and importance of PHY research.", "This paper considers the optimal uplink transmission strategy that achieves the sum-capacity in a multiuser multi-antenna wireless system. Assuming an independent identically distributed block-fading model with transmitter channel side information, beamforming for each remote user is shown to be necessary for achieving sum-capacity when there is a large number of users in the system. This result stands even in the case where each user is equipped with a large number of transmit antennas, and it can be readily extended to channels with intersymbol interference if an orthogonal frequency division multiplexing modulation is assumed. 
This result is obtained by deriving a rank bound on the transmit covariance matrices, and it suggests that all users should cooperate by each user using only a small portion of available dimensions. Based on the result, a suboptimal transmit scheme is proposed for the situation where only partial channel side information is available at each transmitter. Simulations show that the suboptimal scheme is not only able to achieve a sum rate very close to the capacity, but also insensitive to channel estimation error." ] }
1207.2776
2121641100
In downlink multi-antenna systems with many users, the multiplexing gain is strictly limited by the number of transmit antennas N and the use of these antennas. Assuming that the total number of receive antennas at the multi-antenna users is much larger than N, the maximal multiplexing gain can be achieved with many different transmission/reception strategies. For example, the excess number of receive antennas can be utilized to schedule users with effective channels that are near-orthogonal, for multi-stream multiplexing to users with well-conditioned channels, and/or to enable interference-aware receive combining. In this paper, we try to answer the question if the N data streams should be divided among few users (many streams per user) or many users (few streams per user, enabling receive combining). Analytic results are derived to show how user selection, spatial correlation, heterogeneous user conditions, and imperfect channel acquisition (quantization or estimation errors) affect the performance when sending the maximal number of streams or one stream per scheduled user, the two extremes in data stream allocation. While contradicting observations on this topic have been reported in prior works, we show that selecting many users and allocating one stream per user (i.e., exploiting receive combining) is the best candidate under realistic conditions. This is explained by the provably stronger resilience towards spatial correlation and the larger benefit from multi-user diversity. This fundamental result has positive implications for the design of downlink systems as it reduces the hardware requirements at the user devices and simplifies the throughput optimization.
The authors of @cite_21 @cite_0 arrive at a different conclusion when they compare BD (which selects @math users and sends @math streams/user) and ZFC (which selects @math users and sends one stream/user) under quantized CSI. Their simulations reveal a distinct advantage of BD (i.e., multi-stream multiplexing), but are limited to uncorrelated channels and include neither user selection nor interference rejection. We show that their results are misleading, because single-user transmission greatly outperforms both BD and ZFC in the scenario that they simulate.
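The single-stream zero-forcing strategy under discussion can be sketched in a few lines. The following is a hypothetical pure-Python illustration, assuming real-valued channels and equal power per stream for brevity; the function names and these simplifications are ours, not from @cite_21 or @cite_0.

```python
# Sketch of zero-forcing (ZF) transmit beamforming with one stream per user.
# Assumes real-valued channels and equal power allocation for simplicity;
# all names are illustrative, this is not any cited paper's implementation.
import math

def mat_inv(a):
    """Invert a small nonsingular square matrix by Gauss-Jordan elimination."""
    n = len(a)
    m = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        d = m[col][col]
        m[col] = [x / d for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

def zf_sum_rate(h, snr):
    """h: K rows (users) x N columns (transmit antennas), K <= N.
    The precoder W = H^T (H H^T)^{-1} nulls all inter-user interference, so
    user k's SINR is snr / (K * ||w_k||^2) under equal per-stream power."""
    k_users, n_tx = len(h), len(h[0])
    gram = [[sum(h[i][n] * h[j][n] for n in range(n_tx))
             for j in range(k_users)] for i in range(k_users)]
    gram_inv = mat_inv(gram)
    rate = 0.0
    for k in range(k_users):
        # w_k is column k of H^T * gram_inv; interference to users j != k is zero
        w_norm_sq = sum(
            sum(h[j][n] * gram_inv[j][k] for j in range(k_users)) ** 2
            for n in range(n_tx))
        rate += math.log2(1.0 + snr / (k_users * w_norm_sq))
    return rate
```

With orthogonal unit-norm user channels the ZF penalty vanishes, while near-parallel channels inflate ||w_k||^2 and shrink the sum rate, which is the spatial-correlation sensitivity the surrounding discussion is about.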
{ "cite_N": [ "@cite_0", "@cite_21" ], "mid": [ "2164290759", "2154466855" ], "abstract": [ "Block diagonalization is a linear precoding technique for the multiple antenna broadcast (downlink) channel that involves transmission of multiple data streams to each receiver such that no multi-user interference is experienced at any of the receivers. This low-complexity scheme operates only a few dB away from capacity but requires very accurate channel knowledge at the transmitter. We consider a limited feedback system where each receiver knows its channel perfectly, but the transmitter is only provided with a finite number of channel feedback bits from each receiver. Using a random quantization argument, we quantify the throughput loss due to imperfect channel knowledge as a function of the feedback level. The quality of channel knowledge must improve proportional to the SNR in order to prevent interference-limitations, and we show that scaling the number of feedback bits linearly with the system SNR is sufficient to maintain a bounded rate loss. Finally, we compare our quantization strategy to an analog feedback scheme and show the superiority of quantized feedback.", "A multiple antenna downlink channel where limited channel feedback is available to the transmitter is considered. In a vector downlink channel (single antenna at each receiver), the transmit antenna array can be used to transmit separate data streams to multiple receivers only if the transmitter has very accurate channel knowledge, i.e., if there is high-rate channel feedback from each receiver. In this work it is shown that channel feedback requirements can be significantly reduced if each receiver has a small number of antennas and appropriately combines its antenna outputs. A combining method that minimizes channel quantization error at each receiver, and thereby minimizes multi-user interference, is proposed and analyzed. 
This technique is shown to outperform traditional techniques such as maximum-ratio combining because minimization of interference power is more critical than maximization of signal power in the multiple antenna downlink. Analysis is provided to quantify the feedback savings, and the technique is seen to work well with user selection and is also robust to receiver estimation error." ] }
1207.2776
2121641100
In downlink multi-antenna systems with many users, the multiplexing gain is strictly limited by the number of transmit antennas N and the use of these antennas. Assuming that the total number of receive antennas at the multi-antenna users is much larger than N, the maximal multiplexing gain can be achieved with many different transmission/reception strategies. For example, the excess number of receive antennas can be utilized to schedule users with effective channels that are near-orthogonal, for multi-stream multiplexing to users with well-conditioned channels, and/or to enable interference-aware receive combining. In this paper, we try to answer the question if the N data streams should be divided among few users (many streams per user) or many users (few streams per user, enabling receive combining). Analytic results are derived to show how user selection, spatial correlation, heterogeneous user conditions, and imperfect channel acquisition (quantization or estimation errors) affect the performance when sending the maximal number of streams or one stream per scheduled user, the two extremes in data stream allocation. While contradicting observations on this topic have been reported in prior works, we show that selecting many users and allocating one stream per user (i.e., exploiting receive combining) is the best candidate under realistic conditions. This is explained by the provably stronger resilience towards spatial correlation and the larger benefit from multi-user diversity. This fundamental result has positive implications for the design of downlink systems as it reduces the hardware requirements at the user devices and simplifies the throughput optimization.
Despite the similar terminology, our problem is fundamentally different from the classic works on the diversity-spatial multiplexing tradeoff (DMT) in @cite_28 @cite_19 . The DMT provides insight into how many streams should be transmitted in the high-SNR regime, while we consider how a fixed number of streams should be divided among the users.
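For reference, the optimal tradeoff curve of @cite_28 for an m x n Rayleigh-fading MIMO channel is the piecewise-linear function connecting the points (k, (m-k)(n-k)) for integer multiplexing gains k. A small helper of our own (the name `dmt` is illustrative) evaluates it:

```python
def dmt(m, n, r):
    """Optimal diversity gain d*(r) at multiplexing gain r for an m x n
    Rayleigh-fading MIMO channel (Zheng-Tse): piecewise-linear interpolation
    of the points (k, (m - k) * (n - k)), k = 0, ..., min(m, n)."""
    if not 0 <= r <= min(m, n):
        raise ValueError("multiplexing gain must lie in [0, min(m, n)]")
    # segment index; the min() also handles the endpoint r == min(m, n)
    k = min(int(r), min(m, n) - 1)
    d0 = (m - k) * (n - k)
    d1 = (m - k - 1) * (n - k - 1)
    return d0 + (r - k) * (d1 - d0)
```

For a 2 x 2 channel this gives full diversity dmt(2, 2, 0) = 4 and full multiplexing dmt(2, 2, 2) = 0; the question studied in the paper above is instead how a fixed number of streams is split across users, not where to operate on this curve.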
{ "cite_N": [ "@cite_28", "@cite_19" ], "mid": [ "2129766733", "2128315032" ], "abstract": [ "Multiple antennas can be used for increasing the amount of diversity or the number of degrees of freedom in wireless communication systems. We propose the point of view that both types of gains can be simultaneously obtained for a given multiple-antenna channel, but there is a fundamental tradeoff between how much of each any coding scheme can get. For the richly scattered Rayleigh-fading channel, we give a simple characterization of the optimal tradeoff curve and use it to evaluate the performance of existing multiple antenna schemes.", "A contemporary perspective on transmit antenna diversity and spatial multiplexing is provided. It is argued that, in the context of most modern wireless systems and for the operating points of interest, transmission techniques that utilize all available spatial degrees of freedom for multiplexing outperform techniques that explicitly sacrifice spatial multiplexing for diversity. Reaching this conclusion, however, requires that the channel and some key system features be adequately modeled and that suitable performance metrics be adopted; failure to do so may bring about starkly different conclusions. As a specific example, this contrast is illustrated using the 3GPP long-term evolution system design." ] }
1207.3110
2949938444
We are motivated by the problem of designing a simple distributed algorithm for Peer-to-Peer streaming applications that can achieve high throughput and low delay, while allowing the neighbor set maintained by each peer to be small. While previous works have mostly used tree structures, our algorithm constructs multiple random directed Hamiltonian cycles and disseminates content over the superposed graph of the cycles. We show that it is possible to achieve the maximum streaming capacity even when each peer only transmits to and receives from Theta(1) neighbors. Further, we show that the proposed algorithm achieves the streaming delay of Theta(log N) when the streaming rate is less than (1-1/K) of the maximum capacity for any fixed integer K>1, where N denotes the number of peers in the network. The key theoretical contribution is to characterize the distance between peers in a graph formed by the superposition of directed random Hamiltonian cycles, in which edges from one of the cycles may be dropped at random. We use Doob martingales and graph expansion ideas to characterize this distance as a function of N, with high probability.
Unstructured P2P networks overcome this vulnerability to peer churn. In unstructured P2P networks, peers find their neighboring peers randomly and get paired with them locally. When a neighboring peer leaves, a peer simply chooses another peer at random as its new neighbor. Because this peer pairing is done in a distributed fashion, unstructured P2P networks are robust to peer churn, unlike structured P2P networks. However, the fundamental limitation of unstructured P2P networks is weak connectivity. Since peers are paired randomly without considering the entire network topology, some peers may not be strongly connected to the source, which results in poor throughput and delay. To ensure full connectivity in this approach, every peer must be paired with @math neighboring peers @cite_17 , or must constantly change its neighbors in search of neighbors providing a better streaming rate @cite_8 . However, in these approaches, delay performance is hard to guarantee because chunks have to be disseminated over an "unknown" network topology.
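The logarithmic neighbor requirement of @cite_17 is easy to probe with a toy simulation. This is our own sketch with illustrative parameters, not the cited construction: each peer pairs with roughly c * ln(N) uniformly random partners, pairings are made symmetric, and we check whether the resulting overlay is connected.

```python
# Toy model of unstructured P2P neighbor selection: every peer picks
# round(c * ln N) random partners; we then test overlay connectivity by BFS.
# Parameters and names are illustrative, not from the cited papers.
import math
import random
from collections import deque

def random_overlay(num_peers, c, rng):
    """Build a symmetric overlay where each peer initiates round(c * ln N)
    random pairings, mimicking local, topology-oblivious neighbor choice."""
    deg = max(1, round(c * math.log(num_peers)))
    adj = [set() for _ in range(num_peers)]
    for u in range(num_peers):
        for v in rng.sample([w for w in range(num_peers) if w != u], deg):
            adj[u].add(v)
            adj[v].add(u)
    return adj

def is_connected(adj):
    """Breadth-first search from peer 0; connected iff all peers are reached."""
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(adj)
```

With c around 2 the overlay is connected in virtually every run, matching the flavor of the Omega(log N) result; with constant degree, isolated clusters appear as N grows.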
{ "cite_N": [ "@cite_8", "@cite_17" ], "mid": [ "2147657151", "2115727680" ], "abstract": [ "This paper presents DONet, a data-driven overlay network for live media streaming. The core operations in DONet are very simple: every node periodically exchanges data availability information with a set of partners, and retrieves unavailable data from one or more partners, or supplies available data to partners. We emphasize three salient features of this data-driven design: 1) easy to implement, as it does not have to construct and maintain a complex global structure; 2) efficient, as data forwarding is dynamically determined according to data availability while not restricted by specific directions; and 3) robust and resilient, as the partnerships enable adaptive and quick switching among multi-suppliers. We show through analysis that DONet is scalable with bounded delay. We also address a set of practical challenges for realizing DONet, and propose an efficient member and partnership management algorithm, together with an intelligent scheduling algorithm that achieves real-time and continuous distribution of streaming contents. We have extensively evaluated the performance of DONet over the PlanetLab. Our experiments, involving almost all the active PlanetLab nodes, demonstrate that DONet achieves quite good streaming quality even under formidable network conditions. Moreover, its control overhead and transmission delay are both kept at low levels. An Internet-based DONet implementation, called CoolStreaming v.0.9, was released on May 30, 2004, which has attracted over 30000 distinct users with more than 4000 simultaneously being online at some peak times. 
We discuss the key issues toward designing CoolStreaming in this paper, and present several interesting observations from these large-scale tests; in particular, the larger the overlay size, the better the streaming quality it can deliver.", "Peer-to-Peer (P2P) streaming technologies can take advantage of the upload capacity of clients, and hence can scale to large content distribution networks with lower cost. A fundamental question for P2P streaming systems is the maximum streaming rate that all users can sustain. Prior works have studied the optimal streaming rate for a complete network, where every peer is assumed to communicate with all other peers. This is however an impractical assumption in real systems. In this paper, we are interested in the achievable streaming rate when each peer can only connect to a small number of neighbors. We show that even with a random peer selection algorithm and uniform rate allocation, as long as each peer maintains Ω(logN) downstream neighbors, where N is the total number of peers in the system, the system can asymptotically achieve a streaming rate that is close to the optimal streaming rate of a complete network.We then extend our analysis to multi-channel P2P networks, and we study the scenario where “helpers” from channels with excessive upload capacity can help peers in channels with insufficient upload capacity. We show that by letting each peer select Ω(logN) neighbors randomly from either the peers in the same channel or from the helpers, we can achieve a close-to-optimal streaming capacity region. Simulation results are provided to verify our analysis." ] }
1207.3099
2245961675
A @math -ruling set of a graph @math is a vertex-subset @math that is independent and satisfies the property that every vertex @math is at a distance of at most @math from some vertex in @math . A maximal independent set (MIS) is a 1-ruling set. The problem of computing an MIS on a network is a fundamental problem in distributed algorithms and the fastest algorithm for this problem is the @math -round algorithm due to Luby (SICOMP 1986) and Alon, Babai, and Itai (J. Algorithms 1986) from more than 25 years ago. Since then the problem has resisted all efforts to yield to a sub-logarithmic algorithm. There has been recent progress on this problem, most importantly an @math -round algorithm on graphs with @math vertices and maximum degree @math , due to Barenboim, Elkin, Pettie, and Schneider (April 2012, arXiv:1202.1983; to appear in FOCS 2012). We approach the MIS problem from a different angle and ask if O(1)-ruling sets can be computed much more efficiently than an MIS? As an answer to this question, we show how to compute a 2-ruling set of an @math -vertex graph in @math rounds. We also show that the above result can be improved for special classes of graphs such as graphs with high girth, trees, and graphs of bounded arboricity. Our main technique involves randomized sparsification that rapidly reduces the graph degree while ensuring that every deleted vertex is close to some vertex that remains. This technique may have further applications in other contexts, e.g., in designing sub-logarithmic distributed approximation algorithms. Our results raise intriguing questions about how quickly an MIS (or 1-ruling sets) can be computed, given that 2-ruling sets can be computed in sub-logarithmic rounds.
The MIS problem on the class of growth-bounded graphs has attracted a fair bit of attention @cite_11 @cite_5 @cite_14 . Growth-bounded graphs have the property that the @math -neighborhood of any vertex @math has at most @math independent vertices in it, for some constant @math . In other words, the rate of growth of independent sets is polynomial in the radius of the "ball" around a vertex. Schneider and Wattenhofer @cite_14 showed that there is a deterministic MIS algorithm on growth-bounded graphs that runs in @math rounds. Growth-bounded graphs have been used to model wireless networks because the number of independent vertices in any spatial region is usually bounded by the area or volume of that region. In contrast to growth-bounded graphs, the graph subclasses we consider in this paper tend to have arbitrarily many independent vertices in any neighborhood.
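As a baseline for the MIS discussion, the classic randomized rounds in the spirit of Luby's algorithm fit in a few lines. The sketch below is our own centralized simulation of those distributed rounds (names are illustrative; it is not the growth-bounded-graph algorithm of @cite_14): each live vertex draws a random priority, strict local minima join the MIS, and winners are removed together with their neighborhoods.

```python
import random

def luby_mis(adj, rng):
    """Centralized simulation of Luby-style randomized MIS rounds.
    adj: list of neighbor sets. Each round, every live vertex draws a random
    priority; strict local minima join the MIS and are removed together with
    their neighbors. The globally smallest live vertex always wins, so the
    loop terminates, and adjacent winners are impossible by strictness."""
    live = set(range(len(adj)))
    mis = set()
    while live:
        prio = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(prio[v] < prio[u] for u in adj[v] if u in live)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v]
        live -= removed
    return mis

def is_mis(adj, s):
    """Check independence (no edge inside s) and maximality (every vertex is
    in s or has a neighbor in s)."""
    independent = all(not (adj[v] & s) for v in s)
    maximal = all(v in s or (adj[v] & s) for v in range(len(adj)))
    return independent and maximal
```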
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_11" ], "mid": [ "2052811387", "1997707549", "" ], "abstract": [ "The efficient distributed construction of a maximal independent set (MIS) of a graph is of fundamental importance. We study the problem in the class of Growth-Bounded Graphs, which includes for example the well-known Unit Disk Graphs. In contrast to the fastest (time-optimal) existing approach [11], we assume that no geometric information (e.g., distances in the graph's embedding) is given. Instead, nodes employ randomization for their decisions. Our algorithm computes a MIS in O(log log n • log* n) rounds with very high probability for graphs with bounded growth, where n denotes the number of nodes in the graph. In view of Linial's Ω(log* n) lower bound for computing a MIS in ring networks [12], which was extended to randomized algorithms independently by Naor [18] and Linial [13], our solution is close to optimal. In a nutshell, our algorithm shows that for computing a MIS, randomization is a viable alternative to distance information.", "We present a novel distributed algorithm for the maximal independent set (MIS) problem. On growth-bounded graphs (GBG) our deterministic algorithm finishes in O(log* n) time, n being the number of nodes. In light of Linial's Ω(log* n) lower bound our algorithm is asymptotically optimal. Our algorithm answers prominent open problems in the ad hoc sensor network domain. For instance, it solves the connected dominating set problem for unit disk graphs in O(log* n) time, exponentially faster than the state-of-the-art algorithm. With a new extension our algorithm also computes a delta+1 coloring in O(log* n) time, where delta is the maximum degree of the graph.", "" ] }
1207.3099
2245961675
A @math -ruling set of a graph @math is a vertex-subset @math that is independent and satisfies the property that every vertex @math is at a distance of at most @math from some vertex in @math . A maximal independent set (MIS) is a 1-ruling set. The problem of computing an MIS on a network is a fundamental problem in distributed algorithms and the fastest algorithm for this problem is the @math -round algorithm due to Luby (SICOMP 1986) and Alon, Babai, and Itai (J. Algorithms 1986) from more than 25 years ago. Since then the problem has resisted all efforts to yield to a sub-logarithmic algorithm. There has been recent progress on this problem, most importantly an @math -round algorithm on graphs with @math vertices and maximum degree @math , due to Barenboim, Elkin, Pettie, and Schneider (April 2012, arXiv:1202.1983; to appear in FOCS 2012). We approach the MIS problem from a different angle and ask if O(1)-ruling sets can be computed much more efficiently than an MIS? As an answer to this question, we show how to compute a 2-ruling set of an @math -vertex graph in @math rounds. We also show that the above result can be improved for special classes of graphs such as graphs with high girth, trees, and graphs of bounded arboricity. Our main technique involves randomized sparsification that rapidly reduces the graph degree while ensuring that every deleted vertex is close to some vertex that remains. This technique may have further applications in other contexts, e.g., in designing sub-logarithmic distributed approximation algorithms. Our results raise intriguing questions about how quickly an MIS (or 1-ruling sets) can be computed, given that 2-ruling sets can be computed in sub-logarithmic rounds.
Fast algorithms for @math -ruling sets may have applications in distributed approximation algorithms. For example, in a recent paper @cite_6 a 2-ruling set is computed as a way of obtaining a @math -factor approximation to the metric facility location problem. Our work raises questions about the existence of sub-logarithmic-round algorithms for problems such as minimum dominating set, vertex cover, etc., at least for special graph classes.
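To make the ruling-set definition concrete, membership can be checked with one multi-source BFS. The helper below is our own illustration (written for an explicit integer radius t; the name is not from the paper): the set must be independent, and every vertex must lie within distance t of it.

```python
from collections import deque

def is_t_ruling_set(adj, ruling, t):
    """adj: list of neighbor sets. Check that `ruling` is independent and that
    every vertex is within distance t of some ruling vertex, via a
    multi-source BFS started from all ruling vertices at distance 0."""
    ruling = set(ruling)
    if any(adj[v] & ruling for v in ruling):
        return False  # not independent
    dist = {v: 0 for v in ruling}
    queue = deque(ruling)
    while queue:
        u = queue.popleft()
        if dist[u] == t:
            continue  # do not expand past radius t
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return len(dist) == len(adj)
```

On a 5-vertex path, {0, 3} is a 1-ruling set (an MIS), while {0, 4} rules only at radius 2, illustrating how relaxing the radius admits sparser sets.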
{ "cite_N": [ "@cite_6" ], "mid": [ "2950999367" ], "abstract": [ "This paper presents a distributed O(1)-approximation algorithm, with expected- @math running time, in the @math model for the metric facility location problem on a size- @math clique network. Though metric facility location has been considered by a number of researchers in low-diameter settings, this is the first sub-logarithmic-round algorithm for the problem that yields an O(1)-approximation in the setting of non-uniform facility opening costs. In order to obtain this result, our paper makes three main technical contributions. First, we show a new lower bound for metric facility location, extending the lower bound of B a (ICALP 2005) that applies only to the special case of uniform facility opening costs. Next, we demonstrate a reduction of the distributed metric facility location problem to the problem of computing an O(1)-ruling set of an appropriate spanning subgraph. Finally, we present a sub-logarithmic-round (in expectation) algorithm for computing a 2-ruling set in a spanning subgraph of a clique. Our algorithm accomplishes this by using a combination of randomized and deterministic sparsification." ] }
1207.2615
1799030305
We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, the main idea behind a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (40 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas and provide a web application to reproduce our quality experiments. Both are accessible via this http URL .
Ester @cite_7 was the first system to offer efficient combined full-text and ontology search on a collection as large as the English Wikipedia. Broccoli improves upon Ester in three important aspects. First, Ester works with inverted lists for classes and achieves fast query times only on relatively simple queries. Second, Ester does not consider contexts but merely the syntactic proximity of words/entities. Third, Ester's simplistic user interface was adequate for queries with one relation, but practically unusable for more complex queries.
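Ester's two basic query-engine operations, prefix search and join, can be illustrated with a toy index. This is our own sketch, not Ester's actual compact index: prefix search becomes two binary searches over a sorted vocabulary, and join becomes a merge-style intersection of sorted posting lists.

```python
# Toy versions of Ester's two primitives (illustrative only):
#   - prefix search over a sorted vocabulary via binary search, and
#   - join as intersection of two sorted posting lists of document ids.
import bisect

def prefix_search(sorted_words, prefix):
    """Return the contiguous slice of the sorted vocabulary whose entries
    start with `prefix`, found with two binary searches."""
    lo = bisect.bisect_left(sorted_words, prefix)
    hi = bisect.bisect_left(sorted_words, prefix + "\uffff")
    return sorted_words[lo:hi]

def join(postings_a, postings_b):
    """Merge-join: intersect two ascending posting lists of document ids."""
    out, i, j = [], 0, 0
    while i < len(postings_a) and j < len(postings_b):
        a, b = postings_a[i], postings_b[j]
        if a == b:
            out.append(a)
            i += 1
            j += 1
        elif a < b:
            i += 1
        else:
            j += 1
    return out
```

The appeal of this pair of primitives, as the cited abstract explains, is that prefix search directly supports search-as-you-type suggestions while joins compose the results into semantic query answers.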
{ "cite_N": [ "@cite_7" ], "mid": [ "2146842426" ], "abstract": [ "We present ESTER, a modular and highly efficient system for combined full-text and ontology search. ESTER builds on a query engine that supports two basic operations: prefix search and join. Both of these can be implemented very efficiently with a compact index, yet in combination provide powerful querying capabilities. We show how ESTER can answer basic SPARQL graph-pattern queries on the ontology by reducing them to a small number of these two basic operations. ESTER further supports a natural blend of such semantic queries with ordinary full-text queries. Moreover, the prefix search operation allows for a fully interactive and proactive user interface, which after every keystroke suggests to the user possible semantic interpretations of his or her query, and speculatively executes the most likely of these interpretations. As a proof of concept, we applied ESTER to the English Wikipedia, which contains about 3 million documents, combined with the recent YAGO ontology, which contains about 2.5 million facts. For a variety of complex queries, ESTER achieves worst-case query processing times of a fraction of a second, on a single machine, with an index size of about 4 GB." ] }
1207.2615
1799030305
We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, the main idea behind a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (40 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas and provide a web application to reproduce our quality experiments. Both are accessible via this http URL .
Another popular form of entity retrieval is known as @cite_6 . Here, the search is on structured data, as discussed in the next subsection. Queries are given by a sequence of keywords, as in full-text search, for example, . Then query interpretation becomes a non-trivial problem; see Section .
{ "cite_N": [ "@cite_6" ], "mid": [ "1997189720" ], "abstract": [ "Semantic Search refers to a loose set of concepts, challenges and techniques having to do with harnessing the information of the growing Web of Data (WoD) for Web search. Here we propose a formal model of one specific semantic search task: ad-hoc object retrieval. We show that this task provides a solid framework to study some of the semantic search problems currently tackled by commercial Web search engines. We connect this task to the traditional ad-hoc document retrieval and discuss appropriate evaluation metrics. Finally, we carry out a realistic evaluation of this task in the context of a Web search application." ] }
1207.2615
1799030305
We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, the main idea behind a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (40 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas and provide a web application to reproduce our quality experiments. Both are accessible via this http URL .
Systems for ontology search have reached a high level of sophistication. For example, RDF-3X can answer complex SPARQL queries on the Barton dataset (50 million triples) in less than a second on average @cite_11 .
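To make the kind of query such engines answer concrete, here is a toy SPARQL-style triple-pattern matcher in Python. The facts, the `?x` variable convention, and the nested-loop join are illustrative stand-ins, not RDF-3X's actual indexing or join machinery:

```python
# Toy triple store (subject, predicate, object); illustrative facts only.
TRIPLES = [
    ("Broccoli", "type", "Plant"),
    ("Broccoli", "native-to", "Italy"),
    ("Cabbage", "type", "Plant"),
    ("Cabbage", "native-to", "Europe"),
    ("Italy", "type", "Country"),
]

def match(pattern, binding):
    """Yield bindings extending `binding` that satisfy one triple pattern;
    strings starting with '?' are variables."""
    for triple in TRIPLES:
        b = dict(binding)
        ok = True
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                if p in b and b[p] != t:
                    ok = False
                    break
                b[p] = t
            elif p != t:
                ok = False
                break
        if ok:
            yield b

def query(patterns):
    """Join several triple patterns (nested-loop join over partial bindings)."""
    bindings = [{}]
    for pat in patterns:
        bindings = [b2 for b in bindings for b2 in match(pat, b)]
    return bindings

# SPARQL-like query: ?x type Plant . ?x native-to ?where
result = query([("?x", "type", "Plant"), ("?x", "native-to", "?where")])
print(result)
```

A real engine replaces the nested loop with merge joins over exhaustive triple permutation indexes, which is where RDF-3X's performance comes from.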
{ "cite_N": [ "@cite_11" ], "mid": [ "2000656232" ], "abstract": [ "RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The \"pay-as-you-go\" nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude." ] }
1207.2615
1799030305
We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, the main idea behind a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (40 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas and provide a web application to reproduce our quality experiments. Both are accessible via this http URL .
As part of the Semantic Web Linked Open Data @cite_14 effort, more and more data is explicitly available as fact triples. The bulk of useful triple data is still harvested from text documents though. The information extraction techniques employed range from simple parsing of structured information (for example, many of the relations in YAGO or DBpedia @cite_19 come from the Wikipedia info boxes) through pattern matching (e.g., @cite_20 ) to complex techniques involving non-trivial natural language processing, as in our paper (e.g., @cite_4 ). For a relatively recent survey, see @cite_21 .
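To make the pattern-matching end of this spectrum concrete, here is a tiny Snowball-flavored sketch: a single hand-written pattern that extracts (organization, location) tuples. The pattern and the example text are invented for illustration; real systems like Snowball learn such patterns iteratively from seed tuples rather than hard-coding them:

```python
import re

# Illustrative extraction pattern in the spirit of pattern-based systems:
# "<ORG>, headquartered in <LOC>" yields (organization, location) tuples.
PATTERN = re.compile(
    r"([A-Z][\w&]*(?: [A-Z][\w&]*)*), headquartered in ([A-Z]\w+)"
)

def extract(text):
    """Return all (organization, location) tuples matched by the pattern."""
    return PATTERN.findall(text)

doc = ("Acme Corp, headquartered in Boston, reported record sales. "
       "Globex, headquartered in Springfield, disagreed.")
print(extract(doc))  # -> [('Acme Corp', 'Boston'), ('Globex', 'Springfield')]
```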
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_21", "@cite_19", "@cite_20" ], "mid": [ "2015191210", "2127978399", "1847917847", "102708294", "2103931177" ], "abstract": [ "The term "Linked Data" refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions— the Web of Data. In this article, the authors present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.", "To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information.", "This paper addresses the issue of simplifying natural language texts in order to ease the task of accessing factual information contained in them. We define the notion of Easy Access Sentence – a unit of text from which the information it contains can be retrieved by a system with modest text-analysis capabilities, able to process single verb sentences with named entities as constituents. We present an algorithm that constructs Easy Access Sentences from the input text, with a small-scale evaluation. Challenges and further research directions are then discussed.", "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.", "Text documents often contain valuable structured data that is hidden in regular English sentences. This data is best exploited if available as a relational table that we could use for answering precise queries or running data mining tasks. We explore a technique for extracting such tables from document collections that requires only a handful of training examples from users. These examples are used to generate extraction patterns, that in turn result in new tuples being extracted from the document collection. We build on this idea and present our Snowball system. Snowball introduces novel strategies for generating patterns and extracting tuples from plain-text documents. At each iteration of the extraction process, Snowball evaluates the quality of these patterns and tuples without human intervention, and keeps only the most reliable ones for the next iteration. In this paper we also develop a scalable evaluation methodology and metrics for our task, and present a thorough experimental evaluation of Snowball and comparable techniques over a collection of more than 300,000 newspaper documents." ] }
1207.2544
2951413418
Previous approaches to systematic state-space exploration for testing multi-threaded programs have proposed context-bounding and depth-bounding to be effective ranking algorithms for testing multithreaded programs. This paper proposes two new metrics to rank thread schedules for systematic state-space exploration. Our metrics are based on characterization of a concurrency bug using v (the minimum number of distinct variables that need to be involved for the bug to manifest) and t (the minimum number of distinct threads among which scheduling constraints are required to manifest the bug). Our algorithm is based on the hypothesis that in practice, most concurrency bugs have low v (typically 1- 2) and low t (typically 2-4) characteristics. We iteratively explore the search space of schedules in increasing orders of v and t. We show qualitatively and empirically that our algorithm finds common bugs in fewer number of execution runs, compared with previous approaches. We also show that using v and t improves the lower bounds on the probability of finding bugs through randomized algorithms. Systematic exploration of schedules requires instrumenting each variable access made by a program, which can be very expensive and severely limits the applicability of this approach. Previous work [5, 19] has avoided this problem by interposing only on synchronization operations (and ignoring other variable accesses). We demonstrate that by using variable bounding (v) and a static imprecise alias analysis, we can interpose on all variable accesses (and not just synchronization operations) at 10-100x less overhead than previous approaches.
CHESS @cite_24 uses iterative context bounding to rank schedules. We borrow many ideas from CHESS, including iterative context bounding @cite_7 , using a happens-before graph for stateless model checking, and fair scheduling. We provide further ranking of schedules to uncover most bugs with a smaller number of schedules. While we consider all shared memory accesses as potential context-switch points, CHESS only allows pre-emptible context switches at explicit synchronization primitives. This restriction (first used in ExitBlock @cite_11 ) is justified if all shared memory accesses are protected by explicit synchronization (e.g., lock/unlock). CHESS relies on a data-race detector to separately check this property. Even if we assume a precise and efficient data-race detector, this approach still overlooks "ad hoc" synchronization that does not involve known synchronization primitives @cite_17 .
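The core of iterative context bounding — explore schedules with few preemptions first — can be sketched as follows. The thread model (fixed lists of step labels) and the preemption accounting are simplifying assumptions for illustration, not CHESS's implementation:

```python
def schedules(threads, preemption_bound):
    """Enumerate interleavings of `threads` (lists of step labels) using at
    most `preemption_bound` preemptive context switches. Switching away from
    a thread that still has steps left counts as a preemption; switching when
    the current thread has finished is free (non-preemptive)."""
    results = []

    def go(positions, current, budget, trace):
        if all(positions[t] == len(threads[t]) for t in range(len(threads))):
            results.append(tuple(trace))
            return
        for t in range(len(threads)):
            if positions[t] == len(threads[t]):
                continue  # thread t is finished
            cost = 0
            if current is not None and t != current:
                # Preemptive only if the current thread could still run.
                if positions[current] < len(threads[current]):
                    cost = 1
            if cost > budget:
                continue
            positions[t] += 1
            go(positions, t, budget - cost,
               trace + [threads[t][positions[t] - 1]])
            positions[t] -= 1  # backtrack

    go([0] * len(threads), None, preemption_bound, [])
    return results

two = [["a1", "a2"], ["b1", "b2"]]
print(len(schedules(two, 0)))  # -> 2 (only the non-preemptive schedules)
print(len(schedules(two, 2)))  # -> 6 (all interleavings of 2+2 steps)
```

Raising the bound from 0 upward recovers the iterative search order: cheap schedules first, the full interleaving space in the limit.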
{ "cite_N": [ "@cite_24", "@cite_17", "@cite_7", "@cite_11" ], "mid": [ "", "178326370", "2135948849", "328544693" ], "abstract": [ "", "Many synchronizations in existing multi-threaded programs are implemented in an ad hoc way. The first part of this paper does a comprehensive characteristic study of ad hoc synchronizations in concurrent programs. By studying 229 ad hoc synchronizations in 12 programs of various types (server, desktop and scientific), including Apache, MySQL, Mozilla, etc., we find several interesting and perhaps alarming characteristics: (1) Every studied application uses ad hoc synchronizations. Specifically, there are 6-83 ad hoc synchronizations in each program. (2) Ad hoc synchronizations are error-prone. Significant percentages (22-67%) of these ad hoc synchronizations introduced bugs or severe performance issues. (3) Ad hoc synchronization implementations are diverse and many of them cannot be easily recognized as synchronizations, i.e. have poor readability and maintainability. The second part of our work builds a tool called SyncFinder to automatically identify and annotate ad hoc synchronizations in concurrent programs written in C/C++ to assist programmers in porting their code to better structured implementations, while also enabling other tools to recognize them as synchronizations. Our evaluation using 25 concurrent programs shows that, on average, SyncFinder can automatically identify 96% of ad hoc synchronizations with 6% false positives. We also build two use cases to leverage SyncFinder's auto-annotation. The first one uses annotation to detect 5 deadlocks (including 2 new ones) and 16 potential issues missed by previous analysis tools in Apache, MySQL and Mozilla. The second use case reduces Valgrind data race checker's false positive rates by 43-86%.", "Multithreaded programs are difficult to get right because of unexpected interaction between concurrently executing threads. Traditional testing methods are inadequate for catching subtle concurrency errors which manifest themselves late in the development cycle and post-deployment. Model checking or systematic exploration of program behavior is a promising alternative to traditional testing methods. However, it is difficult to perform systematic search on large programs as the number of possible program behaviors grows exponentially with the program size. Confronted with this state-explosion problem, traditional model checkers perform iterative depth-bounded search. Although effective for message-passing software, iterative depth-bounding is inadequate for multithreaded software. This paper proposes iterative context-bounding, a new search algorithm that systematically explores the executions of a multithreaded program in an order that prioritizes executions with fewer context switches. We distinguish between preempting and nonpreempting context switches, and show that bounding the number of preempting context switches to a small number significantly alleviates the state explosion, without limiting the depth of explored executions. We show both theoretically and empirically that context-bounded search is an effective method for exploring the behaviors of multithreaded programs. We have implemented our algorithm in two model checkers and applied it to a number of real-world multithreaded programs. Our implementation uncovered 9 previously unknown bugs in our benchmarks, each of which was exposed by an execution with at most 2 preempting context switches. Our initial experience with the technique is encouraging and demonstrates that iterative context-bounding is a significant improvement over existing techniques for testing multithreaded programs.", "We present a practical testing algorithm called ExitBlock that systematically and deterministically finds program errors resulting from unintended timing dependencies. ExitBlock executes a program or a portion of a program on a given input multiple times, enumerating meaningful schedules in order to cover all program behaviors. Previous work on systematic testing focuses on programs whose concurrent elements are processes that run in separate memory spaces and explicitly declare what memory they will be sharing. ExitBlock extends previous approaches to multithreaded programs in which all of memory is potentially shared. A key challenge is to minimize the number of schedules executed while still guaranteeing to cover all behaviors. Our approach relies on the fact that for a program following a mutual-exclusion locking discipline, enumerating possible orders of the synchronized regions of the program covers all possible behaviors of the program. We describe in detail the basic algorithm and extensions to take advantage of read-write dependency information and to detect deadlocks." ] }
1207.2189
2951897045
Sorting database tables before compressing them improves the compression rate. Can we do better than the lexicographical order? For minimizing the number of runs in a run-length encoding compression scheme, the best approaches to row-ordering are derived from traveling salesman heuristics, although there is a significant trade-off between running time and compression. A new heuristic, Multiple Lists, which is a variant on Nearest Neighbor that trades off compression for a major running-time speedup, is a good option for very large tables. However, for some compression schemes, it is more important to generate long runs rather than few runs. For this case, another novel heuristic, Vortex, is promising. We find that we can improve run-length encoding up to a factor of 3 whereas we can improve prefix coding by up to 80%: these gains are on top of the gains due to lexicographically sorting the table. We prove that the new row reordering is optimal (within 10%) at minimizing the runs of identical values within columns, in a few cases.
The compression of bitmap indexes also greatly benefits from table sorting. In some experiments, the sizes of the bitmap indexes are reduced by nearly an order of magnitude @cite_55 . Of course, everything else being equal, smaller indexes tend to be faster. Meanwhile, alternatives to the lexicographical order such as , reflected Gray-code or Hilbert orders are unhelpful on bitmap indexes @cite_55 . (We review an improved version of in .)
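The effect that row order has on run-length encoding is easy to demonstrate by counting column runs before and after a lexicographic row sort. The toy table below is illustrative only:

```python
from itertools import groupby

def column_runs(table):
    """Total number of runs of identical values, summed over all columns.
    Fewer runs means better run-length-encoding compressibility."""
    return sum(
        sum(1 for _ in groupby(col))  # groupby yields one group per run
        for col in zip(*table)        # transpose rows into columns
    )

rows = [
    ("b", 2), ("a", 1), ("b", 1), ("a", 2), ("b", 2), ("a", 1),
]
before = column_runs(rows)           # arbitrary insertion order
after = column_runs(sorted(rows))    # lexicographic row order
print(before, after)  # -> 10 6
```

Sorting minimizes runs in the leading column exactly; the open question the cited work studies is how much better than lexicographic order one can do for the remaining columns.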
{ "cite_N": [ "@cite_55" ], "mid": [ "1966678916" ], "abstract": [ "Bitmap indexes must be compressed to reduce input/output costs and minimize CPU usage. To accelerate logical operations (AND, OR, XOR) over bitmaps, we use techniques based on run-length encoding (RLE), such as Word-Aligned Hybrid (WAH) compression. These techniques are sensitive to the order of the rows: a simple lexicographical sort can divide the index size by 9 and make indexes several times faster. We investigate row-reordering heuristics. Simply permuting the columns of the table can increase the sorting efficiency by 40%. Secondary contributions include efficient algorithms to construct and aggregate bitmaps. The effect of word length is also reviewed by constructing 16-bit, 32-bit and 64-bit indexes. Using 64-bit CPUs, we find that 64-bit indexes are slightly faster than 32-bit indexes despite being nearly twice as large." ] }
1207.2189
2951897045
Sorting database tables before compressing them improves the compression rate. Can we do better than the lexicographical order? For minimizing the number of runs in a run-length encoding compression scheme, the best approaches to row-ordering are derived from traveling salesman heuristics, although there is a significant trade-off between running time and compression. A new heuristic, Multiple Lists, which is a variant on Nearest Neighbor that trades off compression for a major running-time speedup, is a good option for very large tables. However, for some compression schemes, it is more important to generate long runs rather than few runs. For this case, another novel heuristic, Vortex, is promising. We find that we can improve run-length encoding up to a factor of 3 whereas we can improve prefix coding by up to 80%: these gains are on top of the gains due to lexicographically sorting the table. We prove that the new row reordering is optimal (within 10%) at minimizing the runs of identical values within columns, in a few cases.
Sometimes reordering all of the data before compression is not an option. For example, describe a system where bitmap indexes must be compressed on-the-fly to index network traffic. They report that their system can accommodate the insertion of more than a million records per second. To improve compressibility without sacrificing performance, they cluster the rows using locality sensitive hashing @cite_25 . They report a compression factor of 2.7 due to this reordering (from 845 MB to 314 MB).
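The on-the-fly clustering idea can be sketched with the classic bit-sampling LSH for Hamming space: rows that agree on a few sampled bit positions land in the same bucket, and flushing bucket-by-bucket groups similar rows together. This is a generic sketch of the technique, not the cited system's actual hash:

```python
import random
from collections import defaultdict

def bit_sampling_hash(positions):
    """Classic LSH for Hamming distance: project a row onto a fixed set of
    sampled bit positions. Nearby rows collide with high probability."""
    def h(row_bits):
        return tuple(row_bits[p] for p in positions)
    return h

random.seed(0)  # deterministic for the demo
NUM_BITS = 16
hash_fn = bit_sampling_hash(random.sample(range(NUM_BITS), 4))

# Incoming rows as bit vectors (e.g., one bit per indexed attribute value).
rows = [tuple(random.randint(0, 1) for _ in range(NUM_BITS))
        for _ in range(100)]

# Buffer rows into buckets; rows sharing a bucket agree on the sampled bits,
# so flushing bucket-by-bucket yields longer runs than insertion order would.
buckets = defaultdict(list)
for row in rows:
    buckets[hash_fn(row)].append(row)

reordered = [row for key in sorted(buckets) for row in buckets[key]]
assert sorted(reordered) == sorted(rows)  # a permutation: nothing lost
```

Because hashing each row is constant-time, the reordering adds almost no cost on the insertion path, which is why it suits a million-records-per-second workload.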
{ "cite_N": [ "@cite_25" ], "mid": [ "1502916507" ], "abstract": [ "The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the \"curse of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50)." ] }
1207.1878
2952481338
Network virtualization is a technology of running multiple heterogeneous network architectures on a shared substrate network. One of the crucial components in network virtualization is virtual network embedding, which provides a way to allocate physical network resources (CPU and link bandwidth) to virtual network requests. Despite significant research efforts on virtual network embedding in wired and cellular networks, little attention has been paid to that in wireless multi-hop networks, which is becoming more important due to its rapid growth and the need to share these networks among different business sectors and users. In this paper, we first study the root causes of new challenges of virtual network embedding in wireless multi-hop networks, and propose a new embedding algorithm that efficiently uses the resources of the physical substrate network. We examine our algorithm's performance through extensive simulations under various scenarios. Due to lack of competitive algorithms, we compare the proposed algorithm to five other algorithms, mainly borrowed from wired embedding or artificially made by us, partially with or without the key algorithmic ideas to assess their impacts.
Recently, there has been research interest regarding virtual network embedding over wired networks @cite_8 @cite_3 @cite_9 @cite_13 @cite_2 @cite_18 and/or embedding in single-hop cellular networks @cite_4 , where the embedding problem turns out to be computationally intractable to solve optimally, and thus various heuristics have been proposed. Multi-hop networks involve more challenging issues, mainly due to the complex interference among links and its severe coupling with network topology. These challenges require a new design of embedding algorithms.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_9", "@cite_3", "@cite_2", "@cite_13" ], "mid": [ "2152415706", "", "2151268583", "2142547489", "755306271", "2161965229", "2114298221" ], "abstract": [ "Assigning the resources of a virtual network to the components of a physical network, called Virtual Network Mapping, plays a central role in network virtualization. Existing approaches use classical heuristics like simulated annealing or attempt a two stage solution by solving the node mapping in a first stage and doing the link mapping in a second stage. The contribution of this paper is a Virtual Network Mapping (VNM) algorithm based on subgraph isomorphism detection: it maps nodes and links during the same stage. Our experimental evaluations show that this method results in better mappings and is faster than the two stage approach, especially for large virtual networks with high resource consumption which are hard to map.", "", "The routing infrastructure of the Internet has become resistant to fundamental changes and the use of overlay networks has been proposed to provide additional flexibility and control. One of the most prominent configurable components of an overlay network is its topology, which can be dynamically reconfigured to accommodate communication requirements that vary over time. In this paper, we study the problem of determining dynamic topology reconfiguration for service overlay networks with dynamic communication requirement, and the ideal goal is to find the optimal reconfiguration policies that can minimize the potential overall cost of using an overlay. We start by observing the properties of the optimal reconfiguration policies through studies on small systems and find structures in the optimal reconfiguration policies. Based on these observations, we propose heuristic methods for constructing different flavors of reconfiguration policies, i.e., never-change policy, always-change policy and cluster-based policies, to mimic and approximate the optimal ones. Our experiments show that our policy construction methods are applicable to large systems and generate policies with good performance. Our work does not only provide solutions to practical overlay topology design problems, but also provides theoretical evidence for the advantage of overlay network due to its configurability.", "Recent proposals for network virtualization provide a promising way to overcome the Internet ossification. The key idea of network virtualization is to build a diversified Internet to support a variety of network services and architectures through a shared substrate. A major challenge in network virtualization is the assigning of substrate resources to virtual networks (VN) efficiently and on-demand. This paper focuses on two versions of the VN assignment problem: VN assignment without reconfiguration (VNA-I) and VN assignment with reconfiguration (VNA-II). For the VNA-I problem, we develop a basic scheme as a building block for all other advanced algorithms. Subdividing heuristics and adaptive optimization strategies are then presented to further improve the performance. For the VNA-II problem, we develop a selective VN reconfiguration scheme that prioritizes the reconfiguration of the most critical VNs. Extensive simulation experiments demonstrate that the proposed algorithms can achieve good performance under a wide range of network conditions.", "Virtualization has been proposed as a vehicle for overcoming the growing problem of internet ossification [1]. This paper studies the problem of mapping diverse virtual networks onto a common physical substrate. In particular, we develop a method for mapping a virtual network onto a substrate network in a cost-efficient way, while allocating sufficient capacity to virtual network links to ensure that the virtual network can handle any traffic pattern allowed by a general set of traffic constraints. Our approach attempts to find the best topology in a family of backbone-star topologies, in which a subset of nodes constitute the backbone, and the remaining nodes each connect to the nearest backbone node. We investigate the relative cost-effectiveness of different backbone topologies on different substrate networks, under a wide range of traffic conditions. Specifically, we study how the most cost-effective topology changes as the tightness of pairwise traffic constraints and the constraints on traffic locality are varied. In general, we find that as pairwise traffic constraints are relaxed, the least-cost backbone topology becomes increasingly “tree-like”. We also find that the cost of the constructed virtual networks is usually no more than 1.5 times a computed lower bound on the network cost and that the quality of solutions improves as the traffic locality gets weaker.", "Recently network virtualization has been proposed as a promising way to overcome the current ossification of the Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. A major challenge in this respect is the VN embedding problem that deals with efficient mapping of virtual nodes and virtual links onto the substrate network resources. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms which had clear separation between the node mapping and the link mapping phases. This paper proposes VN embedding algorithms with better coordination between the two phases. We formulate the VN embedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program, and devise two VN embedding algorithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. Simulation experiments show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run.", "Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding--mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration, and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks." ] }
1207.1878
2952481338
Network virtualization is a technology of running multiple heterogeneous network architecture on a shared substrate network. One of the crucial components in network virtualization is virtual network embedding, which provides a way to allocate physical network resources (CPU and link bandwidth) to virtual network requests. Despite significant research efforts on virtual network embedding in wired and cellular networks, little attention has been paid to that in wireless multi-hop networks, which is becoming more important due to its rapid growth and the need to share these networks among different business sectors and users. In this paper, we first study the root causes of new challenges of virtual network embedding in wireless multi-hop networks, and propose a new embedding algorithm that efficiently uses the resources of the physical substrate network. We examine our algorithm's performance through extensive simulations under various scenarios. Due to lack of competitive algorithms, we compare the proposed algorithm to five other algorithms, mainly borrowed from wired embedding or artificially made by us, partially with or without the key algorithmic ideas to assess their impacts.
Related work on embedding in wired networks mainly focuses on addressing computational challenges by restricting the problem space in different dimensions or by proposing heuristic algorithms @cite_8 @cite_3 @cite_12 . For example, only bandwidth requirements are considered in @cite_8 @cite_3 , or all VN requests are assumed to be given in advance @cite_3 @cite_9 . The authors in @cite_8 @cite_3 @cite_9 also considered a substrate network with infinite capacity, accepting all incoming VN requests. For embedding problems over an SN with limited resources, multicommodity-flow-based algorithms are proposed in @cite_13 @cite_2 , where the node embedding methods also take their relation to the link embedding stage into account, as in our work. However, all algorithms in @cite_3 @cite_9 @cite_13 @cite_2 separate the node and link embedding processes, i.e., all VN nodes are embedded before any VN link is embedded. A single-stage wired embedding algorithm is also proposed in @cite_18 , which tries to find a subgraph isomorphism of the VN via a backtracking method. There, the algorithm imposes a limit on the length of the substrate paths that may embed virtual links and checks feasibility whenever a new VN link is embedded: if the embedding is infeasible, the algorithm backtracks to the last feasible embedding.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_9", "@cite_3", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "2152415706", "2151268583", "2142547489", "755306271", "2161965229", "2114298221", "" ], "abstract": [ "Assigning the resources of a virtual network to the components of a physical network, called Virtual Network Mapping, plays a central role in network virtualization. Existing approaches use classical heuristics like simulated annealing or attempt a two stage solution by solving the node mapping in a first stage and doing the link mapping in a second stage. The contribution of this paper is a Virtual Network Mapping (VNM) algorithm based on subgraph isomorphism detection: it maps nodes and links during the same stage. Our experimental evaluations show that this method results in better mappings and is faster than the two stage approach, especially for large virtual networks with high resource consumption which are hard to map.", "The routing infrastructure of the Internet has become resistant to fundamental changes and the use of overlay networks has been proposed to provide additional flexibility and control. One of the most prominent configurable components of an overlay network is its topology, which can be dynamically reconfigured to accommodate communication requirements that vary over time. In this paper, we study the problem of determining dynamic topology reconfiguration for service overlay networks with dynamic communication requirement, and the ideal goal is to find the optimal reconfiguration policies that can minimize the potential overall cost of using an overlay. We start by observing the properties of the optimal reconfiguration policies through studies on small systems and find structures in the optimal reconfiguration policies. 
Based on these observations, we propose heuristic methods for constructing different flavors of reconfiguration policies, i.e., never-change policy, always-change policy and cluster-based policies, to mimic and approximate the optimal ones. Our experiments show that our policy construction methods are applicable to large systems and generate policies with good performance. Our work does not only provide solutions to practical overlay topology design problems, but also provides theoretical evidence for the advantage of overlay network due to its configurability.", "Recent proposals for network virtualization provide a promising way to overcome the Internet ossification. The key idea of network virtualization is to build a diversified Internet to support a variety of network services and architectures through a shared substrate. A major challenge in network virtualization is the assigning of substrate resources to virtual networks (VN) efficiently and on-demand. This paper focuses on two versions of the VN assignment problem: VN assignment without reconfiguration (VNA-I) and VN assignment with reconfiguration (VNA-II). For the VNA-I problem, we develop a basic scheme as a building block for all other advanced algorithms. Subdividing heuristics and adaptive optimization strategies are then presented to further improve the performance. For the VNA-II problem, we develop a selective VN reconfiguration scheme that prioritizes the reconfiguration of the most critical VNs. Extensive simulation experiments demonstrate that the proposed algorithms can achieve good performance under a wide range of network conditions.", "Virtualization has been proposed as a vehicle for overcoming the growing problem of internet ossification [1]. This paper studies the problem of mapping diverse virtual networks onto a common physical substrate. 
In particular, we develop a method for mapping a virtual network onto a substrate network in a cost-efficient way, while allocating sufficient capacity to virtual network links to ensure that the virtual network can handle any traffic pattern allowed by a general set of traffic constraints. Our approach attempts to find the best topology in a family of backbone-star topologies, in which a subset of nodes constitute the backbone, and the remaining nodes each connect to the nearest backbone node. We investigate the relative cost-effectiveness of different backbone topologies on different substrate networks, under a wide range of traffic conditions. Specifically, we study how the most cost-effective topology changes as the tightness of pairwise traffic constraints and the constraints on traffic locality are varied. In general, we find that as pairwise traffic constraints are relaxed, the least-cost backbone topology becomes increasingly “tree-like”. We also find that the cost of the constructed virtual networks is usually no more than 1.5 times a computed lower bound on the network cost and that the quality of solutions improves as the traffic locality gets weaker.", "Recently network virtualization has been proposed as a promising way to overcome the current ossification of the Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. A major challenge in this respect is the VN embedding problem that deals with efficient mapping of virtual nodes and virtual links onto the substrate network resources. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms which had clear separation between the node mapping and the link mapping phases. This paper proposes VN embedding algorithms with better coordination between the two phases. We formulate the VN em- bedding problem as a mixed integer program through substrate network augmentation. 
We then relax the integer constraints to obtain a linear program, and devise two VN embedding algo- rithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. Simulation experiments show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run.", "Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding--mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration,and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks", "" ] }
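The surrounding related-work passages repeatedly describe the classic two-stage virtual network embedding pipeline: map virtual nodes to substrate nodes first, then map virtual links onto substrate paths. The following is a minimal illustrative sketch in Python, not any of the cited algorithms (D-ViNE, R-ViNE, etc.); all function and variable names are hypothetical, and real embedders additionally track link capacities, revenue and acceptance ratio.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances and predecessors from src.
    adj maps a node to a list of (neighbor, cost) pairs."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def embed_vn(sub_cpu, sub_adj, vn_cpu, vn_links):
    """Two-stage greedy embedding sketch: (1) map every virtual node to the
    free substrate node with the most CPU that satisfies its demand,
    (2) map every virtual link onto a substrate shortest path.
    Returns (node_mapping, link_paths), or None if the request is rejected."""
    mapping, free = {}, dict(sub_cpu)
    for vnode in sorted(vn_cpu, key=vn_cpu.get, reverse=True):
        candidates = [s for s, c in free.items() if c >= vn_cpu[vnode]]
        if not candidates:
            return None  # node stage failed -> reject the VN request
        best = max(candidates, key=free.get)
        mapping[vnode] = best
        del free[best]  # one virtual node per substrate node
    paths = {}
    for a, b in vn_links:
        dist, prev = dijkstra(sub_adj, mapping[a])
        if mapping[b] not in dist:
            return None  # link stage failed
        hop, path = mapping[b], [mapping[b]]
        while hop != mapping[a]:
            hop = prev[hop]
            path.append(hop)
        paths[(a, b)] = path[::-1]
    return mapping, paths
```

Because the two stages are decoupled, a node mapping that looks good in isolation can force long substrate paths in the link stage; this is exactly the lack of coordination that the D-ViNE/R-ViNE work criticizes.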
1206.6720
1991439166
We give a generic divide-and-conquer approach for constructing collusion-resistant probabilistic dynamic traitor tracing schemes with larger alphabets from schemes with smaller alphabets. This construction offers a linear tradeoff between the alphabet size and the codelength. In particular, we show that applying our results to the binary dynamic Tardos scheme of leads to schemes that are shorter by a factor equal to half the alphabet size. Asymptotically, these codelengths correspond, up to a constant factor, to the fingerprinting capacity for static probabilistic schemes. This gives a hierarchy of probabilistic dynamic traitor tracing schemes, and bridges the gap between the low bandwidth, high codelength scheme of and the high bandwidth, low codelength scheme of Fiat and Tassa.
Besides constructions of traitor tracing schemes, several papers have also investigated theoretical bounds on the codelength needed to catch a certain number of colluders. So far, these have all focused on probabilistic static schemes. Tardos @cite_9 showed that his codelength is optimal up to a constant factor. Huang and Moulin @cite_0 gave the exact capacity of the binary fingerprinting game, by showing that for large @math , a codelength of @math is both necessary and sufficient. This was then extended to the @math -ary setting independently by Boesten and Škorić @cite_13 and Huang and Moulin @cite_11 , showing that the @math -ary capacity corresponds to @math bits of information, or a codelength of @math symbols from a @math -ary alphabet.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_13", "@cite_11" ], "mid": [ "2083594366", "2031722321", "1546855683", "2043584522" ], "abstract": [ "We study a fingerprinting game in which the number of colluders and the collusion channel are unknown. The encoder embeds fingerprints into a host sequence and provides the decoder with the capability to trace back pirated copies to the colluders. Fingerprinting capacity has recently been derived as the limit value of a sequence of maximin games with mutual information as their payoff functions. However, these games generally do not admit saddle-point solutions and are very hard to solve numerically. Here under the so-called Boneh-Shaw marking assumption, we reformulate the capacity as the value of a single two-person zero-sum game, and show that it is achieved by a saddle-point solution. If the maximal coalition size is k and the fingerprinting alphabet is binary, we show that capacity decays quadratically with k. Furthermore, we prove rigorously that the asymptotic capacity is 1/(2 k^2 ln 2) and we confirm our earlier conjecture that Tardos' choice of the arcsine distribution asymptotically maximizes the mutual information payoff function while the interleaving attack minimizes it. Along with the asymptotics, numerical solutions to the game for small k are also presented.", "We construct binary codes for fingerprinting digital documents. Our codes for n users that are ε-secure against c pirates have length O(c^2 log(n/ε)). This improves the codes proposed by Boneh and Shaw [1998] whose length is approximately the square of this length. The improvement carries over to works using the Boneh--Shaw code as a primitive, for example, to the dynamic traitor tracing scheme of Tassa [2005]. By proving matching lower bounds we establish that the length of our codes is best within a constant factor for reasonable error probabilities. 
This lower bound generalizes the bound found independently by [2003] that applies to a limited class of codes. Our results also imply that randomized fingerprint codes over a binary alphabet are as powerful as over an arbitrary alphabet and the equal strength of two distinct models for fingerprinting.", "We compute the channel capacity of non-binary fingerprinting under the Marking Assumption, in the limit of large coalition size c. The solution for the binary case was found by Huang and Moulin. They showed that asymptotically, the capacity is 1/(2 c^2 ln 2), the interleaving attack is optimal and the arcsine distribution is the optimal bias distribution. In this paper we prove that the asymptotic capacity for general alphabet size q is (q - 1)/(2 c^2 ln q). Our proof technique does not reveal the optimal attack or bias distribution. The fact that the capacity is an increasing function of q shows that there is a real gain in going to non-binary alphabets.", "Fingerprinting capacity has recently been derived as the value of a two-person zero-sum game. In this work, we study fingerprinting capacity games with k pirates under the combined digit model. For small k, capacities along with optimal strategies for both players of the game are obtained explicitly. For large k, we extend our earlier asymptotic analysis for the binary alphabet to this general model and show that capacity is asymptotic to A/k^2 where the constant A is identified. Saddle-point solutions to the functional maximin game are obtained using methods of variational calculus." ] }
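The asymptotic capacity formulas quoted in the abstracts above — 1/(2 c^2 ln 2) for a binary alphabet and (q - 1)/(2 c^2 ln q) for a q-ary one — can be checked numerically. A small sketch (function name is illustrative, not from any cited paper):

```python
from math import log

def asymptotic_capacity(q, c):
    """Asymptotic fingerprinting capacity (q - 1) / (2 c^2 ln q) per symbol,
    for alphabet size q and coalition size c, as stated in the cited
    abstracts; valid in the large-coalition limit."""
    return (q - 1) / (2 * c * c * log(q))

# The binary case reduces to 1 / (2 c^2 ln 2).
assert abs(asymptotic_capacity(2, 10) - 1 / (200 * log(2))) < 1e-12

# Capacity increases with q: larger alphabets carry more information
# per symbol, which is the "real gain" the q-ary abstract refers to,
# and what enables the shorter codes of the divide-and-conquer schemes.
caps = [asymptotic_capacity(q, 10) for q in (2, 4, 16)]
assert caps == sorted(caps)
```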
1206.6646
2963354545
The multi-criteria decision making, which is possible with the advent of skyline queries, has been applied in many areas. Though most of the existing research is concerned with only a single relation, several real world applications require finding the skyline set of records over multiple relations. Consequently, the join operation over skylines, where the preferences are local to each relation, has been proposed. In many of those cases, however, the join often involves performing aggregate operations among some of the attributes from the different relations. In this paper, we introduce such queries as “aggregate skyline join queries”.
The maximum vector problem, or Pareto curve @cite_1 , from the field of computational geometry has been imported to databases, forming the skyline query @cite_3 . After the first skyline algorithm was proposed in @cite_1 , many algorithms were devised by exploring the properties of skylines. Representative non-indexed algorithms are SFS @cite_6 and LESS @cite_14 . Using index structures, algorithms such as NN @cite_0 and BBS @cite_11 have been proposed.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "1517348552", "2049864887", "2134644026", "", "2121612399", "2096547754" ], "abstract": [ "Finding the maximals in a collection of vectors is relevant to many applications. The maximal set is related to the convex hull---and hence, linear optimization---and nearest neighbors. The maximal vector problem has resurfaced with the advent of skyline queries for relational databases and skyline algorithms that are external and relationally well behaved. The initial algorithms proposed for maximals are based on divide-and-conquer. These established good average and worst case asymptotic running times, showing it to be O(n) average-case, where n is the number of vectors. However, they are not amenable to externalizing. We prove, furthermore, that their performance is quite bad with respect to the dimensionality, k, of the problem. We demonstrate that the more recent external skyline algorithms are actually better behaved, although they do not have as good an apparent asymptotic complexity. We introduce a new external algorithm, LESS, that combines the best features of these, experimentally evaluate its effectiveness and improvement over the field, and prove its average-case running time is O(kn).", "H. T. KUNG, Carnegie-Mellon University, Pittsburgh, Pennsylvania; F. LUCCIO, University of Pisa, Pisa, Italy; F. P. PREPARATA, University of Illinois, Urbana, Illinois. ABSTRACT. Let U1, U2, ..., Ud be totally ordered sets and let V be a set of n d-dimensional vectors in U1 × U2 × ... × Ud. A partial ordering is defined on V in a natural way. The problem of finding all maximal elements of V with respect to the partial ordering is considered. The computational complexity of the problem is defined to be the number of required comparisons of two components and is denoted by Cd(n).
It is trivial that C1(n) = n - 1 and Cd(n) >= ⌈log2 n!⌉ for d >= 2.", "The skyline, or Pareto, operator selects those tuples that are not dominated by any others. Extending relational systems with the skyline operator would offer a basis for handling preference queries. Good algorithms are needed for skyline, however, to make this efficient in a relational setting. We propose a skyline algorithm, SFS, based on presorting that is general, for use with any skyline query, efficient, and well behaved in a relational setting.", "", "Skyline queries ask for a set of interesting points from a potentially large set of data points. If we are traveling, for instance, a restaurant might be interesting if there is no other restaurant which is nearer, cheaper, and has better food. Skyline queries retrieve all such interesting restaurants so that the user can choose the most promising one. In this paper, we present a new online algorithm that computes the Skyline. Unlike most existing algorithms that compute the Skyline in a batch, this algorithm returns the first results immediately, produces more and more results continuously, and allows the user to give preferences during the running time of the algorithm so that the user can control what kind of results are produced next (e.g., rather cheap or rather near restaurants).", "The skyline of a set of d-dimensional points contains the points that are not dominated by any other point on all dimensions. Skyline computation has recently received considerable attention in the database community, especially for progressive (or online) algorithms that can quickly return the first skyline points without having to read the entire data file. Currently, the most efficient algorithm is NN (nearest neighbors), which applies the divide-and-conquer framework on datasets indexed by R-trees. 
Although NN has some desirable features (such as high speed for returning the initial skyline points, applicability to arbitrary data distributions and dimensions), it also presents several inherent disadvantages (need for duplicate elimination if d>2, multiple accesses of the same node, large space overhead). In this paper we develop BBS (branch-and-bound skyline), a progressive algorithm also based on nearest neighbor search, which is I/O optimal, i.e., it performs a single access only to those R-tree nodes that may contain skyline points. Furthermore, it does not retrieve duplicates and its space overhead is significantly smaller than that of NN. Finally, BBS is simple to implement and can be efficiently applied to a variety of alternative skyline queries. An analytical and experimental comparison shows that BBS outperforms NN (usually by orders of magnitude) under all problem instances." ] }
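The presorting idea behind the SFS/LESS family referenced above can be shown in a short sketch: sort the tuples by a monotone scoring function (here the coordinate sum) so that no tuple can be dominated by a later one, then keep each tuple that is not dominated by an already-kept skyline tuple. This is only a toy in-memory version under a maximising convention, not the external, relationally well-behaved algorithms of the citations.

```python
def dominates(a, b):
    """a dominates b if a is >= b in every dimension
    and strictly greater in at least one (maximising convention)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline_sfs(points):
    """Sort-Filter-Skyline sketch. After sorting by the (monotone) sum
    in descending order, any dominator of a point precedes it, so a
    single pass with dominance tests against kept points suffices."""
    result = []
    for p in sorted(points, key=sum, reverse=True):
        if not any(dominates(s, p) for s in result):
            result.append(p)
    return result
```

The key property is that dominance implies a strictly larger sum, so dominance tests only ever need to look at already-emitted points; this is also what makes SFS-style algorithms progressive.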
1206.6646
2963354545
The multi-criteria decision making, which is possible with the advent of skyline queries, has been applied in many areas. Though most of the existing research is concerned with only a single relation, several real world applications require finding the skyline set of records over multiple relations. Consequently, the join operation over skylines, where the preferences are local to each relation, has been proposed. In many of those cases, however, the join often involves performing aggregate operations among some of the attributes from the different relations. In this paper, we introduce such queries as “aggregate skyline join queries”.
The authors of @cite_7 proposed the multi-relational skyline operator and also designed algorithms to find such skylines over multiple relations. The authors of @cite_2 coined the term “skyline join” in the context of distributed environments; they extended SaLSa @cite_4 and also proposed an iterative algorithm that prunes the search space in each step. ASJQ queries differ in that they extend the skyline join proposed in @cite_7 with aggregate operations performed during the join. This renders the existing techniques inapplicable, as they work only on the local attributes.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_2" ], "mid": [ "1972933272", "2170188482", "2115120098" ], "abstract": [ "Skyline queries compute the set of Pareto-optimal tuples in a relation, i.e., those tuples that are not dominated by any other tuple in the same relation. Although several algorithms have been proposed for efficiently evaluating skyline queries, they either require to extend the relational server with specialized access methods (which is not always feasible) or have to perform the dominance tests on all the tuples in order to determine the result. In this paper we introduce SaLSa (Sort and Limit Skyline algorithm), which exploits the sorting machinery of a relational engine to order tuples so that only a subset of them needs to be examined for computing the skyline result. This makes SaLSa particularly attractive when skyline queries are executed on top of systems that do not understand skyline semantics or when the skyline logic runs on clients with limited power and/or bandwidth.", "We propose to extend database systems by a Skyline operation. This operation filters out a set of interesting points from a potentially large set of data points. A point is interesting if it is not dominated by any other point. For example, a hotel might be interesting for somebody traveling to Nassau if no other hotel is both cheaper and closer to the beach. We show how SQL can be extended to pose Skyline queries, present and evaluate alternative algorithms to implement the Skyline operation, and show how this operation can be combined with other database operations, e.g., join.", "The database research community has recently recognized the usefulness of skyline query. As an extension of existing database operator, the skyline query is valuable for multi-criteria decision making. However, current research tends to assume that the skyline operator is applied to one table which is not true for many applications on Web databases. 
In Web databases, tables are distributed in different sites, and a skyline query may involve attributes of multiple tables. In this paper, we address the problem of processing skyline queries on multiple tables in a distributed environment. We call the new operator skyline-join, as it is a hybrid of skyline and join operations. We propose two efficient approaches to process skyline-join queries which can significantly reduce the communication cost and processing time. Experiments are conducted and results show that our approaches are efficient for distributed skyline-join queries." ] }
1206.6646
2963354545
The multi-criteria decision making, which is possible with the advent of skyline queries, has been applied in many areas. Though most of the existing research is concerned with only a single relation, several real world applications require finding the skyline set of records over multiple relations. Consequently, the join operation over skylines, where the preferences are local to each relation, has been proposed. In many of those cases, however, the join often involves performing aggregate operations among some of the attributes from the different relations. In this paper, we introduce such queries as “aggregate skyline join queries”.
There are various join algorithms, such as nested-loop join, indexed nested-loop join, merge-join and hash-join @cite_9 . Nested-loop joins can be used regardless of the join condition. The other join techniques are more efficient, but can handle only simple join conditions, such as natural joins or equi-joins. Any of these join algorithms that is applicable to the given query can be used with the ASJQ algorithms.
{ "cite_N": [ "@cite_9" ], "mid": [ "1512840853" ], "abstract": [ "From the Publisher: This acclaimed revision of a classic database systems text offers a complete background in the basics of database design, languages, and system implementation. It provides the latest information combined with real-world examples to help readers master concepts. All concepts are presented in a technically complete yet easy-to-understand style with notations kept to a minimum. A running example of a bank enterprise illustrates concepts at work. To further optimize comprehension, figures and examples, rather than proofs, portray concepts and anticipate results." ] }
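Of the join algorithms listed above, the hash join is the standard example of a method restricted to equi-joins: it builds a hash table over one relation's join key and probes it with the other relation, which is exactly why it cannot handle arbitrary join conditions the way a nested-loop join can. A minimal in-memory sketch (function name hypothetical, rows modelled as dictionaries):

```python
from collections import defaultdict

def hash_join(left, right, key):
    """In-memory hash equi-join on a shared key name: build a hash
    table over `left`, then probe it with each row of `right`.
    Only equality predicates are supported; for arbitrary join
    conditions a nested-loop join remains the fallback."""
    table = defaultdict(list)
    for row in left:          # build phase
        table[row[key]].append(row)
    # probe phase: concatenate matching rows (right values win on clashes)
    return [{**l, **r} for r in right for l in table.get(r[key], [])]
```

A real engine would build on the smaller relation and spill partitions to disk; the sketch keeps everything in memory for clarity.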
1206.5959
2953267149
We study a natural online variant of the replacement path problem. The replacement path (RP) problem asks to find, for a given graph @math , two designated vertices @math and a shortest @math - @math path @math in @math , a replacement path @math for every edge @math on the path @math . The replacement path @math is simply a shortest @math - @math path in the graph which avoids the edge @math . We adapt this problem to deal with the natural scenario that the edge which failed is not known at the time of solution implementation. Instead, our problem assumes that the identity of the failed edge only becomes available when the routing mechanism tries to cross the edge. This situation is motivated by applications in distributed networks, where information about recent changes in the network is only stored locally, and fault-tolerant optimization, where an adversary tries to delay the discovery of the materialized scenario as much as possible. Consequently, we define the online replacement path (ORP) problem, which asks to find a nominal @math - @math path @math and detours @math for every edge on the path @math , such that the worst-case arrival time at the destination is minimized. Our main contribution is a label-setting algorithm, which solves the problem in undirected graphs in time @math and linear space for all sources and a single destination. We also present algorithms for extensions of the model to any bounded number of failed edges.
The complexity of the RP problem for undirected graphs is well understood. Malik, Mittal and Gupta @cite_17 give a simple @math algorithm. A mistake in this paper was later corrected by Bar-Noy, Khuller and Schieber @cite_7 . This running time is asymptotically the same as a single source shortest path computation. Nardelli, Proietti and Widmayer @cite_10 later provided an algorithm with the same complexity for the variant of RP, in which vertices are removed instead of edges. The same authors give efficient algorithms for finding detour-critical edges for a given shortest path in @cite_20 @cite_23 .
{ "cite_N": [ "@cite_7", "@cite_23", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2139610344", "1978228527", "1564217339", "1980494375", "1980943820" ], "abstract": [ "", "Abstract. Let P_G(r,s) denote a shortest path between two nodes r and s in an undirected graph G=(V,E) such that |V|=n and |E|=m and with a positive real length w(e) associated with any e∈E. In this paper we focus on the problem of finding an edge e* ∈ P_G(r,s) whose removal is such that the length of P_{G-e*}(r,s) is maximum, where G-e* = (V, E \ {e*}). Such an edge is known as the most vital edge of the path P_G(r,s). We will show that this problem can be solved in O(m·α(m,n)) time, where α is the functional inverse of the Ackermann function, thus improving on the previous O(m + n log n) time bound.", "In an undirected, 2-node connected graph G=(V,E) with positive real edge lengths, the distance between any two nodes r and s is the length of a shortest path between r and s in G. The removal of a node and its incident edges from G may increase the distance from r to s. A most vital node of a given shortest path from r to s is a node (other than r and s) whose removal from G results in the largest increase of the distance from r to s. In the past, the problem of finding a most vital node of a given shortest path has been studied because of its implications in network management, where it is important to know in advance which component failure will affect network efficiency the most. In this paper, we show that this problem can be solved in O(m + n log n) time and O(m) space, where m and n denote the number of edges and the number of nodes in G.", "Abstract. Let P_G(r,s) denote a shortest path between two nodes r and s in an undirected graph G with nonnegative edge weights. A detour at a node u ∈ P_G(r,s) = 〈r, ..., u, v, ..., s〉 is defined as a shortest path P_{G-e}(u,s) from u to s which does not make use of (u,v). 
In this paper we focus on the problem of finding an edge e = (u,v) ∈ P_G(r,s) whose removal produces a detour at node u such that the length of P_{G-e}(u,s) minus the length of P_G(u,s) is maximum. We call such an edge a detour-critical edge. We will show that this problem can be solved in O(m + n log n) time, where n and m denote the number of nodes and edges in the graph, respectively.", "The k most vital arcs in a network are those whose removal from the network results in the greatest increase in the shortest distance between two specified nodes. An exact algorithm is proposed to determine the k most vital arcs. Furthermore, an algorithm of time complexity equal to that of Dijkstra's algorithm for the shortest path problem is developed to solve the single most vital arc problem." ] }
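The replacement path problem discussed above has an obvious naive baseline that makes the definition concrete: compute the shortest s-t path once, then rerun a shortest-path computation for each of its edges with that edge removed. The sketch below deliberately uses this one-Dijkstra-per-edge approach, which is far slower than the O(m + n log n) algorithms of the cited papers; all names are illustrative.

```python
import heapq

def dijkstra(adj, s, banned=frozenset()):
    """Shortest-path distances/predecessors from s, skipping banned edges.
    adj maps a node to (neighbor, weight) pairs; the graph is undirected,
    so an edge is banned in both orientations."""
    dist, prev, pq = {s: 0}, {}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if (u, v) in banned or (v, u) in banned:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def path_to(prev, s, t):
    p, cur = [t], t
    while cur != s:
        cur = prev[cur]
        p.append(cur)
    return p[::-1]

def replacement_paths(adj, s, t):
    """Naive RP baseline: for each edge (u, v) on the shortest s-t path,
    report the length of a shortest s-t path avoiding (u, v)
    (None if t becomes unreachable). One Dijkstra per path edge."""
    dist, prev = dijkstra(adj, s)
    base = path_to(prev, s, t)
    detour_len = {}
    for u, v in zip(base, base[1:]):
        d_e, _ = dijkstra(adj, s, banned={(u, v)})
        detour_len[(u, v)] = d_e.get(t)
    return base, detour_len
```

In the online (ORP) variant studied in the paper, the detour must instead start at the endpoint u where the failure is discovered, which is why simply precomputing these s-rooted replacement paths is not sufficient there.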
1206.5959
2953267149
We study a natural online variant of the replacement path problem. The replacement path (RP) problem asks to find, for a given graph @math , two designated vertices @math and a shortest @math - @math path @math in @math , a replacement path @math for every edge @math on the path @math . The replacement path @math is simply a shortest @math - @math path in the graph which avoids the edge @math . We adapt this problem to deal with the natural scenario that the edge which failed is not known at the time of solution implementation. Instead, our problem assumes that the identity of the failed edge only becomes available when the routing mechanism tries to cross the edge. This situation is motivated by applications in distributed networks, where information about recent changes in the network is only stored locally, and fault-tolerant optimization, where an adversary tries to delay the discovery of the materialized scenario as much as possible. Consequently, we define the online replacement path (ORP) problem, which asks to find a nominal @math - @math path @math and detours @math for every edge on the path @math , such that the worst-case arrival time at the destination is minimized. Our main contribution is a label-setting algorithm, which solves the problem in undirected graphs in time @math and linear space for all sources and a single destination. We also present algorithms for extensions of the model to any bounded number of failed edges.
Another problem which bears resemblance to ORP is the Stochastic Shortest Path with Recourse problem (SSPR), studied by Andreatta and Romeo @cite_18 . This problem can be seen as the stochastic analogue of ORP.
{ "cite_N": [ "@cite_18" ], "mid": [ "2057742448" ], "abstract": [ "This paper considers Stochastic Shortest Path (SSP) problems in probabilistic networks. A variety of approaches have already been proposed in the literature. However, unlike in the deterministic case, they are related to distinct models, interpretations and applications. We have chosen to look at the case where detours from the original path must be taken whenever the “first-choice” arc fails. The main results obtained include the proof of some counterintuitive facts (e.g., the SSP may contain a cycle), the proof of the validity of applying stochastic programming to this problem and the proof that the computational complexity of a particular SSP problem is polynomial." ] }
1206.6145
2952900517
In two-way networks, nodes act as both sources and destinations of messages. This allows for "adaptation" at or "interaction" between the nodes - a node's channel inputs may be functions of its message(s) and previously received signals. How to best adapt is key to two-way communication, rendering it challenging. However, examples exist of point-to-point channels where adaptation is not beneficial from a capacity perspective. We ask whether analogous examples exist for multi-user two-way networks. We first consider deterministic two-way channel models: the binary modulo-2 addition channel and a generalization thereof, and the linear deterministic channel. For these deterministic models we obtain the capacity region for the two-way multiple access broadcast channel, the two-way Z channel and the two-way interference channel (IC). In all cases we permit all nodes to adapt channel inputs to past outputs (except for portions of the linear deterministic two-way IC where we only permit 2 of the 4 nodes to fully adapt). However, we show that this adaptation is useless from a capacity region perspective and capacity is achieved by strategies where the channel inputs at each use do not adapt to previous inputs. Finally, we consider the Gaussian two-way IC, and show that partial adaptation is useless when the interference is very strong. In the strong and weak interference regimes, we show that the non-adaptive Han and Kobayashi scheme utilized in parallel in both directions achieves to within a constant gap for the symmetric rate of the fully (some regimes) or partially (remaining regimes) adaptive models. The central technical contribution is the derivation of new, computable outer bounds which allow for adaptation. Inner bounds follow from non-adaptive achievability schemes of the corresponding one-way channel models.
The second channel model we consider is the two-way Z channel, with 6 messages. The one-way Z channel (with 3 messages, rather than the Z interference channel with 2 messages) was first studied in @cite_12 , in which a general outer bound and a matching inner bound for a special class of degraded Z channels are obtained. The capacity region of the one-way deterministic Z channel, with invertibility constraints similar in flavor to those in @cite_40 , is found in @cite_31 , which will be of use here.
{ "cite_N": [ "@cite_40", "@cite_31", "@cite_12" ], "mid": [ "2087258384", "2137158410", "2140960960" ], "abstract": [ "The capacity region of a class of deterministic discrete memoryless interference channels is established. In this class of channels the outputs Y_1 and Y_2 are (deterministic) functions of the inputs X_1 and X_2 such that H(Y_1|X_1)=H(V_2) and H(Y_2|X_2)=H(V_1) for all product probability distributions on X_1 X_2, where V_1 is a function of X_1 and V_2 a function of X_2. The capacity region for the case in which V_2 ≡ 0 and Y_1 depends randomly on X_1 is also obtained and illustrated with an example.", "We characterize the capacity region of a class of deterministic Z channels. We show that, interestingly, Han-Kobayashi-type rate-splitting is not required in the optimal achievable scheme for the class of channels considered.", "A two-transmitter, two-receiver channel where independent data is sent on each communication link of the system is considered. We consider a three-link system, termed the \"Z\" channel, in which one transmitter is connected to both receivers while the other transmitter is only connected to one of the receivers. Thus, the \"Z\" channel has a three-dimensional capacity region. We characterize the capacity region of a special class of degraded \"Z\" channels and establish an achievable region for the Gaussian \"Z\" channels. Finally, we use genie-aided techniques previously used for the interference and broadcast channels to obtain an outer bound for general \"Z\" channels." ] }
1206.5421
2952994695
This paper studies the problem of detecting the information source in a network in which the spread of information follows the popular Susceptible-Infected-Recovered (SIR) model. We assume all nodes in the network are initially in the susceptible state except the information source, which is in the infected state. Susceptible nodes may then be infected by infected nodes, and infected nodes may recover and will not be infected again after recovery. Given a snapshot of the network, from which we know all infected nodes but cannot distinguish susceptible nodes from recovered nodes, the problem is to find the information source based on the snapshot and the network topology. We develop a sample path based approach where the estimator of the information source is chosen to be the root node associated with the sample path that most likely leads to the observed snapshot. We prove that, for infinite trees, the estimator is a node that minimizes the maximum distance to the infected nodes. A reverse-infection algorithm is proposed to find such an estimator in general graphs. We prove that for @math -regular trees such that @math , where @math is the node degree and @math is the infection probability, the estimator is within a constant distance from the actual source with high probability, independent of the number of infected nodes and the time the snapshot is taken. Our simulation results show that for tree networks, the estimator produced by the reverse-infection algorithm is closer to the actual source than the one identified by the closeness centrality heuristic. We then further evaluate the performance of the reverse-infection algorithm on several real-world networks.
There have been extensive studies on the spread of epidemics in networks based on the SIR model (see @cite_13 @cite_15 @cite_1 @cite_12 and references therein). The work most related to this paper is @cite_5 @cite_17 @cite_7 , in which the information source detection problem was studied under the SI model. @cite_6 @cite_9 consider the problem of detecting multiple information sources under the SI model. This paper considers the SIR model, where infected nodes may recover, which can occur in many practical scenarios as we have explained. Because of node recovery, the information source detection problem under the SIR model differs significantly from that under the SI model. The differences are summarized below. The set of possible sources in the SI model @cite_5 @cite_17 @cite_7 is restricted to the set of infected nodes. In the SIR model, all nodes are possible information sources because we assume susceptible nodes and recovered nodes are indistinguishable, so a healthy node may be a recovered node and hence may be the information source. Therefore, the number of candidate sources is much larger in the SIR model than in the SI model.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_5", "@cite_15", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2165949377", "1966006953", "1969723574", "1964241922", "2150105124", "2038195874", "2030539428", "1914027636", "2111772797" ], "abstract": [ "We consider the problem of detecting the source of a rumor (information diffusion) in a network based on observations about which set of nodes possess the rumor. In a recent work [10] by the authors, this question was introduced and studied. The authors proposed rumor centrality as an estimator for detecting the source. They establish it to be the maximum likelihood estimator with respect to the popular Susceptible-Infected (SI) model with exponential spreading time for regular trees. They showed that as the size of the infected graph increases, for a line (2-regular tree) graph, the probability of source detection goes to 0, while for d-regular trees with d ≥ 3 the probability of detection, say α_d, remains bounded away from 0 and is less than 1/2. Their results, however, stop short of providing insights for heterogeneous settings such as irregular trees or the SI model with non-exponential spreading times. This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between multi-type continuous-time branching processes (an equivalent representation of a generalized Polya's urn, cf. [1]) and the effectiveness of rumor centrality. Through this, it is possible to quantify the detection probability precisely. As a consequence, we recover all the results of [10] as a special case and, more importantly, we obtain a variety of results establishing the universality of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution.", "Identifying the infection sources in a network, including the index cases that introduce a contagious disease into a population network, the servers that inject a computer virus into a computer network, or the individuals who started a rumor in a social network, plays a critical role in limiting the damage caused by the infection through timely quarantine of the sources. We consider the problem of estimating the infection sources and the infection regions (subsets of nodes infected by each source) in a network, based only on knowledge of which nodes are infected and their connections, and when the number of sources is unknown a priori. We derive estimators for the infection sources and their infection regions based on approximations of the infection sequences count. We prove that if there are at most two infection sources in a geometric tree, our estimator identifies the true source or sources with probability going to one as the number of infected nodes increases. When there are more than two infection sources, and when the maximum possible number of infection sources is known, we propose an algorithm with quadratic complexity to estimate the actual number and identities of the infection sources. Simulations on various kinds of networks, including tree networks, small-world networks and real-world power grid networks, and tests on two real data sets are provided to verify the performance of our estimators.", "We study some simple models of disease transmission on small-world networks, in which either the probability of infection by a disease or the probability of its transmission is varied, or both. The resulting models display epidemic behavior when the infection or transmission probability rises above the threshold for site or bond percolation on the network, and we give exact solutions for the position of this threshold in a variety of cases. We confirm our analytic results by numerical simulation.", "Estimating which nodes are the infection sources that introduce a virus or rumor into a network, or the locations of pollutant sources, plays a critical role in limiting the potential damage to the network through timely quarantine of the sources. In this paper, we derive estimators for the infection sources and their infection regions based on the infection network geometry. We show that in a geometric tree with at most two sources, our estimator identifies these sources with probability going to one as the number of infected nodes increases. We extend and generalize our methods to general graphs, where the number of infection sources is unknown and there may be multiple sources. Numerical results are presented to verify the performance of our proposed algorithms under different types of graph structures.", "We provide a systematic study of the problem of finding the source of a computer virus in a network. We model virus spreading in a network with a variant of the popular SIR model and then construct an estimator for the virus source. This estimator is based upon a novel combinatorial quantity which we term rumor centrality. We establish that this is an ML estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has non-trivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops in different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding virus sources in networks which are not tree-like.", "The Internet has a very complex connectivity recently modeled by the class of scale-free networks. This feature, which appears to be very efficient for a communications network, favors at the same time the spreading of computer viruses. We analyze real data from computer virus infections and find the average lifetime and persistence of viral strains on the Internet. We define a dynamical model for the spreading of infections on scale-free networks, finding the absence of an epidemic threshold and its associated critical behavior. This new epidemiological framework rationalizes data of computer viruses and could help in the understanding of other spreading phenomena on communication and social networks.", "The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible/infective/removed (SIR) models, can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks.", "Many network phenomena are well modeled as spreads of epidemics through a network. Prominent examples include the spread of worms and email viruses, and, more generally, faults. Many types of information dissemination can also be modeled as spreads of epidemics. In this paper we address the question of what makes an epidemic either weak or potent. More precisely, we identify topological properties of the graph that determine the persistence of epidemics. In particular, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, then the mean epidemic lifetime is of order log n, where n is the number of nodes. Conversely, if this ratio is smaller than a generalization of the isoperimetric constant of the graph, then the mean epidemic lifetime is of order e^{na}, for a positive constant a. We apply these results to several network topologies including the hypercube, which is a representative connectivity graph for a distributed hash table, the complete graph, which is an important connectivity graph for BGP, and the power law graph, of which the AS-level Internet graph is a prime example. We also study the star topology and the Erdos-Renyi graph as their epidemic spreading behaviors determine the spreading behavior of power law graphs.", "We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like." ] }
1206.5421
2952994695
This paper studies the problem of detecting the information source in a network in which the spread of information follows the popular Susceptible-Infected-Recovered (SIR) model. We assume all nodes in the network are initially in the susceptible state except the information source, which is in the infected state. Susceptible nodes may then be infected by infected nodes, and infected nodes may recover and will not be infected again after recovery. Given a snapshot of the network, from which we know all infected nodes but cannot distinguish susceptible nodes from recovered nodes, the problem is to find the information source based on the snapshot and the network topology. We develop a sample path based approach where the estimator of the information source is chosen to be the root node associated with the sample path that most likely leads to the observed snapshot. We prove that, for infinite trees, the estimator is a node that minimizes the maximum distance to the infected nodes. A reverse-infection algorithm is proposed to find such an estimator in general graphs. We prove that for @math -regular trees such that @math , where @math is the node degree and @math is the infection probability, the estimator is within a constant distance from the actual source with high probability, independent of the number of infected nodes and the time the snapshot is taken. Our simulation results show that for tree networks, the estimator produced by the reverse-infection algorithm is closer to the actual source than the one identified by the closeness centrality heuristic. We then further evaluate the performance of the reverse-infection algorithm on several real-world networks.
A key observation in @cite_5 @cite_17 @cite_7 is that on regular trees, all permitted permutations of infection sequences (an infection sequence specifies the order in which nodes are infected) are equally likely under the SI model. The number of possible permutations from a fixed root node therefore determines the likelihood of the root node being the source. However, under the SIR model, different infection sequences are associated with different probabilities, so counting the number of permutations is not sufficient.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_17" ], "mid": [ "2150105124", "2165949377", "2111772797" ], "abstract": [ "We provide a systematic study of the problem of finding the source of a computer virus in a network. We model virus spreading in a network with a variant of the popular SIR model and then construct an estimator for the virus source. This estimator is based upon a novel combinatorial quantity which we term rumor centrality. We establish that this is an ML estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has non-trivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops in different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding virus sources in networks which are not tree-like.", "We consider the problem of detecting the source of a rumor (information diffusion) in a network based on observations about which set of nodes possess the rumor. In a recent work [10] by the authors, this question was introduced and studied. The authors proposed rumor centrality as an estimator for detecting the source. They establish it to be the maximum likelihood estimator with respect to the popular Susceptible-Infected (SI) model with exponential spreading time for regular trees. They showed that as the size of the infected graph increases, for a line (2-regular tree) graph, the probability of source detection goes to 0, while for d-regular trees with d ≥ 3 the probability of detection, say α_d, remains bounded away from 0 and is less than 1/2. Their results, however, stop short of providing insights for heterogeneous settings such as irregular trees or the SI model with non-exponential spreading times. This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between multi-type continuous-time branching processes (an equivalent representation of a generalized Polya's urn, cf. [1]) and the effectiveness of rumor centrality. Through this, it is possible to quantify the detection probability precisely. As a consequence, we recover all the results of [10] as a special case and, more importantly, we obtain a variety of results establishing the universality of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution.", "We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like." ] }
1206.5421
2952994695
This paper studies the problem of detecting the information source in a network in which the spread of information follows the popular Susceptible-Infected-Recovered (SIR) model. We assume all nodes in the network are initially in the susceptible state except the information source, which is in the infected state. Susceptible nodes may then be infected by infected nodes, and infected nodes may recover and will not be infected again after recovery. Given a snapshot of the network, from which we know all infected nodes but cannot distinguish susceptible nodes from recovered nodes, the problem is to find the information source based on the snapshot and the network topology. We develop a sample path based approach where the estimator of the information source is chosen to be the root node associated with the sample path that most likely leads to the observed snapshot. We prove that, for infinite trees, the estimator is a node that minimizes the maximum distance to the infected nodes. A reverse-infection algorithm is proposed to find such an estimator in general graphs. We prove that for @math -regular trees such that @math , where @math is the node degree and @math is the infection probability, the estimator is within a constant distance from the actual source with high probability, independent of the number of infected nodes and the time the snapshot is taken. Our simulation results show that for tree networks, the estimator produced by the reverse-infection algorithm is closer to the actual source than the one identified by the closeness centrality heuristic. We then further evaluate the performance of the reverse-infection algorithm on several real-world networks.
@cite_5 @cite_17 @cite_7 proved that the node with the maximum closeness centrality is an MLE on regular trees. We define the infection closeness centrality to be the inverse of the sum of distances to infected nodes. Our simulations show that the sample path based estimator is closer to the actual source than the nodes with the maximum infection closeness centrality.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_17" ], "mid": [ "2150105124", "2165949377", "2111772797" ], "abstract": [ "We provide a systematic study of the problem of finding the source of a computer virus in a network. We model virus spreading in a network with a variant of the popular SIR model and then construct an estimator for the virus source. This estimator is based upon a novel combinatorial quantity which we term rumor centrality. We establish that this is an ML estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has non-trivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops in different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding virus sources in networks which are not tree-like.", "We consider the problem of detecting the source of a rumor (information diffusion) in a network based on observations about which set of nodes possess the rumor. In a recent work [10] by the authors, this question was introduced and studied. The authors proposed rumor centrality as an estimator for detecting the source. They establish it to be the maximum likelihood estimator with respect to the popular Susceptible-Infected (SI) model with exponential spreading time for regular trees. They showed that as the size of the infected graph increases, for a line (2-regular tree) graph, the probability of source detection goes to 0, while for d-regular trees with d ≥ 3 the probability of detection, say α_d, remains bounded away from 0 and is less than 1/2. Their results, however, stop short of providing insights for heterogeneous settings such as irregular trees or the SI model with non-exponential spreading times. This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between multi-type continuous-time branching processes (an equivalent representation of a generalized Polya's urn, cf. [1]) and the effectiveness of rumor centrality. Through this, it is possible to quantify the detection probability precisely. As a consequence, we recover all the results of [10] as a special case and, more importantly, we obtain a variety of results establishing the universality of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution.", "We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like." ] }
1206.5421
2952994695
This paper studies the problem of detecting the information source in a network in which the spread of information follows the popular Susceptible-Infected-Recovered (SIR) model. We assume all nodes in the network are initially in the susceptible state except the information source, which is in the infected state. Susceptible nodes may then be infected by infected nodes, and infected nodes may recover and will not be infected again after recovery. Given a snapshot of the network, from which we know all infected nodes but cannot distinguish susceptible nodes from recovered nodes, the problem is to find the information source based on the snapshot and the network topology. We develop a sample path based approach where the estimator of the information source is chosen to be the root node associated with the sample path that most likely leads to the observed snapshot. We prove that, for infinite trees, the estimator is a node that minimizes the maximum distance to the infected nodes. A reverse-infection algorithm is proposed to find such an estimator in general graphs. We prove that for @math -regular trees such that @math , where @math is the node degree and @math is the infection probability, the estimator is within a constant distance from the actual source with high probability, independent of the number of infected nodes and the time the snapshot is taken. Our simulation results show that for tree networks, the estimator produced by the reverse-infection algorithm is closer to the actual source than the one identified by the closeness centrality heuristic. We then further evaluate the performance of the reverse-infection algorithm on several real-world networks.
Other related works include: (1) detecting the first adopter of innovations based on a game-theoretic model @cite_11 , in which the authors derived the MLE but the computational complexity is exponential in the number of nodes, (2) network forensics under the SI model @cite_8 , where the goal is to distinguish an epidemic infection from a random infection, and (3) geospatial abduction problems (see @cite_18 @cite_14 and references therein).
{ "cite_N": [ "@cite_8", "@cite_18", "@cite_14", "@cite_11" ], "mid": [ "2093029384", "", "2529184583", "1992848158" ], "abstract": [ "Computer (and human) networks have long had to contend with spreading viruses. Effectively controlling or curbing an outbreak requires understanding the dynamics of the spread. A virus that spreads by taking advantage of physical links or user-acquaintance links on a social network can grow explosively if it spreads beyond a critical radius. On the other hand, random infections (that do not take advantage of network structure) have very different propagation characteristics. If too many machines (or humans) are infected, network structure becomes essentially irrelevant, and the different spreading modes appear identical. When can we distinguish between mechanics of infection? Further, how can this be done efficiently? This paper studies these two questions. We provide sufficient conditions for different graph topologies, for when it is possible to distinguish between a random model of infection and a spreading epidemic model, with probability of misclassification going to zero. We further provide efficient algorithms that are guaranteed to work in different regimes.", "", "Imagine yourself as a military officer in a conflict zone trying to identify locations of weapons caches supporting road-side bomb attacks on your country's troops. Or imagine yourself as a public health expert trying to identify the location of contaminated water that is causing diarrheal diseases in a local population. Geospatial abduction is a new technique introduced by the authors that allows such problems to be solved. Geospatial Abduction provides the mathematics underlying geospatial abduction and the algorithms to solve them in practice; it has wide applicability and can be used by practitioners and researchers in many different fields. Real-world applications of geospatial abduction to military problems are included. 
Compelling examples drawn from other domains as diverse as criminology, epidemiology and archaeology are covered as well. This book also includes access to a dedicated website on geospatial abduction hosted by University of Maryland. Geospatial Abduction targets practitioners working in general AI, game theory, linear programming, data mining, machine learning, and more. Those working in the fields of computer science, mathematics, geoinformation, geological and biological science will also find this book valuable.", "Network games provide a basic framework for studying the diffusion of new ideas or behaviors through a population. In these models, agents decide to adopt a new idea based on optimizing pay-off that depends on the adoption decisions of their neighbors in an underlying network. Assuming such a model, we consider the problem of inferring early adopters or first movers given a snap shot of the adoption state at a given time. We present some results on solving this problem in the low temperature regime. We conclude with a discussion on reducing the complexity of such inference problems for large networks." ] }
1206.5689
2951811557
In this paper we are proving the following fact. Let P be an arbitrary simple polygon, and let S be an arbitrary set of 15 points inside P. Then there exists a subset T of S that is not "visually discernible", that is, T is not equal to the intersection of S with the visibility region vis(v) of any point v in P. In other words, the VC-dimension d of visibility regions in a simple polygon cannot exceed 14. Since Valtr proved in 1998 that d ∈ [6,23] holds, no progress has been made on this bound. By epsilon-net theorems our reduction immediately implies a smaller upper bound on the number of guards needed to cover P.
Without using the @math -net theorem, Kirkpatrick @cite_10 obtained a @math upper bound on the number of boundary guards needed to cover the boundary of @math . This raises the question of whether the factor @math in the @math bound for @math -nets in other geometric range spaces can be lowered to @math as well, as was shown to be true by @cite_9 for special cases; see also King and Kirkpatrick @cite_1 .
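For orientation, the chain of implications behind this question can be sketched as follows (constants suppressed; this is the standard epsilon-net machinery, not a claim from the cited papers):

```latex
\underbrace{\dim_{VC}(\mathcal{R}) \le d}_{\text{combinatorial bound}}
\;\Longrightarrow\;
\text{$\epsilon$-nets of size } O\!\Big(\frac{d}{\epsilon}\log\frac{1}{\epsilon}\Big)
\;\Longrightarrow\;
\text{guard sets of size } O(d \cdot \mathrm{opt} \cdot \log \mathrm{opt}),
```

so lowering the log(1/epsilon) factor to a constant, as asked above, would remove the log(opt) factor from the resulting guard bound via the Brönnimann-Goodrich technique.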
{ "cite_N": [ "@cite_1", "@cite_9", "@cite_10" ], "mid": [ "2139332235", "2119142868", "15912472" ], "abstract": [ "We provide an O(log log opt)-approximation algorithm for the problem of guarding a simple polygon with guards on the perimeter. We first design a polynomial-time algorithm for building e-nets of size @math for the instances of Hitting Set associated with our guarding problem. We then apply the technique of Bronnimann and Goodrich to build an approximation algorithm from this e-net finder. Along with a simple polygon P, our algorithm takes as input a finite set of potential guard locations that must include the polygon’s vertices. If a finite set of potential guard locations is not specified, e.g., when guards may be placed anywhere on the perimeter, we use a known discretization technique at the cost of making the algorithm’s running time potentially linear in the ratio between the longest and shortest distances between vertices. Our algorithm is the first to improve upon O(log opt)-approximation algorithms that use generic net finders for set systems of finite VC-dimension.", "We show the existence of @math -nets of size @math for planar point sets and axis-parallel rectangular ranges. The same bound holds for points in the plane and “fat” triangular ranges and for point sets in @math and axis-parallel boxes; these are the first known nontrivial bounds for these range spaces. Our technique also yields improved bounds on the size of @math -nets in the more general context considered by Clarkson and Varadarajan. For example, we show the existence of @math -nets of size @math for the dual range space of “fat” regions and planar point sets (where the regions are the ground objects and the ranges are subsets stabbed by points). 
Plugging our bounds into the technique of Bronnimann and Goodrich or of Even, Rawitz, and Shahar, we obtain improved approximation factors (computable in expected polynomial time by a randomized algorithm) for the hitting set or the set cover problems associated with the corresponding range spaces.", "" ] }
1206.4952
2949917957
In order to efficiently study the characteristics of network domains and support development of network systems (e.g. algorithms, protocols that operate on networks), it is often necessary to sample a representative subgraph from a large complex network. Although recent subgraph sampling methods have been shown to work well, they focus on sampling from memory-resident graphs and assume that the sampling algorithm can access the entire graph in order to decide which nodes/edges to select. Many large-scale network datasets, however, are too large and/or dynamic to be processed using main memory (e.g., email, tweets, wall posts). In this work, we formulate the problem of sampling from large graph streams. We propose a streaming graph sampling algorithm that dynamically maintains a representative sample in a reservoir based setting. We evaluate the efficacy of our proposed methods empirically using several real-world data sets. Across all datasets, we found that our method produces samples that better preserve the original graph distributions.
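The reservoir-based setting mentioned in the abstract builds on classic reservoir sampling over a stream of unknown length. The sketch below is the generic edge-reservoir step, not the paper's (more graph-aware) algorithm; the function name is mine:

```python
import random

def edge_reservoir(stream, k, seed=0):
    """Maintain a uniform sample of k items from a stream of unknown length
    (Vitter's Algorithm R applied to a stream of edges)."""
    rng = random.Random(seed)
    reservoir = []
    for i, edge in enumerate(stream):
        if i < k:
            reservoir.append(edge)       # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # item i kept with probability k/(i+1)
            if j < k:
                reservoir[j] = edge
    return reservoir

edges = [(u, u + 1) for u in range(1000)]
sample = edge_reservoir(edges, k=10)
print(len(sample))  # -> 10
```

The invariant is that after seeing i+1 edges, every edge so far is in the reservoir with probability exactly k/(i+1), which is what makes a single pass over a graph stream sufficient.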
The problem of sampling graphs has been of interest in many different fields of research. The work in @cite_0 @cite_35 @cite_3 studies the statistical properties of samples from complex networks produced by traditional sampling algorithms such as node sampling, edge sampling and random-walk based sampling, and discusses the biases in estimates of graph metrics due to sampling. The work in @cite_30 also discusses the connections between specific biases and various measures of structural representativeness. In addition, there have been a number of sampling algorithms in other communities, such as peer-to-peer networks @cite_9 @cite_6 ; the Internet modeling research community @cite_13 and the WWW information retrieval community have focused on random-walk based sampling algorithms like PageRank @cite_10 @cite_32 . There is also some work that highlights different aspects of the sampling problem; examples include @cite_23 @cite_1 @cite_4 .
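Several of the biases surveyed above have a one-line explanation: sampling a uniform random edge endpoint selects a node with probability proportional to its degree, so degree distributions come out skewed toward hubs. A toy check (illustrative star graph, not from any of the cited papers):

```python
from collections import Counter

# Star graph: hub 0 joined to leaves 1..4.
edges = [(0, v) for v in range(1, 5)]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Uniform edge-endpoint sampling picks node n with probability deg(n) / 2|E|.
endpoints = [n for e in edges for n in e]
picks = Counter(endpoints)
bias = {n: picks[n] / len(endpoints) for n in picks}

print(bias[0])  # hub: 4/8 = 0.5
print(bias[1])  # each leaf: 1/8 = 0.125
```

Uniform node sampling would give every node probability 1/5; the gap between 0.5 and 0.2 for the hub is exactly the degree bias these papers measure and correct for.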
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_4", "@cite_10", "@cite_9", "@cite_1", "@cite_32", "@cite_3", "@cite_6", "@cite_0", "@cite_23", "@cite_13" ], "mid": [ "", "1965936846", "180417844", "1854214752", "2166596983", "", "2138621811", "2048596679", "2107082801", "2028897509", "", "2127931346" ], "abstract": [ "", "We study the statistical properties of the sampled networks by a random walker. We compare topological properties of the sampled networks such as degree distribution, degree-degree correlation, and clustering coefficient with those of the original networks. From the numerical results, we find that most of the topological properties of the sampled networks are almost the same as those of the original networks for @math . In contrast, we find that the degree distribution exponent of the sampled networks for @math somewhat deviates from that of the original networks when the ratio of the sampled network size to the original network size becomes smaller. We also apply the sampling method to various real networks such as collaboration of movie actors, Worldwide Web, and peer-to-peer networks. All topological properties of the sampled networks are essentially the same as those of the original real networks.", "Recently, there has been a great deal of research focusing on the development of sampling algorithms for networks with small-world and/or power-law structure. The peer-to-peer research community (e.g., [7]) has used sampling to quickly explore and obtain a good representative sample of the network topology, as these networks are hard to explore completely and have significant amounts of churn in their topology. For collecting data from social networks, researchers often use snowball sampling (e.g., [2]) due to the lack of access to the complete graph. have developed Forest Fire Sampling, which uses a hybrid combination of snowball sampling and random-walk sampling to produce samples that match the temporal evolution of the underlying social network [5]. 
have developed a Metropolis algorithm which samples in a manner designed to match desired properties in the original network [3]. Although there has been a great deal of research focusing on the development of sampling algorithms, much of this work is based on empirical study and evaluation (i.e., measuring the similarity between sampled and original network properties). There has been some work (e.g., [4, 8, 6]) that has studied the statistical properties of samples of complex networks produced by traditional sampling algorithms such as node sampling, edge sampling and random walks. However, there has been relatively little attention paid to the development of a theoretical foundation for sampling from networks—including a formal framework for sampling, an understanding of various network characteristics and their dependencies, and an analysis of their impact on the accuracy of sampling algorithms. In this paper, we reconsider the foundations of network sampling and attempt to formalize the goals, and process of, sampling, in order to frame future development and analysis of sampling algorithms.", "The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.", "This paper addresses the difficult problem of selecting representative samples of peer properties (e.g. degree, link bandwidth, number of files shared) in unstructured peer-to-peer systems. 
Due to the large size and dynamic nature of these systems, measuring the quantities of interest on every peer is often prohibitively expensive, while sampling provides a natural means for estimating system-wide behavior efficiently. However, commonly-used sampling techniques for measuring peer-to-peer systems tend to introduce considerable bias for two reasons. First, the dynamic nature of peers can bias results towards short-lived peers, much as naively sampling flows in a router can lead to bias towards short-lived flows. Second, the heterogeneous nature of the overlay topology can lead to bias towards high-degree peers. We present a detailed examination of the ways that the behavior of peer-to-peer systems can introduce bias and suggest the Metropolized Random Walk with Backtracking (MRWB) as a viable and promising technique for collecting nearly unbiased samples. We conduct an extensive simulation study to demonstrate that the proposed technique works well for a wide variety of common peer-to-peer network conditions. Using the Gnutella network, we empirically show that our implementation of the MRWB technique yields more accurate samples than relying on commonly-used sampling techniques. Furthermore, we provide insights into the causes of the observed differences. The tool we have developed, ion-sampler, selects peer addresses uniformly at random using the MRWB technique. These addresses may then be used as input to another measurement tool to collect data on a particular property.", "", "The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. 
The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authoritative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.", "Most studies of networks have only looked at small subsets of the true network. Here, we discuss the sampling properties of a network’s degree distribution under the most parsimonious sampling scheme. Only if the degree distributions of the network and randomly sampled subnets belong to the same family of probability distributions is it possible to extrapolate from subnet data to properties of the global network. We show that this condition is indeed satisfied for some important classes of networks, notably classical random graphs and exponential random graphs. For scale-free degree distributions, however, this is not the case. Thus, inferences about the scale-free nature of a network may have to be treated with some caution. The work presented here has important implications for the analysis of molecular networks as well as for graph theory and the theory of networks in general. Keywords: complex networks, protein interaction networks, random graphs, sampling theory.", "We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. We have identified two cases where the use of random walks for searching achieves better results than flooding: a) when the overlay topology is clustered, and b) when a client re-issues the same query while its horizon does not change much. 
For construction, we argue that an expander can be maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk can achieve statistical properties similar to independent sampling (if the second eigenvalue of the transition matrix is bounded away from 1, which translates to good expansion of the network; such connectivity is desired, and believed to hold, in every reasonable network and network model). This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.", "We study the statistical properties of the sampled scale-free networks, deeply related to the proper identification of various real-world networks. We exploit three methods of sampling and investigate the topological properties such as degree and betweenness centrality distribution, average path length, assortativity, and clustering coefficient of sampled networks compared with those of original networks. It is found that the quantities related to those properties in sampled networks appear to be estimated quite differently for each sampling method. We explain why such a biased estimation of quantities would emerge from the sampling procedure and give appropriate criteria for each sampling method to prevent the quantities from being overestimated or underestimated.", "", "In this paper, we develop methods to \"sample\" a small realistic graph from a large Internet topology. Despite recent activity, modeling and generation of realistic graphs resembling the Internet is still not a resolved issue. All previous work has attempted to grow such graphs from scratch. We address the complementary problem of shrinking an existing topology. In more detail, this work has three parts. 
First, we propose a number of reduction methods that can be categorized into three classes: (a) deletion methods, (b) contraction methods, and (c) exploration methods. We prove that some of them maintain key properties of the initial graph. We implement our methods and show that we can effectively reduce the nodes of an Internet graph by as much as 70% while maintaining its important properties. Second, we show that our reduced graphs compare favorably against construction-based generators. Finally, we successfully validate the effectiveness of our best methods in an actual performance evaluation study of multicast routing. Apart from its practical applications, the problem of graph sampling is of independent interest." ] }
1206.4952
2949917957
In order to efficiently study the characteristics of network domains and support development of network systems (e.g. algorithms, protocols that operate on networks), it is often necessary to sample a representative subgraph from a large complex network. Although recent subgraph sampling methods have been shown to work well, they focus on sampling from memory-resident graphs and assume that the sampling algorithm can access the entire graph in order to decide which nodes/edges to select. Many large-scale network datasets, however, are too large and/or dynamic to be processed using main memory (e.g., email, tweets, wall posts). In this work, we formulate the problem of sampling from large graph streams. We propose a streaming graph sampling algorithm that dynamically maintains a representative sample in a reservoir based setting. We evaluate the efficacy of our proposed methods empirically using several real-world data sets. Across all datasets, we found that our method produces samples that better preserve the original graph distributions.
In social networks research, the recent work in @cite_11 uses random walks to estimate node properties in @math (e.g., degree distributions in online social networks). These different sampling algorithms focused on estimating either the local or global properties of the original graph, but not on sampling a representative subgraph of the original graph, which is our goal. The work in @cite_8 studied the problem of sampling a subgraph representative of the graph community structure by sampling the nodes that maximize the expansion.
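The random-walk estimators mentioned above typically correct the walk's degree bias: a stationary simple random walk visits nodes with probability proportional to degree, so property averages are re-weighted by 1/degree. A minimal sketch on a toy graph (the function name and graph are illustrative, not from @cite_11):

```python
import random

def rw_mean_degree(adj, steps, seed=0):
    """Estimate the mean degree from a simple random walk, correcting the
    degree bias with inverse-degree importance weights."""
    rng = random.Random(seed)
    u = next(iter(adj))
    num = den = 0.0
    for _ in range(steps):
        u = rng.choice(adj[u])     # one step of the walk
        w = 1.0 / len(adj[u])      # inverse-degree weight for node u
        num += w * len(adj[u])     # weighted degree contributes 1 per step
        den += w
    return num / den               # = harmonic mean of visited degrees

# Star graph: true mean degree = (4 + 1 + 1 + 1 + 1) / 5 = 1.6
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(round(rw_mean_degree(adj, steps=20000), 1))  # -> 1.6
```

An unweighted average over the same walk would return roughly 2.5 on this graph, because the walk spends half its steps on the degree-4 hub; the 1/degree weights undo exactly that over-representation.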
{ "cite_N": [ "@cite_8", "@cite_11" ], "mid": [ "2171935404", "2103799649" ], "abstract": [ "We propose a novel method, based on concepts from expander graphs, to sample communities in networks. We show that our sampling method, unlike previous techniques, produces subgraphs representative of community structure in the original network. These generated subgraphs may be viewed as stratified samples in that they consist of members from most or all communities in the network. Using samples produced by our method, we show that the problem of community detection may be recast into a case of statistical relational learning. We empirically evaluate our approach against several real-world datasets and demonstrate that our sampling method can effectively be used to infer and approximate community affiliation in the larger network.", "Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph." ] }
1206.4952
2949917957
In order to efficiently study the characteristics of network domains and support development of network systems (e.g. algorithms, protocols that operate on networks), it is often necessary to sample a representative subgraph from a large complex network. Although recent subgraph sampling methods have been shown to work well, they focus on sampling from memory-resident graphs and assume that the sampling algorithm can access the entire graph in order to decide which nodes/edges to select. Many large-scale network datasets, however, are too large and/or dynamic to be processed using main memory (e.g., email, tweets, wall posts). In this work, we formulate the problem of sampling from large graph streams. We propose a streaming graph sampling algorithm that dynamically maintains a representative sample in a reservoir based setting. We evaluate the efficacy of our proposed methods empirically using several real-world data sets. Across all datasets, we found that our method produces samples that better preserve the original graph distributions.
Due to the popularity of online social networks such as Facebook and Twitter, there has been a lot of work @cite_34 @cite_2 @cite_25 @cite_19 @cite_33 @cite_36 studying the growth and evolution of these networks. While most of this work has focused on static graphs, recent works @cite_29 @cite_24 have started focusing on interactions in social networks. There is also work on decentralized search and crawling @cite_38 @cite_37 @cite_41 ; however, in our work we focus on sampling from graphs that are naturally evolving as a stream of edges. In the literature, the most closely related efforts are those of Leskovec et al. in @cite_12 and Hübler et al. in @cite_21 . But, as we mentioned before, our work is different as we focus on the novel problem of sampling from graphs that are naturally evolving as a stream of edges (graph streams).
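The Metropolis-Hastings random walk (MHRW) that appears in the crawling work cited below adjusts the walk's transition probabilities so its stationary distribution is uniform over nodes rather than degree-biased. A compact sketch on an illustrative toy graph (function and variable names are mine):

```python
import random

def mhrw(adj, steps, seed=0):
    """Metropolis-Hastings random walk: propose a uniform neighbor v of u and
    accept with min(1, deg(u)/deg(v)), so the stationary distribution of the
    walk is uniform over the nodes."""
    rng = random.Random(seed)
    u = next(iter(adj))
    visits = {n: 0 for n in adj}
    for _ in range(steps):
        v = rng.choice(adj[u])
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v                  # accept the move
        visits[u] += 1             # a rejection counts as a self-loop at u
    return visits

# Star graph: a plain random walk spends half its steps on the hub,
# whereas MHRW should visit all five nodes roughly uniformly.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
counts = mhrw(adj, steps=50000)
print(max(counts.values()) / min(counts.values()))
```

The acceptance rule is the standard Metropolis filter for a uniform target with a uniform-over-neighbors proposal; on the star it always accepts hub-to-leaf moves and accepts leaf-to-hub moves with probability 1/4, which balances the hub's four-fold proposal advantage.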
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_33", "@cite_36", "@cite_41", "@cite_29", "@cite_21", "@cite_24", "@cite_19", "@cite_2", "@cite_34", "@cite_25", "@cite_12" ], "mid": [ "2952033050", "2137135938", "2121761994", "2124793767", "2106315062", "2047443612", "2157747946", "2153204928", "2122710250", "108820191", "2115022330", "2151078464", "2146008005" ], "abstract": [ "As the World Wide Web is growing rapidly, it is getting increasingly challenging to gather representative information about it. Instead of crawling the web exhaustively one has to resort to other techniques like sampling to determine the properties of the web. A uniform random sample of the web would be useful to determine the percentage of web pages in a specific language, on a topic or in a top level domain. Unfortunately, no approach has been shown to sample the web pages in an unbiased way. Three promising web sampling algorithms are based on random walks. They each have been evaluated individually, but making a comparison on different data sets is not possible. We directly compare these algorithms in this paper. We performed three random walks on the web under the same conditions and analyzed their outcomes in detail. We discuss the strengths and the weaknesses of each algorithm and propose improvements based on experimental results.", "With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). 
In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.", "Social networking services are a fast-growing business in the Internet. However, it is unknown if online relationships and their growth patterns are the same as in real-life social networks. In this paper, we compare the structures of three online social networking services: Cyworld, MySpace, and orkut, each with more than 10 million users, respectively. We have access to complete data of Cyworld's ilchon (friend) relationships and analyze its degree distribution, clustering property, degree correlation, and evolution over time. We also use Cyworld data to evaluate the validity of the snowball sampling method, which we use to crawl and obtain partial network topologies of MySpace and orkut. Cyworld, the oldest of the three, demonstrates a changing scaling behavior over time in degree distribution. The latest Cyworld data's degree distribution exhibits a multi-scaling behavior, while those of MySpace and orkut have simple scaling behaviors with different exponents. Very interestingly, each of the two exponents corresponds to the different segments in Cyworld's degree distribution. 
Certain online social networking services encourage online activities that cannot be easily copied in real life; we show that they deviate from close-knit online social networks which show a similar degree correlation pattern to real-life social networks.", "Online social networking services are among the most popular Internet services according to Alexa.com and have become a key feature in many Internet services. Users interact through various features of online social networking services: making friend relationships, sharing their photos, and writing comments. These friend relationships are expected to become a key to many other features in web services, such as recommendation engines, security measures, online search, and personalization issues. However, we have very limited knowledge on how much interaction actually takes place over friend relationships declared online. A friend relationship only marks the beginning of online interaction. Does the interaction between users follow the declaration of friend relationship? Does a user interact evenly or lopsidedly with friends? We venture to answer these questions in this work. We construct a network from comments written in guestbooks. A node represents a user and a directed edge a comment from a user to another. We call this network an activity network. Previous work on activity networks includes phone-call networks [34, 35] and MSN messenger networks [27]. To our best knowledge, this is the first attempt to compare the explicit friend relationship network and implicit activity network. We have analyzed structural characteristics of the activity network and compared them with the friends network. Though the activity network is weighted and directed, its structure is similar to the friend relationship network. We report that the in-degree and out-degree distributions are close to each other and the social interaction through the guestbook is highly reciprocated. 
When we consider only those links in the activity network that are reciprocated, the degree correlation distribution exhibits much more pronounced assortativity than the friends network and places it close to known social networks. The k-core analysis provides yet more corroborating evidence that the friends network deviates from the known social network and has an unusually large number of highly connected cores. We have delved into the weighted and directed nature of the activity network, and investigated the reciprocity, disparity, and network motifs. We also have observed that peer pressure to stay active online stops building up beyond a certain number of friends. The activity network has shown topological characteristics similar to the friends network, but thanks to its directed and weighted nature, it has allowed us more in-depth analysis of user interaction.
Furthermore, it performs well when applied to a broad range of Internet topologies and to two large BFS samples of Facebook and Orkut networks.", "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone. This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). 
The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.", "While data mining in chemoinformatics studied graph data with dozens of nodes, systems biology and the Internet are now generating graph data with thousands and millions of nodes. Hence data mining faces the algorithmic challenge of coping with this significant increase in graph size: Classic algorithms for data analysis are often too expensive and too slow on large graphs. While one strategy to overcome this problem is to design novel efficient algorithms, the other is to 'reduce' the size of the large graph by sampling. This is the scope of this paper: We will present novel Metropolis algorithms for sampling a 'representative' small subgraph from the original large graph, with 'representative' describing the requirement that the sample shall preserve crucial graph properties of the original graph. In our experiments, we improve over the pioneering work of Leskovec and Faloutsos (KDD 2006), by producing representative subgraph samples that are both smaller and of higher quality than those produced by other methods from the literature.", "Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. 
In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged.", "In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars. We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.", "We present a study of anonymized data capturing high-level communication activities within the Microsoft Instant Messenger network. We analyze properties of the communication network defined by user interactions and demographics, as reported and as derived from one month of data collected in June 2006.
The compressed dataset occupies 4.5 terabytes, composed from 1 billion conversations per day (150 gigabytes) over one month of logging. The dataset contains more than 30 billion conversations among 240 million people. The network is the largest social network analyzed to date. We focus on analyses of high-level characteristics and patterns that emerge from the collective dynamics of large numbers of people, rather than the actions and characteristics of individuals. Analyses center on numbers and durations of conversations; the content of communications was neither available nor pursued. From the data we construct a communication graph with 190 million nodes and 1.3 billion undirected edges. We find that the graph is well connected, with an effective diameter of 7.8, and is highly clustered, with a clustering coefficient decaying slowly with exponent −0.4. We also find strong influences of homophily in activities, where people with similar characteristics overall tend to communicate more with one another, with the exception of gender, where we find cross-gender conversations are both more frequent and of longer duration than conversations with the same gender. ∗This work was performed while the first author was an intern at Microsoft Research.", "Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks.
We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems.", "We present a detailed study of network evolution by analyzing four large online social networks with full temporal information about node and edge arrivals. For the first time at such a large scale, we study individual node arrival and edge creation processes that collectively lead to macroscopic properties of networks. Using a methodology based on the maximum-likelihood principle, we investigate a wide variety of network formation strategies, and show that edge locality plays a critical role in evolution of networks. Our findings supplement earlier network models based on the inherently non-local preferential attachment. Based on our observations, we develop a complete model of network evolution, where nodes arrive at a prespecified rate and select their lifetimes. Each node then independently initiates edges according to a \"gap\" process, selecting a destination for each edge according to a simple triangle-closing model free of any parameters. 
We show analytically that the combination of the gap distribution with the node lifetime leads to a power law out-degree distribution that accurately reflects the true network in all four cases. Finally, we give model parameter settings that allow automatic evolution and generation of realistic synthetic networks of arbitrary scale.", "Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success? We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands of nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample. In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15% of the original graph." ] }
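The two sampling strategies compared in the last abstract above (simple uniform random node selection versus random-walk-based sampling) can be sketched as follows. This is a minimal illustration of the two ideas, not the cited papers' exact procedures; the function names and the restart parameter are our own choices.

```python
import random

def random_node_sample(adj, k, rng):
    """Keep k uniformly chosen nodes and the subgraph they induce."""
    keep = set(rng.sample(sorted(adj), k))
    return {u: [v for v in adj[u] if v in keep] for u in keep}

def random_walk_sample(adj, k, rng, restart=0.15):
    """Walk with restarts from a fixed seed until k distinct nodes are visited."""
    seed = min(adj)
    cur, seen = seed, {seed}
    while len(seen) < k:
        if rng.random() < restart or not adj[cur]:
            cur = seed                     # restart: jump back to the seed node
        else:
            cur = rng.choice(adj[cur])     # otherwise step to a random neighbor
        seen.add(cur)
    return {u: [v for v in adj[u] if v in seen] for u in seen}

rng = random.Random(0)
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}  # toy 10-node cycle
uniform = random_node_sample(ring, 5, rng)
walk = random_walk_sample(ring, 5, rng)
```

The walk-based sample tends to stay in one well-connected region around the seed, while uniform node selection scatters across the graph, which is exactly the trade-off the sampling studies above evaluate.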
1206.4952
2949917957
In order to efficiently study the characteristics of network domains and support development of network systems (e.g. algorithms, protocols that operate on networks), it is often necessary to sample a representative subgraph from a large complex network. Although recent subgraph sampling methods have been shown to work well, they focus on sampling from memory-resident graphs and assume that the sampling algorithm can access the entire graph in order to decide which nodes/edges to select. Many large-scale network datasets, however, are too large and/or dynamic to be processed using main memory (e.g., email, tweets, wall posts). In this work, we formulate the problem of sampling from large graph streams. We propose a streaming graph sampling algorithm that dynamically maintains a representative sample in a reservoir-based setting. We evaluate the efficacy of our proposed methods empirically using several real-world data sets. Across all datasets, we found that our method produces samples that better preserve the original graph distributions.
Although significant work has been proposed to solve the problem of graph sampling, to our knowledge, there is no prior research on sampling from graph streams to obtain a representative subgraph. However, several research works @cite_18 @cite_5 @cite_27 studied graph streaming algorithms for counting triangles, degree sequences, and estimating page ranks. The main contributions of these works are to use a small amount of memory (sublinear space) and few passes to perform computations on large graph streams. In database research, some works studied data stream management systems. For example, the work in @cite_20 studied the problem of computing frequency counts in data streams, and the work in @cite_26 studied the problem of sampling from data streams of database queries.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_27", "@cite_5", "@cite_20" ], "mid": [ "1967172267", "2053154469", "2078764670", "2002576896", "2069980026" ], "abstract": [ "This study focuses on computations on large graphs (e.g., the web-graph) where the edges of the graph are presented as a stream. The objective in the streaming model is to use a small amount of memory (preferably sub-linear in the number of nodes n) and a few passes. In the streaming model, we show how to perform several graph computations including estimating the probability distribution after a random walk of length l, mixing time, and the conductance. We estimate the mixing time M of a random walk in O(nα + Mα√n + √(Mnα)) space and O(√(M/α)) passes. Furthermore, the relation between mixing time and conductance gives us an estimate for the conductance of the graph. By applying our algorithm for computing probability distribution on the web-graph, we can estimate the PageRank p of any node up to an additive error of √(εp) in O(√(M/α)) passes and O(min(nα + (1/ε)√(M/α) + (1/ε)Mα, αn√(Mα) + (1/ε)√(M/α))) space, for any α ∈ (0, 1]. In particular, for ε = M/n, by setting α = M^(−1/2), we can compute the approximate PageRank values in O(nM^(−1/4)) space and O(M^(3/4)) passes. In comparison, a standard implementation of the PageRank algorithm will take O(n) space and O(M) passes.", "The method of reservoir based sampling is often used to pick an unbiased sample from a data stream. A large portion of the unbiased sample may become less relevant over time because of evolution. An analytical or mining task (e.g. query estimation) which is specific to only the sample points from a recent time-horizon may provide a very inaccurate result. This is because the size of the relevant sample reduces with the horizon itself. On the other hand, this is precisely the most important case for data stream algorithms, since recent history is frequently analyzed.
In such cases, we show that an effective solution is to bias the sample with the use of temporal bias functions. The maintenance of such a sample is non-trivial, since it needs to be dynamically maintained, without knowing the total number of points in advance. We prove some interesting theoretical properties of a large class of memory-less bias functions, which allow for an efficient implementation of the sampling algorithm. We also show that the inclusion of bias in the sampling process introduces a maximum requirement on the reservoir size. This is a nice property since it shows that it may often be possible to maintain the maximum relevant sample with limited storage requirements. We not only illustrate the advantages of the method for the problem of query estimation, but also show that the approach has applicability to broader data mining problems such as evolution analysis and classification.", "In most algorithmic applications which compare two distributions, information theoretic distances are more natural than standard l_p norms. In this paper we design streaming and sublinear time property testing algorithms for entropy and various information theoretic distances. Earlier work posed the problem of property testing with respect to the Jensen-Shannon distance. We present optimal algorithms for estimating bounded, symmetric f-divergences (including the Jensen-Shannon divergence and the Hellinger distance) between distributions in various property testing frameworks. Along the way, we close a (log n)/H gap between the upper and lower bounds for estimating entropy H, yielding an optimal algorithm over all values of the entropy. In a data stream setting (sublinear space), we give the first algorithm for estimating the entropy of a distribution. Our algorithm runs in polylogarithmic space and yields an asymptotic constant factor approximation scheme.
An integral part of the algorithm is an interesting use of an F_0 (the number of distinct elements in a set) estimation algorithm; we also provide other results along the space/time/approximation tradeoff curve. Our results have interesting structural implications that connect sublinear time and space constrained algorithms. The mediating model is the random order streaming model, which assumes the input is a random permutation of a multiset and was first considered by Munro and Paterson in 1980. We show that any property testing algorithm in the combined oracle model for calculating permutation invariant functions can be simulated in the random order model in a single pass. This addresses a question raised in prior work regarding the relationship between property testing and stream algorithms. Further, we give a polylog-space PTAS for estimating the entropy of a one-pass random order stream. This bound cannot be achieved in the combined oracle (generalized property testing) model.", "We introduce reductions in the streaming model as a tool in the design of streaming algorithms. We develop the concept of list-efficient streaming algorithms that are essential to the design of efficient streaming algorithms through reductions. Our results include a suite of list-efficient streaming algorithms for basic statistical primitives. Using the reduction paradigm along with these tools, we design streaming algorithms for approximately counting the number of triangles in a graph presented as a stream. A specific highlight of our work is the first algorithm for the number of distinct elements in a data stream that achieves arbitrary approximation factors. (Independently, Trevisan [Tre01] has solved this problem via a different approach; our algorithm has the advantage of being list-efficient.)", "Research in data stream algorithms has blossomed since the late 90s.
The talk will trace the history of the Approximate Frequency Counts paper, how it was conceptualized and how it influenced data stream research. The talk will also touch upon a recent development: analysis of personal data streams for improving our quality of lives." ] }
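The reservoir-based streaming setting referenced in the related-work paragraph of this record can be illustrated with the classic Algorithm R for uniform reservoir sampling over an edge stream. This is a generic sketch of the underlying primitive (uniform reservoir sampling), not the paper's graph-aware algorithm; the function name is ours.

```python
import random

def reservoir_sample(stream, k, rng):
    """Algorithm R: maintain a uniform random sample of k items
    from a stream whose length is not known in advance."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)         # fill the reservoir first
        else:
            j = rng.randrange(i + 1)       # item i survives with prob. k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

rng = random.Random(42)
edges = [(u, u + 1) for u in range(1000)]  # a long stream of edges
sample = reservoir_sample(edges, 10, rng)
```

A single pass and O(k) memory is what makes this primitive attractive for graph streams that do not fit in main memory; graph-aware variants replace the uniform replacement rule with one that preserves topological properties.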
1206.4898
2952407692
It has been recently shown that any graph of genus g>0 can be stochastically embedded into a distribution over planar graphs, with distortion O(log(g+1)) [Sidiropoulos, FOCS 2010]. This embedding can be computed in polynomial time, provided that a drawing of the input graph into a genus-g surface is given. We show how to compute the above embedding without having such a drawing. This implies a general reduction for solving problems on graphs of small genus, even when the drawing into a small genus surface is unknown. To the best of our knowledge, this is the first result of this type.
Inspired by Bartal's stochastic embedding of general metrics into trees @cite_1 , Indyk and Sidiropoulos @cite_9 showed that every metric on a graph of genus @math can be stochastically embedded into a planar graph with distortion @math (see Section for a formal definition of stochastic embeddings). The above bound was later improved by Borradaile, Lee, and Sidiropoulos @cite_12 , who obtained an embedding with distortion @math . Subsequently, Sidiropoulos @cite_14 gave an embedding with distortion @math , matching the @math lower bound from @cite_12 . The embeddings from @cite_9 and @cite_14 can be computed in polynomial time, provided that the drawing of the graph into a small genus surface is given. Computing the embedding from @cite_12 requires solving an NP-hard problem, even when the drawing is given.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_1", "@cite_12" ], "mid": [ "2058364478", "1997713965", "2114493937", "2951036212" ], "abstract": [ "A probabilistic C-embedding of a (guest) metric M into a collection of (host) metrics M'1, ..., M'k is a randomized mapping F of M into one of the M'1, ..., M'k such that, for any two points p,q in the guest metric: The distance between F(p) and F(q) in any M'i is not smaller than the original distance between p and q. The expected distance between F(p) and F(q) in (random) M'i is not greater than some constant C times the original distance, for C ≥ 1. The constant C is called the distortion of the embedding. Low-distortion probabilistic embeddings enable reducing algorithmic problems over \"hard\" guest metrics into \"easy\" host metrics. We show that every metric induced by a graph of bounded genus can be probabilistically embedded into planar graphs, with constant distortion. The embedding can be computed efficiently, given a drawing of the graph on a genus-g surface.", "It has been shown by Indyk and Sidiropoulos [IS07] that any graph of genus g>0 can be stochastically embedded into a distribution over planar graphs with distortion 2^O(g). This bound was later improved to O(g^2) by Borradaile, Lee and Sidiropoulos [BLS09]. We give an embedding with distortion O(log g), which is asymptotically optimal. Apart from the improved distortion, another advantage of our embedding is that it can be computed in polynomial time. In contrast, the algorithm of [BLS09] requires solving an NP-hard problem.
Our result implies in particular a reduction for a large class of geometric optimization problems from instances on genus-g graphs, to corresponding ones on planar graphs, with an O(log g) loss factor in the approximation guarantee.", "This paper provides a novel technique for the analysis of randomized algorithms for optimization problems on metric spaces, by relating the randomized performance ratio for any metric space to the randomized performance ratio for a set of \"simple\" metric spaces. We define a notion of a set of metric spaces that probabilistically-approximates another metric space. We prove that any metric space can be probabilistically-approximated by hierarchically well-separated trees (HST) with a polylogarithmic distortion. These metric spaces are \"simple\" as being: (1) tree metrics; (2) natural for applying a divide-and-conquer algorithmic approach. The technique presented is of particular interest in the context of on-line computation. A large number of on-line algorithmic problems, including metrical task systems, server problems, distributed paging, and dynamic storage rearrangement are defined in terms of some metric space. Typically for these problems, there are linear lower bounds on the competitive ratio of deterministic algorithms. Although randomization against an oblivious adversary has the potential of overcoming these high ratios, very little progress has been made in the analysis. We demonstrate the use of our technique by obtaining substantially improved results for two different on-line problems.", "Indyk and Sidiropoulos (2007) proved that any orientable graph of genus @math can be probabilistically embedded into a graph of genus @math with constant distortion. Viewing a graph of genus @math as embedded on the surface of a sphere with @math handles attached, Indyk and Sidiropoulos' method gives an embedding into a distribution over planar graphs with distortion @math , by iteratively removing the handles.
By removing all @math handles at once, we present a probabilistic embedding with distortion @math for both orientable and non-orientable graphs. Our result is obtained by showing that the minimum-cut graph of Erickson and Har-Peled (2004) has low dilation, and then randomly cutting this graph out of the surface using the Peeling Lemma of Lee and Sidiropoulos (2009)." ] }
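The probabilistic C-embedding that the first cited abstract of this record defines in words admits the following standard formal statement (the notation is ours): a randomized map F from a guest metric (M, d) into host metrics (M'_1, d'_1), ..., (M'_k, d'_k) is a stochastic embedding with distortion C if, for all p, q in M,

```latex
% non-contracting for every possible outcome i, and
% expansion at most C in expectation over the random choice of i:
\begin{align*}
  d'_i\bigl(F(p),F(q)\bigr) &\ge d(p,q)
    && \text{for every outcome } i,\\
  \mathbb{E}_i\bigl[\,d'_i\bigl(F(p),F(q)\bigr)\,\bigr] &\le C\, d(p,q).
\end{align*}
```

The smallest such C ≥ 1 is the distortion; the results surveyed in the related-work paragraph give C = O(log g) for genus-g graphs.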
1206.3953
2018322127
Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in Krzakala et al (2012 Phys. Rev. X 2 021005) a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performances regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically.
The state-of-the-art method for signal reconstruction in CS is based on the minimization of the @math norm of the signal under the linear constraint; for an overview of this technique see @cite_45 @cite_9 . A number of works also adopted a probabilistic or Bayesian approach @cite_6 @cite_37 @cite_40 . Generically, one disadvantage of the probabilistic approach is that no exact algorithm is known for evaluation of the corresponding expectations, whereas @math minimization can be done exactly using linear programming. In our approach, this problem is resolved with the use of belief propagation, which turns out to be an extremely efficient heuristic. Another issue of the Bayesian approach is the choice of the signal model. Whereas the performance of the @math reconstruction is independent of the signal distribution, this is not the case for the Bayesian approach in general. We show that, for noiseless CS, optimal exact reconstruction is in fact possible even if the signal model does not match the signal distribution.
{ "cite_N": [ "@cite_37", "@cite_9", "@cite_6", "@cite_40", "@cite_45" ], "mid": [ "2147361252", "2119667497", "", "2135859872", "2170929819" ], "abstract": [ "We relate compressed sensing (CS) with Bayesian experimental design and provide a novel efficient approximate method for the latter, based on expectation propagation. In a large comparative study about linearly measuring natural images, we show that the simple standard heuristic of measuring wavelet coefficients top-down systematically outperforms CS methods using random measurements; the sequential projection optimisation approach of (Ji & Carin, 2007) performs even worse. We also show that our own approximate Bayesian method is able to learn measurement filters on full images efficiently which outperform the wavelet heuristic. To our knowledge, ours is the first successful attempt at \"learning compressed sensing\" for images of realistic size. In contrast to common CS methods, our framework is not restricted to sparse signals, but can readily be applied to other notions of signal complexity or noise models. We give concrete ideas how our method can be scaled up to large signal representations.", "Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.", "", "Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform asymptotically optimal Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log^2(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.", "Let A be a d × n matrix and T = T^(n-1) be the standard simplex in R^n. Suppose that d and n are both large and comparable: d ≈ δn, δ ∈ (0, 1). We count the faces of the projected simplex AT when the projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of R^n. We derive ρ_N(δ) > 0 with the property that, for any ρ 0 at which phase transition occurs in k/d. We compute and display ρ_VS and compare with ρ_N. Corollaries are as follows. (1) The convex hull of n Gaussian samples in R^d, with n large and proportional to d, has the same k-skeleton as the (n - 1)-simplex, for k < ρ_N(d/n)d(1 + o_P(1)). (2) There is a “phase transition” in the ability of linear programming to find the sparsest nonnegative solution to systems of underdetermined linear equations. For most systems having a solution with fewer than ρ_VS(d/n)d(1 + o(1)) nonzeros, linear programming will find that solution.
Keywords: neighborly polytopes; convex hull of Gaussian sample; underdetermined systems of linear equations; uniformly distributed random projections; phase transitions" ] }
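The reconstruction-by-linear-programming route mentioned in this record's related-work paragraph (exact ℓ1 minimization via an LP) can be sketched with the standard variable split x = u − v, u, v ≥ 0, which turns min ‖x‖₁ subject to Ax = y into a linear program. This is an illustrative sketch assuming SciPy's `linprog`, not the papers' code; with m = 5 measurements of a 2-sparse signal, exact recovery is not guaranteed, but the output is always feasible and ℓ1-minimal.

```python
import numpy as np
from scipy.optimize import linprog

def l1_reconstruct(A, y):
    """min ||x||_1  s.t.  Ax = y, via the split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
x0 = np.zeros(10)
x0[[2, 7]] = [1.5, -2.0]               # a 2-sparse signal of length n = 10
A = rng.standard_normal((5, 10))       # m = 5 random Gaussian measurements
x_hat = l1_reconstruct(A, A @ x0)      # feasible, with ||x_hat||_1 <= ||x0||_1
```

Because the true signal is itself feasible, the LP optimum can never have larger ℓ1 norm than the true signal; the phase-transition results quoted above characterize when this optimum coincides with the sparsest solution.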
1206.3953
2018322127
Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in Krzakala et al (2012 Phys. Rev. X 2 021005) a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performances regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically.
In the noiseless case of CS it is very intuitive that exact reconstruction of the signal is in principle possible if and only if the number of measurements is larger than the number of non-zero components of the signal, @math . In a more generic case, for instance in the presence of measurement noise, it is not straightforward to compute the best achievable mean-squared error in reconstruction. These theoretical optimality limits were analyzed rigorously in very general cases by @cite_22 @cite_28 . These results agree with the non-rigorous replica method as developed for CS e.g. in @cite_3 @cite_42 @cite_15 . Here we analyze the theoretically optimal reconstruction using the replica method as well (and make explicit its connection with the density evolution).
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_42", "@cite_3", "@cite_15" ], "mid": [ "2951271920", "", "2139053635", "2090842051", "1981051810" ], "abstract": [ "Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonlinear, optimal linear and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. The optimal phase transition threshold is determined as a functional of the input distribution and compared to suboptimal thresholds achieved by popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime.", "", "We consider the problem of reconstructing an N-dimensional continuous vector x from P constraints which are generated from its linear transformation under the assumption that the number of non-zero elements of x is typically limited to ρN (0≤ρ≤1). Problems of this type can be solved by minimizing a cost function with respect to the Lp-norm , subject to the constraints under an appropriate condition. For several values of p, we assess a typical case limit αc(ρ), which represents a critical relation between α = P N and ρ for successfully reconstructing the original vector by the minimization for typical situations in the limit while keeping α finite, utilizing the replica method. 
For p = 1, αc(ρ) is considerably smaller than its worst case counterpart, which has been rigorously derived in the existing literature on information theory.", "The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an -dimensional vector “decouples” as scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdu. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.", "Compressed sensing (CS) is an important recent advance that shows how to reconstruct sparse high dimensional signals from surprisingly small numbers of random measurements. The nonlinear nature of the reconstruction process poses a challenge to understanding the performance of CS. We employ techniques from the statistical physics of disordered systems to compute the typical behavior of CS as a function of the signal sparsity and measurement density. 
We find surprising and useful regularities in the nature of errors made by CS, a new phase transition which reveals the possibility of CS for nonnegative signals without optimization, and a new null model for sparse regression." ] }
1206.3953
2018322127
Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in Krzakala et al (2012 Phys. Rev. X 2 021005) a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performances regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically.
The performance of the belief propagation algorithm can be analyzed analytically in the large system limit. This can be done either using the replica method, as in @cite_11 , or using density evolution. An asymptotic density-evolution-like analysis of the AMP algorithm, called state evolution, was developed in @cite_1 and, more generally, in @cite_13 . State evolution is the analog, for dense graphs, of density evolution for sparse ones. A general analysis of algorithmic phase transitions for G-AMP was presented in @cite_34 . In this paper we perform the same density evolution analysis for other variants of the problem (with learning, when the signal model does not match the signal distribution, with noise, etc.), without rigorous proofs. Our main point is to analyze and understand the phase transitions that pose algorithmic barriers to message passing reconstruction.
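The state-evolution recursion described above can be sketched numerically. The following toy implementation is ours, with illustrative parameter values and a Bernoulli-Gaussian signal model that are assumptions, not the paper's setup: it iterates the one-dimensional MSE map for AMP with the soft-thresholding denoiser, estimating the expectation by Monte Carlo. Above the algorithmic phase transition the effective noise variance flows to zero; below it the recursion gets stuck at a nonzero fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft(u, t):
    """Soft-thresholding denoiser eta(u; t)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def state_evolution(rho, delta, lam, sigma_n2=0.0, iters=50, n_mc=200_000):
    """One-dimensional state-evolution recursion for AMP with the
    soft-thresholding denoiser and a Bernoulli-Gaussian signal:
        tau_{t+1}^2 = sigma_n^2 + (1/delta) * E[(eta(X + tau_t Z; lam*tau_t) - X)^2],
    with the expectation estimated by Monte Carlo over fixed samples."""
    x = rng.standard_normal(n_mc) * (rng.random(n_mc) < rho)  # Bernoulli-Gaussian X
    z = rng.standard_normal(n_mc)                             # effective Gaussian noise Z
    tau2 = sigma_n2 + np.mean(x**2) / delta                   # trivial-estimate initialization
    for _ in range(iters):
        tau = np.sqrt(tau2)
        mse = np.mean((soft(x + tau * z, lam * tau) - x) ** 2)
        tau2 = sigma_n2 + mse / delta
    return tau2

# Above the phase transition the effective noise collapses to ~0 ...
assert state_evolution(rho=0.1, delta=0.6, lam=1.5) < 1e-6
# ... while severe undersampling gets stuck at a nonzero fixed point.
assert state_evolution(rho=0.1, delta=0.12, lam=1.5) > 1e-3
```

The same scalar iteration, run over a grid of (rho, delta), traces out the algorithmic phase boundary that the text refers to.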
{ "cite_N": [ "@cite_13", "@cite_34", "@cite_1", "@cite_11" ], "mid": [ "2610971674", "2103539935", "2082029531", "2550925785" ], "abstract": [ "“Approximate message passing” (AMP) algorithms have proved to be effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper, we provide rigorous foundation to state evolution. We prove that indeed it holds asymptotically in the large system limit for sensing matrices with independent and identically distributed Gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs. The proof technique is fundamentally different from the standard approach to density evolution, in that it copes with a large number of short cycles in the underlying factor graph. It relies instead on a conditioning technique recently developed by Erwin Bolthausen in the context of spin glass theory.", "Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to approximate message passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. 
This paper gives several examples including scalar denoisers not derived from convex penalization-the firm shrinkage nonlinearity and the minimax nonlinearity-and also nonscalar denoisers-block thresholding, monotone regression, and total variation minimization. Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y=Ax0. Here, A is an n×N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0 by AMP is given by δ = M(ε|Denoiser) where M(ε|Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem. We prove that this formula follows from state evolution and present numerical results validating it in a wide range of settings. The above formula generates numerous new insights, both in the scalar and in the nonscalar cases.", "Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. 
We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.", "Compressed sensing deals with the reconstruction of a high-dimensional signal from far fewer linear measurements, where the signal is known to admit a sparse representation in a certain linear space. The asymptotic scaling of the number of measurements needed for reconstruction as the dimension of the signal increases has been studied extensively. This work takes a fundamental perspective on the problem of inferring about individual elements of the sparse signal given the measurements, where the dimensions of the system become increasingly large. Using the replica method, the outcome of inferring about any fixed collection of signal elements is shown to be asymptotically decoupled, i.e., those elements become independent conditioned on the measurements. Furthermore, the problem of inferring about each signal element admits a single-letter characterization in the sense that the posterior distribution of the element, which is a sufficient statistic, becomes asymptotically identical to the posterior of inferring about the same element in scalar Gaussian noise. The result leads to simple characterization of all other elemental metrics of the compressed sensing problem, such as the mean squared error and the error probability for reconstructing the support set of the sparse signal. 
Finally, the single-letter characterization is rigorously justified in the special case of sparse measurement matrices where belief propagation becomes asymptotically optimal." ] }
1206.3953
2018322127
Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in Krzakala et al (2012 Phys. Rev. X 2 021005) a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performances regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically.
In cases when the signal distribution is not known, we can use expectation maximization (EM) to learn the parameters of the signal model @cite_38 . EM learning with the expectation step performed by BP was done e.g. in @cite_17 . In the context of CS, EM was applied together with message passing reconstruction in @cite_24 . An independent implementation of the same ideas also appeared in @cite_46 under the name EM-GAMP. All the predictions made in the present paper thus also apply to the EM-GAMP algorithm.
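A minimal sketch of the EM idea for a single parameter, the sparsity rho, under our illustrative reduction to the effective scalar AWGN channel that message passing provides (the Bernoulli-Gaussian prior and all parameter values here are assumptions, not the paper's full EM scheme): the E-step computes each component's posterior probability of being nonzero, and the M-step re-estimates rho as the average of those posteriors.

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss(y, v):
    """Zero-mean Gaussian density with variance v."""
    return np.exp(-y**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def em_rho(y, sigma2, rho0=0.5, iters=30):
    """EM iteration for the sparsity rho of a Bernoulli-Gaussian prior,
    x ~ rho*N(0,1) + (1-rho)*delta_0, observed through the scalar AWGN
    channel y = x + sigma*z.
    E-step: posterior nonzero probability per component.
    M-step: rho <- average of those posteriors."""
    rho = rho0
    for _ in range(iters):
        p1 = rho * gauss(y, 1.0 + sigma2)   # likelihood of y given x nonzero
        p0 = (1 - rho) * gauss(y, sigma2)   # likelihood of y given x = 0
        rho = np.mean(p1 / (p1 + p0))       # M-step update
    return rho

# Synthetic check with true rho = 0.2 and noise variance 0.01
rho_true, sigma2 = 0.2, 0.01
n = 500_000
x = rng.standard_normal(n) * (rng.random(n) < rho_true)
y = x + np.sqrt(sigma2) * rng.standard_normal(n)
assert abs(em_rho(y, sigma2) - rho_true) < 0.01
```

In the full algorithm the same two steps are interleaved with the message passing iterations, and the other signal-model parameters (mean, variance, noise level) are updated analogously.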
{ "cite_N": [ "@cite_24", "@cite_38", "@cite_46", "@cite_17" ], "mid": [ "2073868986", "2049633694", "2543631487", "2004531067" ], "abstract": [ "Compressed sensing is triggering a major evolution in signal acquisition. It consists in sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct exactly the signal with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired from the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.", "", "The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual l 1 -regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. 
To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.", "In this paper we extend our previous work on the stochastic block model, a commonly used generative model for social and biological networks, and the problem of inferring functional groups or communities from the topology of the network. We use the cavity method of statistical physics to obtain an asymptotically exact analysis of the phase diagram. We describe in detail properties of the detectability/undetectability phase transition and the easy/hard phase transition for the community detection problem. Our analysis translates naturally into a belief propagation algorithm for inferring the group memberships of the nodes in an optimal way, i.e., that maximizes the overlap with the underlying group memberships, and learning the underlying parameters of the block model. Finally, we apply the algorithm to two examples of real-world networks and discuss its performance." ] }
1206.3953
2018322127
Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in Krzakala et al (2012 Phys. Rev. X 2 021005) a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performances regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically.
Based on our understanding of the properties of the algorithmic barrier encountered by the message passing reconstruction algorithm, we have designed special seeded measurement matrices for which reconstruction is possible even at close-to-optimal measurement rates. These matrices are based on the idea of spatial coupling, which was first developed for error correcting codes @cite_39 @cite_10 ; see @cite_31 for a more transparent understanding and further results. Several other applications of the same idea exist in different contexts; for an overview see @cite_35 .
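A seeded, band-diagonal measurement matrix of the kind described here can be assembled as follows (the block sizes, measurement rates, and coupling strength are illustrative assumptions, not the paper's recommended values): the first block of rows over-samples the first block of the signal, seeding a reconstruction front that then propagates along the coupling band.

```python
import numpy as np

rng = np.random.default_rng(3)

def seeded_matrix(N, L=10, alpha_seed=0.6, alpha_bulk=0.3, J=0.2):
    """Band-diagonal 'seeded' measurement matrix (illustrative parameters).
    The signal is split into L equal blocks of size n = N/L.  Block-row p
    measures block-column q with iid Gaussian entries of variance 1/N on
    the diagonal (variance J/N on the off-diagonal coupling, |p - q| = 1)
    and is zero outside the band.  The first block-row is over-sampled
    (rate alpha_seed > alpha_bulk) to seed the reconstruction."""
    n = N // L
    rows = [int(round(n * (alpha_seed if p == 0 else alpha_bulk))) for p in range(L)]
    blocks = []
    for p in range(L):
        row = []
        for q in range(L):
            if abs(p - q) <= 1:
                var = 1.0 if p == q else J
                row.append(np.sqrt(var / N) * rng.standard_normal((rows[p], n)))
            else:
                row.append(np.zeros((rows[p], n)))
        blocks.append(row)
    return np.block(blocks)

F = seeded_matrix(N=1000, L=10)
# 60 seed rows plus 9 x 30 bulk rows: overall rate 0.33, close to alpha_bulk
assert F.shape == (330, 1000)
```

Because only one block pays the higher seed rate, the overall measurement rate stays close to alpha_bulk, which is what allows operation near the optimal rate.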
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_10", "@cite_39" ], "mid": [ "2179644698", "2172679141", "2156991284", "1991528082" ], "abstract": [ "We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble that fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in this ensemble have this property. The quantifier universal refers to the single ensemble code that is good for all channels but we assume that the channel is known at the receiver. The key technical result is a proof that, under belief-propagation decoding, spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems.", "Convolutional low-density parity-check (LDPC) ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing functions of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism that explains why “convolutional-like” or “spatially coupled” codes perform so well. 
In essence, the spatial coupling of individual codes increases the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum a posteriori (MAP) threshold of the underlying ensemble. For this reason, we call this phenomenon “threshold saturation.” This gives an entirely new way of approaching capacity. One significant advantage of this construction is that one can create capacity-approaching ensembles with an error correcting radius that is increasing in the blocklength. Although we prove the “threshold saturation” only for a specific ensemble and for the binary erasure channel (BEC), empirically the phenomenon occurs for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar saturation of the “dynamical” threshold occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms and new techniques for analysis.", "A threshold analysis of terminated generalized LDPC convolutional codes (GLDPC CCs) is presented for the binary erasure channel. Different ensembles of protograph-based GLDPC CCs are considered, including braided block codes (BBCs). It is shown that the terminated PG-GLDPC CCs have better thresholds than their block code counterparts. Surprisingly, our numerical analysis suggests that for large termination factors the belief propagation decoding thresholds of PG-GLDPC CCs coincide with the ML decoding thresholds of the corresponding PG-GLDPC block codes.", "We present a class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes. The performance of this decoding is close to the performance of turbo decoding. Our simulation shows that for the rate R=1 2 binary codes, the performance is substantially better than for ordinary convolutional codes with the same decoding complexity per information bit. 
As an example, we constructed convolutional codes with memory M=1025, 2049, and 4097 showing that we are about 1 dB from the capacity limit at a bit-error rate (BER) of 10^-5 and a decoding complexity of the same magnitude as a Viterbi decoder for codes having memory M=10." ] }
1206.3953
2018322127
Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in Krzakala et al (2012 Phys. Rev. X 2 021005) a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performances regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically.
The use of spatial coupling was first suggested for compressed sensing in @cite_23 , where the authors observed an improvement over reconstruction with homogeneous measurement matrices (see Fig. 5 in @cite_23 ). They did not, however, combine all the key ingredients needed to achieve reconstruction close to the theoretical limit @math , as we did in @cite_24 . Their implementation of belief propagation also did not use the simplification under which only the mean and variance of the messages are needed, hence their algorithm was not competitive in terms of speed.
{ "cite_N": [ "@cite_24", "@cite_23" ], "mid": [ "2073868986", "2118473837" ], "abstract": [ "Compressed sensing is triggering a major evolution in signal acquisition. It consists in sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct exactly the signal with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired from the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.", "Recently, it was observed that spatially-coupled LDPC code ensembles approach the Shannon capacity for a class of binary-input memoryless symmetric (BMS) channels. The fundamental reason for this was attributed to a threshold saturation phenomena derived in [1]. In particular, it was shown that the belief propagation (BP) threshold of the spatially coupled codes is equal to the maximum a posteriori (MAP) decoding threshold of the underlying constituent codes. In this sense, the BP threshold is saturated to its maximum value. Moreover, it has been empirically observed that the same phenomena also occurs when transmitting over more general classes of BMS channels. In this paper, we show that the effect of spatial coupling is not restricted to the realm of channel coding. The effect of coupling also manifests itself in compressed sensing. 
Specifically, we show that spatially-coupled measurement matrices have an improved sparsity to sampling threshold for reconstruction algorithms based on verification decoding. For BP-based reconstruction algorithms, this phenomenon is also tested empirically via simulation. At the block lengths accessible via simulation, the effect is rather small but, based on the threshold analysis, we believe this warrants further study." ] }
1206.3953
2018322127
Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in Krzakala et al (2012 Phys. Rev. X 2 021005) a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretical optimal limits. In this paper, we give a more thorough presentation of our approach, and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different distributions of signals, and discuss the best possible reconstruction performances regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data and analyze their performance asymptotically.
We introduced seeded measurement matrices for CS in @cite_24 , and showed there, both numerically and using the density evolution, that with such matrices it is possible to achieve the information-theoretically optimal measurement rates. The design was motivated by the idea of crystal nucleation and growth in statistical physics. Subsequent work @cite_18 justified this threshold saturation rigorously, in the special case when the signal model corresponds to the signal distribution but also more generally, using the concept of Rényi information dimension instead of sparsity, as in @cite_43 @cite_28 . Numerical experiments with seeded non-random (Gabor-type) matrices were also performed in @cite_8 .
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_28", "@cite_24", "@cite_43" ], "mid": [ "2571527823", "2094501628", "", "2073868986", "1986051087" ], "abstract": [ "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX.", "We study the problem of sampling a random signal with sparse support in frequency domain. Shannon famously considered a scheme that instantaneously samples the signal at equispaced times. He proved that the signal can be reconstructed as long as the sampling rate exceeds twice the bandwidth (Nyquist rate). Candes, Romberg, Tao introduced a scheme that acquires instantaneous samples of the signal at random times. 
They proved that the signal can be uniquely and efficiently reconstructed, provided the sampling rate exceeds the frequency support of the signal, times logarithmic factors. In this paper we consider a probabilistic model for the signal, and a sampling scheme inspired by the idea of spatial coupling in coding theory. Namely, we propose to acquire non-instantaneous samples at random times. Mathematically, this is implemented by acquiring a small random subset of Gabor coefficients. We show empirically that this scheme achieves correct reconstruction as soon as the sampling rate exceeds the frequency support of the signal, thus reaching the information theoretic limit.", "", "Compressed sensing is triggering a major evolution in signal acquisition. It consists in sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct exactly the signal with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired from the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.", "In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing is a lossless coding strategy for analog sources by means of multiplication by real-valued matrices. 
In this paper we study almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor or decompressor is constrained by various regularity conditions, in particular linearity of the compressor and Lipschitz continuity of the decompressor. The fundamental limit is shown to the information dimension proposed by Renyi in 1959." ] }
1206.4327
2161116977
Social advertising uses information about consumers' peers, including peer affiliations with a brand, product, organization, etc., to target ads and contextualize their display. This approach can increase ad efficacy for two main reasons: peers' affiliations reflect unobserved consumer characteristics, which are correlated along the social network; and the inclusion of social cues (i.e., peers' association with a brand) alongside ads affect responses via social influence processes. For these reasons, responses may be increased when multiple social signals are presented with ads, and when ads are affiliated with peers who are strong, rather than weak, ties. We conduct two very large field experiments that identify the effect of social cues on consumer responses to ads, measured in terms of ad clicks and the formation of connections with the advertised entity. In the first experiment, we randomize the number of social cues present in word-of-mouth advertising, and measure how responses increase as a function of the number of cues. The second experiment examines the effect of augmenting traditional ad units with a minimal social cue (i.e., displaying a peer's affiliation below an ad in light grey text). On average, this cue causes significant increases in ad performance. Using a measurement of tie strength based on the total amount of communication between subjects and their peers, we show that these influence effects are greatest for strong ties. Our work has implications for ad optimization, user interface design, and central questions in social science research.
Online networks are focused on sharing information, and as such, have been studied extensively in the context of information diffusion. Large-scale observational studies explore a variety of diffusion-like phenomena in contexts including the apparent spread of links on blogs @cite_34 and Twitter @cite_8 , the joining of groups @cite_39 , product recommendations @cite_4 , and the adoption of user-contributed content in virtual economies @cite_10 . Data from these studies are highly suggestive of social influence: the probability of adopting a behavior increases with the number of adopting peers. However, as noted by Anagnostopoulos08kdd and aral2009 , such studies can easily overestimate the role of influence in online behavior because of homophily. shalizi2011 go further to illustrate that statistical methods cannot adequately control for confounding factors in observational studies without the use of very strong assumptions.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_39", "@cite_34", "@cite_10" ], "mid": [ "2105535951", "1967579779", "2432978112", "2107666336", "2107260313" ], "abstract": [ "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.", "In this paper we investigate the attributes and relative influence of 1.6M Twitter users by tracking 74 million diffusion events that took place on the Twitter follower graph over a two month interval in 2009. Unsurprisingly, we find that the largest cascades tend to be generated by users who have been influential in the past and who have a large number of followers. We also find that URLs that were rated more interesting and or elicited more positive feelings by workers on Mechanical Turk were more likely to spread. In spite of these intuitive results, however, we find that predictions of which particular user or URL will generate large cascades are relatively unreliable. We conclude, therefore, that word-of-mouth diffusion can only be harnessed reliably by targeting large numbers of potential influencers, thereby capturing average effects. 
Finally, we consider a family of hypothetical marketing strategies, defined by the relative cost of identifying versus compensating potential \"influencers.\" We find that although under some circumstances, the most influential users are also the most cost-effective, under a wide range of plausible assumptions the most cost-effective performance can be realized using \"ordinary influencers\"---individuals who exert average or even less-than-average influence.", "The processes by which communities come together, attract new members, and develop over time is a central research issue in the social sciences - political movements, professional organizations, and religious denominations all provide fundamental examples of such communities. In the digital domain, on-line groups are becoming increasingly prominent due to the growth of community and social networking sites such as MySpace and LiveJournal. However, the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities has left most basic questions about the evolution of such groups largely unresolved: what are the structural features that influence whether individuals will join communities, which communities will grow rapidly, and how do the overlaps among pairs of communities change over time.Here we address these questions using two large sources of data: friendship links and community membership on LiveJournal, and co-authorship and conference publications in DBLP. Both of these datasets provide explicit user-defined communities, where conferences serve as proxies for communities in DBLP. We study how the evolution of these communities relates to properties such as the structure of the underlying social networks. We find that the propensity of individuals to join communities, and of communities to grow rapidly, depends in subtle ways on the underlying network structure. 
For example, the tendency of an individual to join a community is influenced not just by the number of friends he or she has within the community, but also crucially by how those friends are connected to one another. We use decision-tree techniques to identify the most significant structural determinants of these properties. We also develop a novel methodology for measuring movement of individuals between communities, and show how such movements are closely aligned with changes in the topics of interest within the communities.", "Beyond serving as online diaries, weblogs have evolved into a complex social structure, one which is in many ways ideal for the study of the propagation of information. As weblog authors discover and republish information, we are able to use the existing link structure of blogspace to track its flow. Where the path by which it spreads is ambiguous, we utilize a novel inference scheme that takes advantage of data describing historical, repeating patterns of \"infection.\" Our paper describes this technique as well as a visualization system that allows for the graphical tracking of information flow.", "Social influence determines to a large extent what we adopt and when we adopt it. This is just as true in the digital domain as it is in real life, and has become of increasing importance due to the deluge of user-created content on the Internet. In this paper, we present an empirical study of user-to-user content transfer occurring in the context of a time-evolving social network in Second Life, a massively multiplayer virtual world. We identify and model social influence based on the change in adoption rate following the actions of one's friends and find that the social network plays a significant role in the adoption of content. Adoption rates quicken as the number of friends adopting increases and this effect varies with the connectivity of a particular user. 
We further find that sharing among friends occurs more rapidly than sharing among strangers, but that content that diffuses primarily through social influence tends to have a more limited audience. Finally, we examine the role of individuals, finding that some play a more active role in distributing content than others, but that these influencers are distinct from the early adopters." ] }
1206.3437
1533547639
Recent works on cost based relaxations have improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey over solving asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness to pathological instances and highlight the fact that graph structure can significantly improve search heuristics behavior. Finally, we show that our approach outperforms current state of the art results.
However, other implied constraints provide additional filtering that may help the resolution process. For instance, Quesada @cite_12 suggested the general propagator which maintains the transitive closure and the dominance tree of the graph variable. However, its running time, @math in the worst case, makes it unlikely to be profitable in practice. A faster constraint, also based on the concept of dominance, is the constraint. It is nothing other than a simplification of the constraint @cite_26 , recently improved to a @math worst-case time complexity @cite_18 . Given a graph variable @math and a node @math , such a constraint ensures that @math is an arborescence rooted in node @math . More precisely, it enforces GAC over the conjunction of the following properties: @math has no circuit, each node is reachable from @math , and each node but @math has exactly one predecessor. Such a filtering can also be used to define the by switching @math with @math and predecessors with successors.
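As a concrete reading of the conjunction above, the following sketch checks the three properties on a fully instantiated directed graph: the root has no predecessor, every other node has exactly one, and every node is reachable from the root (which, given unique predecessors, also rules out circuits). Function and variable names are ours, not from any solver.

```python
def is_arborescence(n, arcs, r):
    """n nodes labeled 0..n-1, arcs = set of (u, v) directed arcs, root r."""
    preds = {v: [u for (u, w) in arcs if w == v] for v in range(n)}
    # The root has no predecessor; every other node has exactly one.
    if preds[r]:
        return False
    if any(len(preds[v]) != 1 for v in range(n) if v != r):
        return False
    # Every node must be reachable from r. With unique predecessors, a
    # circuit could never be entered from outside, so reachability of all
    # nodes also certifies acyclicity.
    succ = {u: [] for u in range(n)}
    for (u, v) in arcs:
        succ[u].append(v)
    seen, stack = {r}, [r]
    while stack:
        u = stack.pop()
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

ok = is_arborescence(3, {(0, 1), (1, 2)}, 0)     # a path rooted at 0
bad = is_arborescence(3, {(1, 2), (2, 1)}, 0)    # circuit 1 -> 2 -> 1
```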
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_12" ], "mid": [ "2156899428", "", "2247768113" ], "abstract": [ "This paper revisits the tree constraint introduced in [2] which partitions the nodes of a n-nodes, m-arcs directed graph into a set of node-disjoint anti-arborescences for which only certain nodes can be tree roots. We introduce a new filtering algorithm that enforces generalized arc-consistency in O(n + m) time while the original filtering algorithm reaches O(nm) time. This result allows to tackle larger scale problems involving graph partitioning.", "", "Constrained path problems have to do with finding paths in graphs subject to constraints. We present a constraint programming approach for solving the Ordered disjoint-paths problem (ODP), i.e., the Disjoint-paths problem where the pairs are associated with ordering constraints. In our approach, we reduce ODP to the Ordered simple path with mandatory nodes problem (OSPMN), i.e., the problem of finding a simple path containing a set of mandatory nodes in a given order. The reduction of the problem is motivated by the fact that we have an appropriate way of dealing with OSPMN based on DomReachability, a propagator that implements a generalized reachability constraint on a directed graph based on the concept of graph variables. The DomReachability constraint has three arguments: (1) a flow graph, i.e., a directed graph with a source node; (2) the dominance relation graph on nodes and edges of the flow graph; and (3) the transitive closure of the flow graph. Our experimental evaluation of DomReachability shows that it provides strong pruning, obtaining solutions with very little search. Furthermore, we show that DomReachability is also useful for defining a good labeling strategy. These experimental results give evidence that DomReachability is a useful primitive for solving constrained path problems over directed graphs." ] }
1206.3437
1533547639
Recent works on cost based relaxations have improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey over solving asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness to pathological instances and highlight the fact that graph structure can significantly improve search heuristics behavior. Finally, we show that our approach outperforms current state of the art results.
CP models often embed relaxation based constraints, to provide inference from costs. Fischetti and Toth @cite_29 suggested a general bounding procedure for combining different relaxations of the same problem.
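A minimal illustration of that additive scheme: apply one relaxation, keep its bound, and solve the next relaxation on the first one's reduced costs; the partial bounds then sum to a valid lower bound. Here the two "relaxations" are simply row-minimum and column-minimum reductions of an ATSP cost matrix, a deliberately weak stand-in for the relaxations actually combined in @cite_29 .

```python
def additive_bound(costs):
    """Additive bounding on an ATSP cost matrix (self-arcs ignored)."""
    n = len(costs)
    reduced = [row[:] for row in costs]
    bound = 0
    # Relaxation 1: every city is left once -> sum of row minima.
    for i in range(n):
        m = min(reduced[i][j] for j in range(n) if j != i)
        bound += m
        for j in range(n):
            if j != i:
                reduced[i][j] -= m   # reduced costs fed to relaxation 2
    # Relaxation 2, on the reduced matrix: every city is entered once.
    for j in range(n):
        m = min(reduced[i][j] for i in range(n) if i != j)
        bound += m
        for i in range(n):
            if i != j:
                reduced[i][j] -= m
    return bound

costs = [[0, 5, 9],
         [4, 0, 3],
         [7, 6, 0]]
lb = additive_bound(costs)  # here the optimal tour 0->1->2->0 costs 15
```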
{ "cite_N": [ "@cite_29" ], "mid": [ "2013561074" ], "abstract": [ "In this paper, new lower bounds for the asymmetric travelling salesman problem are presented, based on spanning arborescences. The new bounds are combined in an additive procedure whose theoretical performance is compared with that of the Balas and Christofides procedure (1981). Both procedures have been imbedded in a simple branch and bound algorithm and experimentally evaluated on hard test problems." ] }
1206.3437
1533547639
Recent works on cost based relaxations have improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey over solving asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness to pathological instances and highlight the fact that graph structure can significantly improve search heuristics behavior. Finally, we show that our approach outperforms current state of the art results.
A stronger relaxation is the weighted version of the constraint, corresponding to the Minimum Assignment Problem (MAP). It requires @math time @cite_25 to compute a first minimum cost assignment but then @math time @cite_0 to check consistency and filter incrementally. Some interesting evaluations are provided by @cite_10 , but are mainly related to the TSP with time windows constraints.
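The MAP relaxation itself is easy to state: keep the degree constraints (each city left and entered exactly once) and drop subtour elimination. The toy sketch below brute-forces the assignment over permutations, standing in for the polynomial Hungarian algorithm behind the complexities quoted above; self-loops are forbidden via infinite diagonal costs.

```python
from itertools import permutations

INF = float("inf")

def map_bound(costs):
    """Minimum assignment with no self-loops: a lower bound on the ATSP."""
    n = len(costs)
    best = INF
    for perm in permutations(range(n)):  # perm[i] = successor of city i
        if any(perm[i] == i for i in range(n)):
            continue                     # skip assignments with self-loops
        best = min(best, sum(costs[i][perm[i]] for i in range(n)))
    return best

costs = [[INF, 2, 9, 10],
         [1, INF, 6, 4],
         [15, 7, INF, 8],
         [6, 3, 12, INF]]
lb = map_bound(costs)
```

Any tour is a feasible assignment, so `map_bound` never exceeds the optimal tour cost; here the optimum assignment happens to be the tour 0->2->3->1->0 of cost 21.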
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_25" ], "mid": [ "2028040632", "2120365438", "2107034441" ], "abstract": [ "This paper analyzes the most efficient algorithms for the Linear Min-Sum Assignment Problem and shows that they derive from a common basic procedure. For each algorithm, we evaluate the computational complexity and the average performance on randomly-generated test problems. Efficient FORTRAN implementations for the case of complete and sparse matrices are given.", "TheTraveling Salesman Problem with Time Windows (TSPTW) is the problem of finding a minimum-cost path visiting a set of cities exactly once, where each city must be visited within a specific time window. We propose a hybrid approach for solving the TSPTW that merges Constraint Programming propagation algorithms for the feasibility viewpoint (find a path), and Operations Research techniques for coping with the optimization perspective (find the best path). We show with extensive computational results that the synergy between Operations Research optimization techniques embedded in global constraints, and Constraint Programming constraint solving techniques, makes the resulting framework effective in the TSPTW context also if these results are compared with state-of-the-art algorithms from the literature.", "We present an incomplete filtering algorithm for the circuit constraint. The filter removes redundant values by eliminating nonhamiltonian edges from the associated graph. We identify nonhamiltonian edges by analyzing a smaller graph with labeled edges that is defined on a separator of the original graph. The complexity of the procedure for each separator S is approximately O(|S|5). We found that it identified all infeasible instances and eliminated about one-third of the redundant domain elements in feasible instances." ] }
1206.3437
1533547639
Recent works on cost based relaxations have improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey over solving asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness to pathological instances and highlight the fact that graph structure can significantly improve search heuristics behavior. Finally, we show that our approach outperforms current state of the art results.
An improvement of the MST relaxation is the approach of Held and Karp @cite_19 , adapted for CP by @cite_3 . It is the Lagrangian MST relaxation with a policy for updating Lagrangian multipliers that provides a fast convergence. The idea of this method is to iteratively compute MSTs that converge towards a path by adding penalties on arc costs according to degree constraint violations. It must be noticed that since arc costs change from one iteration to another, Prim's algorithm is better suited than Kruskal's, which requires sorting edges. Moreover, to our knowledge, neither algorithm can be applied incrementally.
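A rough sketch of that scheme, under the Hamiltonian-path reading used above: compute an MST under penalized costs `c[i][j] + lam[i] + lam[j]`, raise the penalty of nodes of degree above 2, and keep the best value of `mst_cost - 2*sum(lam)`, which stays a valid lower bound as long as the multipliers are nonnegative. The fixed-step update is a naive stand-in for the convergence policy of @cite_19 @cite_3 ; all names are illustrative.

```python
def prim_mst(n, cost):
    """Return (total cost, node degrees) of an MST of the complete graph."""
    in_tree = [False] * n
    best = [float("inf")] * n
    parent = [-1] * n
    best[0] = 0.0
    total, deg = 0.0, [0] * n
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        if parent[u] >= 0:
            deg[u] += 1
            deg[parent[u]] += 1
        for v in range(n):
            if not in_tree[v] and cost[u][v] < best[v]:
                best[v] = cost[u][v]
                parent[v] = u
    return total, deg

def lagrangian_bound(c, iters=50, step=0.5):
    n = len(c)
    lam = [0.0] * n
    best_bound = float("-inf")
    for _ in range(iters):
        pen = [[c[i][j] + lam[i] + lam[j] for j in range(n)] for i in range(n)]
        w, deg = prim_mst(n, pen)
        best_bound = max(best_bound, w - 2 * sum(lam))
        # Penalize nodes used more than twice; keep multipliers >= 0.
        lam = [max(0.0, lam[i] + step * (deg[i] - 2)) for i in range(n)]
    return best_bound

c = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
lb = lagrangian_bound(c)  # optimal Hamiltonian path 0-1-3-2 costs 14
```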
{ "cite_N": [ "@cite_19", "@cite_3" ], "mid": [ "2058748455", "1517248951" ], "abstract": [ "The relationship between the symmetric traveling-salesman problem and the minimum spanning tree problem yields a sharp lower bound on the cost of an optimum tour. An efficient iterative method for approximating this bound closely from below is presented. A branch-and-bound procedure based upon these considerations has easily produced proven optimum solutions to all traveling-salesman problems presented to it, ranging in size up to sixty-four cities. The bounds used are so sharp that the search trees are minuscule compared to those normally encountered in combinatorial problems of this type.", "So far, edge-finding is the only one major filtering algorithm for unary resource constraint with time complexity O(nlog n). This paper proposes O(nlog n) versions of another two filtering algorithms: not-first not-last and propagation of detectable precedences. These two algorithms can be used together with the edge-finding to further improve the filtering. This paper also propose new O(nlog n) implementation of fail detection (overload checking)." ] }
1206.3437
1533547639
Recent works on cost based relaxations have improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey over solving asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness to pathological instances and highlight the fact that graph structure can significantly improve search heuristics behavior. Finally, we show that our approach outperforms current state of the art results.
A more accurate relaxation is the Minimum Spanning Arborescence (MSA) relaxation, since it does not relax the orientation of the graph. This relaxation has been studied by @cite_29 @cite_28 , who provide a @math time filtering algorithm based on primal-dual linear programs. The best algorithm for computing an MSA has been provided by @cite_4 . Their algorithm runs in @math worst-case time, but it does not provide the reduced costs used for pruning. Thus, it could be used to create a constraint with a @math time consistency check, but the complete filtering algorithm remains in @math time. The Lagrangian MSA relaxation, with an MSA computation based on Edmonds' algorithm, has been suggested in @cite_17 . This method was very accurate but unfortunately unstable. Also, @cite_3 report that the MSA-based Held and Karp scheme leads to disappointing results.
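A tiny sketch related to the MSA relaxation: the first phase of Edmonds' algorithm selects, for every node except the root, its cheapest incoming arc. The cost of that selection is already a valid lower bound on the MSA (the full algorithm then contracts the cycles the selection may create). Names are hypothetical, not solver code.

```python
def msa_lower_bound(n, arcs, root):
    """arcs: dict (u, v) -> cost. Sum of cheapest in-arcs of non-root nodes."""
    lb = 0
    for v in range(n):
        if v == root:
            continue
        # Every node but the root needs one incoming arc in any arborescence.
        lb += min(c for (u, w), c in arcs.items() if w == v)
    return lb

arcs = {(0, 1): 5, (0, 2): 1, (2, 1): 2, (1, 3): 3, (2, 3): 7}
lb = msa_lower_bound(4, arcs, 0)
```

On this instance the cheapest in-arc selection `{(0,2), (2,1), (1,3)}` is cycle-free, so the bound of 6 is in fact the MSA cost itself.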
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_29", "@cite_3", "@cite_17" ], "mid": [ "1867111234", "1542480886", "2013561074", "1517248951", "109225361" ], "abstract": [ "The paper introduces the MST(G,T,W) constraint, which is specified on two graph variables G and T and a vector W of scalar variables. The constraint is satisfied if T is a minimum spanning tree of G, where the edge weights are specified by the entries of W. We develop algorithms that filter the domains of all variables to bound consistency.", "Constraint Programming (CP) has been successfully applied to several combinatorial optimization problems. One of its advantages is the availability of complex global constraints performing efficient propagation and interacting with each other through shared variables. However, CP techniques have shown their limitations in dealing with optimization problems since the link between the objective function and problem decision variables is often quite loose and does not produce an effective propagation. We propose to integrate optimization components in global constraints, aimed at optimally solving a relaxation corresponding to the constraint itself. The optimal solution of the relaxation provides pieces of information which can be exploited in order to perform pruning on the basis of cost-based reasoning. In fact, we exploit reduction rules based on lower bound and reduced costs calculation to remove those branches which cannot improve the best solution found so far. The interest of integrating efficient well-known Operations Research (OR) algorithms into CP is mainly due to the smooth interaction between CP domain reduction and information provided by the relaxation acting on variable domains which can be seen as a communication channel among different techniques.
We have applied this technique to symmetric and asymmetric Traveling Salesman Problem (TSP) instances both because the TSP is an interesting problem arising in many real-life applications, and because pure CP techniques lead to disappointing results for this problem. We have tested the proposed optimization constraints using ILOG solver. Computational results on benchmarks available from literature, and comparison with related approaches are described in the paper. The proposed method on pure TSPs improves the performances of CP solvers, but is still far from the OR state of the art techniques for solving the problem. However, due to the flexibility of the CP framework, we could easily use the same technique on TSP with Time Windows, a time constrained variant of the TSP. For this type of problem, we achieve results that are comparable with state of the art OR results.", "In this paper, new lower bounds for the asymmetric travelling salesman problem are presented, based on spanning arborescences. The new bounds are combined in an additive procedure whose theoretical performance is compared with that of the Balas and Christofides procedure (1981). Both procedures have been imbedded in a simple branch and bound algorithm and experimentally evaluated on hard test problems.", "So far, edge-finding is the only one major filtering algorithm for unary resource constraint with time complexity O(nlog n). This paper proposes O(nlog n) versions of another two filtering algorithms: not-first not-last and propagation of detectable precedences. These two algorithms can be used together with the edge-finding to further improve the filtering. This paper also propose new O(nlog n) implementation of fail detection (overload checking).", "" ] }
1206.3437
1533547639
Recent works on cost based relaxations have improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey over solving asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness to pathological instances and highlight the fact that graph structure can significantly improve search heuristics behavior. Finally, we show that our approach outperforms current state of the art results.
However, very recently, @cite_3 suggested a binary branching heuristic, based on the MST relaxation, that we call . It consists in removing from @math the tree arc of maximum replacement cost, i.e., the arc whose removal would involve the highest cost increase. By tree arc, we mean an arc that appears in the MST of the last iteration of the Held and Karp procedure. Actually, as shown in section , this branching leads to poor results and should not be used.
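A brute-force sketch of the replacement-cost notion behind that heuristic: for each tree edge of an MST, the replacement cost is the cheapest non-tree edge reconnecting the two components obtained by deleting it, minus the deleted edge's cost. All names are illustrative; an efficient implementation would not recompute components per edge.

```python
def kruskal_mst(n, edges):
    """edges: list of (cost, u, v). Returns the list of MST edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for c, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((c, u, v))
    return tree

def max_replacement_edge(n, edges, tree):
    """Tree edge whose deletion forces the largest MST cost increase."""
    best_edge, best_rep = None, float("-inf")
    for deleted in tree:
        comp = list(range(n))
        def find(x):
            while comp[x] != x:
                x = comp[x]
            return x
        # Components of the tree once `deleted` is removed.
        for (c2, a, b) in tree:
            if (c2, a, b) != deleted:
                comp[find(a)] = find(b)
        c, u, v = deleted
        # Cheapest non-tree edge crossing the cut, minus the deleted cost.
        rep = min((c2 for (c2, a, b) in edges
                   if (c2, a, b) not in tree and find(a) != find(b)),
                  default=float("inf")) - c
        if rep > best_rep:
            best_rep, best_edge = rep, (u, v)
    return best_edge, best_rep

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (10, 0, 3)]
tree = kruskal_mst(4, edges)
edge, rep = max_replacement_edge(4, edges, tree)
```

On this 4-cycle the MST is the path of costs 1, 2, 3; every deleted tree edge is replaced by the cost-10 chord, so the cheapest tree edge has the largest replacement cost.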
{ "cite_N": [ "@cite_3" ], "mid": [ "1517248951" ], "abstract": [ "So far, edge-finding is the only one major filtering algorithm for unary resource constraint with time complexity O(nlog n). This paper proposes O(nlog n) versions of another two filtering algorithms: not-first not-last and propagation of detectable precedences. These two algorithms can be used together with the edge-finding to further improve the filtering. This paper also propose new O(nlog n) implementation of fail detection (overload checking)." ] }
1206.3437
1533547639
Recent works on cost based relaxations have improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey over solving asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness to pathological instances and highlight the fact that graph structure can significantly improve search heuristics behavior. Finally, we show that our approach outperforms current state of the art results.
Finally, the TSPTW is solved in @cite_28 by guiding the search with time windows, which means that the efficiency of CP for solving the ATSP should not rely entirely on its branching heuristic.
{ "cite_N": [ "@cite_28" ], "mid": [ "1542480886" ], "abstract": [ "Constraint Programming (CP) has been successfully applied to several combinatorial optimization problems. One of its advantages is the availability of complex global constraints performing efficient propagation and interacting with each other through shared variables. However, CP techniques have shown their limitations in dealing with optimization problems since the link between the objective function and problem decision variables is often quite loose and does not produce an effective propagation. We propose to integrate optimization components in global constraints, aimed at optimally solving a relaxation corresponding to the constraint itself. The optimal solution of the relaxation provides pieces of information which can be exploited in order to perform pruning on the basis of cost-based reasoning. In fact, we exploit reduction rules based on lower bound and reduced costs calculation to remove those branches which cannot improve the best solution found so far. The interest of integrating efficient well-known Operations Research (OR) algorithms into CP is mainly due to the smooth interaction between CP domain reduction and information provided by the relaxation acting on variable domains which can be seen as a communication channel among different techniques. We have applied this technique to symmetric and asymmetric Traveling Salesman Problem (TSP) instances both because the TSP is an interesting problem arising in many real-life applications, and because pure CP techniques lead to disappointing results for this problem. We have tested the proposed optimization constraints using ILOG solver. Computational results on benchmarks available from literature, and comparison with related approaches are described in the paper. The proposed method on pure TSPs improves the performances of CP solvers, but is still far from the OR state of the art techniques for solving the problem.
However, due to the flexibility of the CP framework, we could easily use the same technique on TSP with Time Windows, a time constrained variant of the TSP. For this type of problem, we achieve results that are comparable with state of the art OR results." ] }
1206.3552
2157527521
Many real-world networks are intimately organized according to a community structure. Much research effort has been devoted to develop methods and algorithms that can efficiently highlight this hidden structure of a network, yielding a vast literature on what is called today community detection. Since network representation can be very complex and can contain different variants in the traditional graph model, each algorithm in the literature focuses on some of these properties and establishes, explicitly or implicitly, its own definition of community. According to this definition, each proposed algorithm then extracts the communities, which typically reflect only part of the features of real communities. The aim of this survey is to provide a ‘user manual’ for the community discovery problem. Given a meta definition of what a community in a social network is, our aim is to organize the main categories of community discovery methods based on the definition of community they adopt. Given a desired definition of community and the features of a problem (size of network, direction of edges, multidimensionality, and so on) this review paper is designed to provide a set of approaches that researchers could focus on. The proposed classification of community discovery methods is also useful for putting into perspective the many open directions for further research. © 2011 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 4: 512–546, 2011
In Newman's pioneering work @cite_164 , he organizes historical approaches to community discovery in complex networks according to their traditional fields of application. He presents the most important classical approaches in computer science and sociology, covering algorithms such as spectral bisection @cite_123 and hierarchical clustering @cite_151 . He then reviews newer physical approaches to the community discovery problem, including the well-known edge betweenness @cite_87 and modularity @cite_119 . His paper is very useful for a historical perspective; however, it covers only a few works and obviously does not take into account all the algorithms and categories of methods that have been developed since it was published.
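For concreteness, the modularity measure mentioned above can be computed, in its standard form Q = sum over communities c of (e_c/m - (d_c/2m)^2), for an undirected graph given as an edge list and a node-to-community map. This is a toy sketch of the quality function, not of the detection algorithm in the cited paper.

```python
def modularity(edges, community):
    """Q for an undirected simple graph; community maps node -> label."""
    m = len(edges)
    deg, internal = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if community[u] == community[v]:
            internal[community[u]] = internal.get(community[u], 0) + 1
    q = 0.0
    for c in set(community.values()):
        e_c = internal.get(c, 0)                         # intra-community edges
        d_c = sum(d for u, d in deg.items() if community[u] == c)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by one bridge edge: a clear two-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, community)
```

Here each triangle contributes 3/7 internal edge fraction against a degree fraction of (7/14)^2, so Q = 6/7 - 1/2; grouping everything into one community would instead give Q = 0.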
{ "cite_N": [ "@cite_87", "@cite_119", "@cite_151", "@cite_123", "@cite_164" ], "mid": [ "1971421925", "2095293504", "2015370254", "2114030927", "2125050594" ], "abstract": [ "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.", "We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. 
We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.", "Networks and Relations The Development of Social Network Analysis Handling Relational Data Lines, Direction and Density Centrality and Centralization Components, Cores, and Cliques Positions, Roles, and Clusters Dimensions and Displays Appendix Social Network Packages", "The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is, shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented, and these are compared with separators obtained from other algorith...", "There has been considerable recent interest in algorithms for finding communities in networks— groups of vertices within which connections are dense, but between which connections are sparser. Here we review the progress that has been made towards this end. 
We begin by describing some traditional methods of community detection, such as spectral bisection, the Kernighan-Lin algorithm and hierarchical clustering based on similarity measures. None of these methods, however, is ideal for the types of real-world network data with which current research is concerned, such as Internet and web data and biological and social networks. We describe a number of more recent algorithms that appear to work well with these data, including algorithms based on edge betweenness scores, on counts of short loops in networks and on voltage differences in resistor networks." ] }
1206.3552
2157527521
Many real-world networks are intimately organized according to a community structure. Much research effort has been devoted to develop methods and algorithms that can efficiently highlight this hidden structure of a network, yielding a vast literature on what is called today community detection. Since network representation can be very complex and can contain different variants in the traditional graph model, each algorithm in the literature focuses on some of these properties and establishes, explicitly or implicitly, its own definition of community. According to this definition, each proposed algorithm then extracts the communities, which typically reflect only part of the features of real communities. The aim of this survey is to provide a ‘user manual’ for the community discovery problem. Given a meta definition of what a community in a social network is, our aim is to organize the main categories of community discovery methods based on the definition of community they adopt. Given a desired definition of community and the features of a problem (size of network, direction of edges, multidimensionality, and so on) this review paper is designed to provide a set of approaches that researchers could focus on. The proposed classification of community discovery methods is also useful for putting into perspective the many open directions for further research. © 2011 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 4: 512–546, 2011
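Modularity, one of the physics-inspired measures mentioned above @cite_119, scores a partition by comparing the fraction of intra-community edges with the fraction expected under a degree-preserving random rewiring. A minimal pure-Python sketch follows; the example graph, the community labels, and the function name are illustrative choices, not taken from any of the surveyed papers:

```python
from collections import defaultdict

def modularity(edges, community_of):
    # Newman-Girvan modularity: Q = sum_c [ e_c/m - (d_c / 2m)^2 ],
    # where e_c is the number of edges inside community c, d_c the total
    # degree of its nodes, and m the total number of edges.
    m = len(edges)
    intra = defaultdict(int)   # edges fully inside each community
    deg = defaultdict(int)     # total degree per community
    for u, v in edges:
        deg[community_of[u]] += 1
        deg[community_of[v]] += 1
        if community_of[u] == community_of[v]:
            intra[community_of[u]] += 1
    return sum(intra[c] / m - (deg[c] / (2 * m)) ** 2 for c in deg)

# Toy example: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, comm)  # clearly positive: the partition beats chance
```

Splitting the two triangles apart yields Q = 5/14 ≈ 0.357, whereas merging everything into one community gives Q = 0, which is what makes modularity usable as an objective for choosing among candidate partitions.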
Chakrabarti and Faloutsos @cite_181 give a comprehensive survey of many aspects of graph mining. One important chapter discusses community detection concepts, techniques, and tools. The authors introduce the classical notion of community structure based on edge density, along with other key concepts such as transitivity, edge betweenness, and resilience. However, this survey is not explicitly devoted to the community discovery problem: it describes existing methods but does not investigate alternative definitions of community or more complex analyses.
{ "cite_N": [ "@cite_181" ], "mid": [ "2085761620" ], "abstract": [ "How does the Web look? How could we tell an abnormal social network from a normal one? These and similar questions are important in many fields where the data can intuitively be cast as a graph; examples range from computer networks to sociology to biology and many more. Indeed, any M : N relation in database terminology can be represented as a graph. A lot of these questions boil down to the following: “How can we generate synthetic but realistic graphs?” To answer this, we must first understand what patterns are common in real-world graphs and can thus be considered a mark of normality realism. This survey give an overview of the incredible variety of work that has been done on these problems. One of our main contributions is the integration of points of view from physics, mathematics, sociology, and computer science. Further, we briefly describe recent advances on some related and interesting graph problems." ] }
The authors of @cite_48 test an impressive number of different community discovery algorithms, comparing their time complexity and performance. Furthermore, they define a heuristic to evaluate the results produced by each algorithm. However, they focus on a practical comparison of the methods rather than on a true classification, either in terms of the community definition adopted or of the features of the input network considered.
{ "cite_N": [ "@cite_48" ], "mid": [ "2120043163" ], "abstract": [ "We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods." ] }
Various authors have also proposed benchmark graphs for testing community discovery algorithms @cite_134 .
{ "cite_N": [ "@cite_134" ], "mid": [ "2023655578" ], "abstract": [ "Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e., the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection, modularity optimization, and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis." ] }
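The idea behind such benchmarks can be illustrated with the simpler planted-partition model: nodes are split into groups, with a high edge probability inside groups and a low one between them, so that an algorithm's output can be checked against the planted ground truth. The sketch below is a deliberate simplification (the benchmark of @cite_134 additionally imposes power-law degree and community-size distributions); the function name and parameter values are illustrative assumptions:

```python
import random

def planted_partition(n_per_group, groups, p_in, p_out, seed=42):
    # Generate an undirected planted-partition graph: each pair of nodes
    # is linked with probability p_in if they share a group, p_out otherwise.
    rng = random.Random(seed)
    n = n_per_group * groups
    group = {v: v // n_per_group for v in range(n)}  # planted ground truth
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if group[i] == group[j] else p_out
            if rng.random() < p:
                edges.append((i, j))
    return edges, group

# Three groups of ten nodes: dense inside (p_in=0.9), sparse between (p_out=0.05).
edges, group = planted_partition(10, 3, 0.9, 0.05)
```

With these parameters the vast majority of edges fall inside the planted groups, so a reasonable community discovery algorithm should recover the partition almost exactly; shrinking the gap between p_in and p_out makes the test progressively harder.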
1206.3686
66892933
Quantum computation teaches us that quantum mechanics exhibits exponential complexity. We argue that the standard scientific paradigm of "predict and verify" cannot be applied to testing quantum mechanics in this limit of high complexity. We describe how QM can be tested in this regime by extending the usual scientific paradigm to include interactive experiments .
In the context of blind quantum computation, Broadbent, Fitzsimons, and Kashefi @cite_5 suggest a protocol that provides a possible way of showing that BQP = QPIP*, where the verifier needs only a single quantum bit. At this point, it is unclear whether the security of this protocol can be rigorously established.
{ "cite_N": [ "@cite_5" ], "mid": [ "2117593045" ], "abstract": [ "We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation." ] }
1206.3666
2953137579
The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMI) require adaptation of their decoders to maintain high performance across time. One way to achieve this is by use of periodical calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI and (ii) between two calibration phases the BMI performance might not be stable but continuously decrease. A better alternative would be that the BMI decoder is able to continuously adapt in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a 2-dimensional computer screen. For initially unknown and non-stationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally work also with neuronal error signals instead or in conjunction with the proposed unsupervised adaptation.
Earlier BMI research has already addressed the issue of online adaptivity. For instance, @cite_28 proposed a BMI system in which changes in individual neurons' directional tuning are tracked with online adaptive linear filters. Wolpaw and McFarland showed that intended @math -dimensional cursor movements can be estimated from EEG recordings @cite_63 . In that study, they employed the Least Mean Squares (LMS) algorithm to update the parameters of a linear filter after each trial. Later work showed that a similar method can be used to decode @math -dimensional movements from EEG recordings @cite_56 . Adaptive versions of Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis have been proposed for cue-based, discrete-choice BMI tasks @cite_64 @cite_52 @cite_31 . These works all employ supervised learning algorithms, i.e., they require that the decoder knows the target of the movement or the choice in advance and adapts the decoding parameters accordingly. In other words, the employed methods know and make use of the labels of the recorded neural activity.
{ "cite_N": [ "@cite_64", "@cite_28", "@cite_52", "@cite_56", "@cite_63", "@cite_31" ], "mid": [ "2111208616", "2065833894", "", "1989927577", "2103494817", "" ], "abstract": [ "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.", "Control signals for an object are developed from the neuron-originating electrical impulses detected by arrays of electrodes chronically implanted in a subject's cerebral cortex at the pre-motor and motor locations known to have association with arm movements. Taking as an input the firing rate of the sensed neurons or neuron groupings that affect a particular electrode, a coadaptive algorithm is used. In a closed-loop environment, where the animal subject can view its results, weighting factors in the algorithm are modified over a series of tests to emphasize cortical electrical impulses that result in movement of the object as desired. At the same time, the animal subject learns and modifies its cortical electrical activity to achieve movement of the object as desired. In one specific embodiment, the object moved was a cursor portrayed as a sphere in a virtual reality display. 
Target objects were presented to the subject, who then proceeded to move the cursor to the target and receive a reward. In a noncoadaptive use of the algorithm as previously modified by a co-adaptation, unlearned targets were presented in the virtual reality system and the subject moved the cursor to these targets. In another embodiment, a robot arm was controlled by an animal subject.", "", "Brain–computer interfaces (BCIs) can use brain signals from the scalp (EEG), the cortical surface (ECoG), or within the cortex to restore movement control to people who are paralyzed. Like muscle-based skills, BCIs' use requires activity-dependent adaptations in the brain that maintain stable relationships between the person's intent and the signals that convey it. This study shows that humans can learn over a series of training sessions to use EEG for three-dimensional control. The responsible EEG features are focused topographically on the scalp and spectrally in specific frequency bands. People acquire simultaneous control of three independent signals (one for each dimension) and reach targets in a virtual three-dimensional space. Such BCI control in humans has not been reported previously. The results suggest that with further development noninvasive EEG-based BCIs might control the complex movements of robotic arms or neuroprostheses.", "Brain-computer interfaces (BCIs) can provide communication and control to people who are totally paralyzed. BCIs can use noninvasive or invasive methods for recording the brain signals that convey the user's commands. Whereas noninvasive BCIs are already in use for simple applications, it has been widely assumed that only invasive BCIs, which use electrodes implanted in the brain, can provide multidimensional movement control of a robotic arm or a neuroprosthesis. 
We now show that a noninvasive BCI that uses scalp-recorded electroencephalographic activity and an adaptive algorithm can provide humans, including people with spinal cord injuries, with multidimensional point-to-point movement control that falls within the range of that reported with invasive methods in monkeys. In movement time, precision, and accuracy, the results are comparable to those with invasive BCIs. The adaptive algorithm used in this noninvasive BCI identifies and focuses on the electroencephalographic features that the person is best able to control and encourages further improvement in that control. The results suggest that people with severe motor disabilities could use brain signals to operate a robotic arm or a neuroprosthesis without needing to have electrodes implanted in their brains.", "" ] }
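The LMS rule referred to above adjusts a linear decoder's weights after each trial in proportion to the prediction error on that trial. A minimal sketch of such supervised adaptation on synthetic data follows; the "true" tuning vector, the learning rate, and the trial count are illustrative assumptions, not parameters from the cited studies:

```python
import random

def lms_update(w, x, target, mu=0.05):
    # One LMS step: predict with the current weights, compute the trial
    # error, and nudge each weight along the corresponding input feature.
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = target - y
    return [wi + mu * err * xi for wi, xi in zip(w, x)]

random.seed(0)
true_w = [0.8, -0.5]   # hypothetical "true" tuning of two recorded features
w = [0.0, 0.0]         # decoder starts untrained
for _ in range(2000):  # simulated calibration trials
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    target = sum(t * xi for t, xi in zip(true_w, x))  # supervised label
    w = lms_update(w, x, target)
```

After the simulated trials, w converges close to true_w. Note that the target on each trial is assumed known: this is exactly the supervised, label-dependent setting that the unsupervised method discussed in the surrounding text seeks to avoid.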