Dataset schema:
  aid           string (lengths 9–15)
  mid           string (lengths 7–10)
  abstract      string (lengths 78–2.56k)
  related_work  string (lengths 92–1.77k)
  ref_abstract  dict
1212.2834
2016392483
Many natural signals admit a sparse representation whenever a suitable describing model is given. Here, a linear generative model is considered; many sparsity-based signal processing techniques rely on such a simplified model. As this model is often unknown for many classes of signals, it must be selected based on domain knowledge or using some exemplar signals. This paper presents a new exemplar-based approach for selecting the linear model (called the dictionary) in such sparse inverse problems. The problem of dictionary selection, which has also been called dictionary learning in this setting, is first reformulated as a joint sparsity model. The joint sparsity model here differs from the standard joint sparsity model in that it allows an overcompleteness in the representation of each signal, within the range of the selected subspaces. The new dictionary selection paradigm is examined with some synthetic and realistic simulations.
The dictionary selection considered in this paper is also related to the problem of subset selection in machine learning @cite_12 @cite_0 , where the goal is to select the most relevant subset that describes the whole set. @cite_12 uses the fact that such a model selection can be formulated as a submodular cost minimisation. For this formulation there exist canonical solvers, which are guaranteed to find a solution within some neighbourhood of the optimum. That neighbourhood is, however, not small, which motivated Das and Kempe @cite_0 to present an alternative submodular formulation that reduces the approximation error.
{ "cite_N": [ "@cite_0", "@cite_12" ], "mid": [ "1875482710", "2141552007" ], "abstract": [ "We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We further demonstrate the wide applicability of our techniques by analyzing greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.", "We develop an efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases. By sparse, we mean that only a few dictionary elements, compared to the ambient signal dimension, can exactly represent or well-approximate the signals of interest. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of training signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. 
We show that if the available dictionary column vectors are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop SDSOMP and SDSMA, two greedy algorithms with approximation guarantees. We also describe how our learning framework enables dictionary selection for structured sparse representations, e.g., where the sparse coefficients occur in restricted patterns. We evaluate our approach on synthetic signals and natural images for representation and inpainting problems." ] }
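The greedy strategy analyzed in both cited works, repeatedly adding the element with the largest marginal gain, can be sketched for a generic monotone submodular objective. The coverage function and the candidate names below are made-up illustrations, not the variance-reduction objective of the cited paper.

```python
def greedy_maximize(ground, f, k):
    """Greedy maximization of a monotone submodular set function f:
    add the element with the largest marginal gain, k times.
    For such f this achieves a (1 - 1/e)-approximation (Nemhauser et al.)."""
    selected = set()
    for _ in range(k):
        gains = {e: f(selected | {e}) - f(selected)
                 for e in ground if e not in selected}
        selected.add(max(gains, key=gains.get))
    return selected

# Toy example: coverage, a classic monotone submodular function.
subsets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5}, "D": {4, 5}}
cover = lambda S: len(set().union(*[subsets[s] for s in S]))
picked = greedy_maximize(list(subsets), cover, k=2)  # picks "A", then "D"
```

With budget k = 2 the greedy rule first takes "A" (gain 3) and then "D" (gain 2), covering all five elements.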
1212.2178
2152845747
Given an undirected graph, one can assign directions to each of the edges of the graph, thus orienting the graph. To be as egalitarian as possible, one may wish to find an orientation such that no vertex is unfairly hit with too many arcs directed into it. We discuss how this objective arises in problems resulting from telecommunications. We give optimal, polynomial-time algorithms for: finding an orientation that minimizes the lexicographic order of the indegrees and finding a strongly-connected orientation that minimizes the maximum indegree. We show that minimizing the lexicographic order of the indegrees is NP-hard when the resulting orientation is required to be acyclic.
consider the edge-weighted version of the unconstrained problem @cite_16 . They build on the work of Venkateswaran and give a combinatorial @math -approximation algorithm, where @math and @math are the maximum and minimum edge weights, respectively, and @math is a constant that depends on the input @cite_16 . Klostermeyer considers the problem of reorienting edges (rather than whole paths) so as to create graphs with given properties, such as strongly connected graphs and acyclic graphs @cite_7 . De Fraysseix and de Mendez show that they can find an indegree assignment of the vertices with particular properties @cite_3 . In our work we are searching for a particular degree assignment not known a priori.
{ "cite_N": [ "@cite_16", "@cite_3", "@cite_7" ], "mid": [ "2033433572", "1562280270", "64043337" ], "abstract": [ "This paper studies the problem of orienting all edges of a weighted graph such that the maximum weighted outdegree of vertices is minimized. This problem, which has applications in the guard arrangement for example, can be shown to be NP-hard generally. In this paper we first give optimal orientation algorithms which run in polynomial time for the following special cases: (i) the input is an unweighted graph, and (ii) the input graph is a tree. Then, by using those algorithms as sub-procedures, we provide a simple, combinatorial, -approximation algorithm for the general case, where wmax and wmin are the maximum and the minimum weights of edges, respectively, and e is some small positive real number that depends on the input.", "Regular orientations, that is orientations such that almost all the vertices have the same indegree, relate many combinatorial and topological properties, such as arboricity, page number, and planarity. These orientations are a basic tool in solving combinatorial problems that preserve topological properties. Planar augmentations are a simple example of such problems.", "" ] }
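A minimal orientation sketch related to the indegree-minimization theme above: peel a minimum-degree vertex and direct its remaining edges into it. This bounds the maximum indegree by the graph's degeneracy; it is a standard folklore bound, not the optimal algorithms of the cited paper, and the example graph is made up.

```python
def peel_orientation(adj):
    """Orient every edge of an undirected graph; adj maps a vertex to its
    neighbour set. Peeling a minimum-remaining-degree vertex and pointing
    its surviving edges into it bounds each indegree by the degeneracy."""
    remaining = set(adj)
    arcs = []  # (u, v) means the edge is oriented u -> v
    while remaining:
        v = min(remaining, key=lambda u: len(adj[u] & remaining))
        for u in adj[v] & remaining:
            arcs.append((u, v))  # orient into the peeled vertex
        remaining.discard(v)
    return arcs

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}  # triangle + pendant
arcs = peel_orientation(graph)
```

Each edge is oriented exactly once (when its first endpoint is peeled), and here the maximum indegree is at most 2, the degeneracy of the example graph.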
1212.2178
2152845747
Given an undirected graph, one can assign directions to each of the edges of the graph, thus orienting the graph. To be as egalitarian as possible, one may wish to find an orientation such that no vertex is unfairly hit with too many arcs directed into it. We discuss how this objective arises in problems resulting from telecommunications. We give optimal, polynomial-time algorithms for: finding an orientation that minimizes the lexicographic order of the indegrees and finding a strongly-connected orientation that minimizes the maximum indegree. We show that minimizing the lexicographic order of the indegrees is NP-hard when the resulting orientation is required to be acyclic.
Biedl, Chan, Ganjali, Hajiaghayi, and Wood give a @math -approximation algorithm for finding an ordering of the vertices such that for each vertex @math , the neighbors of @math are as evenly distributed to the right and left of @math as possible @cite_8 . For the purpose of deadlock prevention @cite_15 , Wittorff describes a heuristic for finding an acyclic orientation that minimizes the sum over all vertices of the function @math choose @math , where @math is the indegree of vertex @math . This objective function is motivated by a problem concerned with resolving deadlocks in communications networks, as described in the previous section @cite_14 .
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_8" ], "mid": [ "", "2287121874", "2139924054" ], "abstract": [ "", "A method of determining where to place routing constraints known as cuts in a network to result in a no-loop network, and a method of implementing such constraints in a hierarchical network; to avoid deadlocks. The method of determining where to place cuts in a network to result in a no-loop network comprises an algorithm for numbering the nodes within the network which is used to determine where the cuts are to be placed. The method of implementing constraints in a hierarchical network to result in a no-loop network comprises an algorithm for independently determining cuts and meta-cuts for each peer group, and imposing the necessary routing constraints in the network.", "In this paper we consider the problem of determining a balanced ordering of the vertices of a graph; that is, the neighbors of each vertex v are as evenly distributed to the left and right of v as possible. This problem, which has applications in graph drawing for example, is shown to be NP-hard, and remains NP-hard for bipartite simple graphs with maximum degree six. We then describe and analyze a number of methods for determining a balanced vertex-ordering, obtaining optimal orderings for directed acyclic graphs, trees, and graphs with maximum degree three. For undirected graphs, we obtain a 13/8-approximation algorithm. Finally we consider the problem of determining a balanced vertex-ordering of a bipartite graph with a fixed ordering of one bipartition. When only the imbalances of the fixed vertices count, this problem is shown to be NP-hard. On the other hand, we describe an optimal linear time algorithm when the final imbalances of all vertices count. We obtain a linear time algorithm to compute an optimal vertex-ordering of a bipartite graph with one bipartition of constant size." ] }
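Since acyclic orientations correspond to vertex orderings (orient every edge toward its later endpoint), the objective Wittorff minimizes, the sum over vertices of indeg(v) choose 2, can be brute-forced on tiny, made-up graphs:

```python
from itertools import permutations
from math import comb

def order_cost(edges, order):
    """Cost of the acyclic orientation induced by a vertex order: each edge
    points to its later endpoint, and we sum C(indeg(v), 2) over vertices."""
    pos = {v: i for i, v in enumerate(order)}
    indeg = {v: 0 for v in order}
    for u, v in edges:
        indeg[v if pos[v] > pos[u] else u] += 1
    return sum(comb(d, 2) for d in indeg.values())

def min_acyclic_cost(vertices, edges):
    # Exhaustive search; only feasible for a handful of vertices.
    return min(order_cost(edges, p) for p in permutations(vertices))

star = [(0, 1), (0, 2), (0, 3)]      # best: orient every edge away from the hub
triangle = [(0, 1), (1, 2), (0, 2)]  # some vertex must receive two arcs
```

On the star the optimum is 0 (place the hub first), while on the triangle the last vertex in any ordering has indegree 2, forcing cost C(2,2) = 1.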
1212.1522
2114725112
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option. For a multi-dimensional domain with an arbitrary number of agents and items, and for the very large class of homogeneous valuation functions, we prove that our mechanism provides every agent with at least a 1/e ≈ 0.368 fraction of her Proportionally Fair valuation. To the best of our knowledge, this is the first result that gives a constant factor approximation to every agent for the Proportionally Fair solution. To complement this result, we show that no truthful mechanism can guarantee more than a 0.5 approximation, even for the restricted class of additive linear valuations. In addition to this, we uncover a connection between the Partial Allocation mechanism and VCG-based mechanism design. We also ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech Republic in the early 90s, we provide another mechanism for additive linear valuations that works really well when all the items are highly demanded.
Our setting is closely related to the large topic of fair division or cake-cutting @cite_20 @cite_29 @cite_36 @cite_43 @cite_40 , which has been studied since the 1940's, using the @math interval as the standard representation of a cake. Each agent's preferences take the form of a valuation function over this interval, and the valuations of unions of subintervals are additive. Note that the class of homogeneous valuation functions of degree one takes us beyond this standard cake-cutting model. Leontief valuations, for example, allow for complementarities in the valuations, so the valuations of unions of subintervals need not be additive. On the other hand, the additive linear valuations setting that we focus on is very closely related to cake-cutting with piecewise constant valuation functions over the @math interval. Other common notions of fairness that have been studied in this literature are proportionality (it is worth distinguishing the notion of PF from that of proportionality by noting that the latter is a much weaker notion, directly implied by the former), envy-freeness, and equitability @cite_20 @cite_29 @cite_36 @cite_43 @cite_40 .
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_43", "@cite_40", "@cite_20" ], "mid": [ "1977246877", "2022749618", "1577069963", "1600130330", "" ], "abstract": [ "The challenge of dividing an asset fairly, from cakes to more important properties, is of great practical importance in many situations. Since the famous Polish school of mathematicians (Steinhaus, Banach, and Knaster) introduced and described algorithms for the fair division problem in the 1940s, the concept has been widely popularized. This book gathers into one readable and inclusive source a comprehensive discussion of the state of the art in cake-cutting problems for both the novice and the professional. It offers a complete treatment of all cake-cutting algorithms under all the considered definitions of \"fair\" and presents them in a coherent, reader-friendly manner. Robertson and Webb have brought this elegant problem to life for both the bright high school student and the professional researcher.", "Cutting a cake, dividing up the property in an estate, determining the borders in an international dispute - such problems of fair division are ubiquitous. Fair Division treats all these problems and many more through a rigorous analysis of a variety of procedures for allocating goods (or 'bads' like chores), or deciding who wins on what issues, when there are disputes. Starting with an analysis of the well-known cake-cutting procedure, 'I cut, you choose', the authors show how it has been adapted in a number of fields and then analyze fair-division procedures applicable to situations in which there are more than two parties, or there is more than one good to be divided. In particular they focus on procedures which provide 'envy-free' allocations, in which everybody thinks he or she has received the largest portion and hence does not envy anybody else. They also discuss the fairness of different auction and election procedures.", "The concept of fair division is as old as civil society itself. 
Aristotle's \"equal treatment of equals\" was the first step toward a formal definition of distributive fairness. The concept of collective welfare, more than two centuries old, is a pillar of modern economic analysis. Reflecting fifty years of research, this book examines the contribution of modern microeconomic thinking to distributive justice. Taking the modern axiomatic approach, it compares normative arguments of distributive justice and their relation to efficiency and collective welfare. The book begins with the epistemological status of the axiomatic approach and the four classic principles of distributive justice: compensation, reward, exogenous rights, and fitness. It then presents the simple ideas of equal gains, equal losses, and proportional gains and losses. The book discusses three cardinal interpretations of collective welfare: Bentham's \"utilitarian\" proposal to maximize the sum of individual utilities, the Nash product, and the egalitarian leximin ordering. It also discusses the two main ordinal definitions of collective welfare: the majority relation and the Borda scoring method. The Shapley value is the single most important contribution of game theory to distributive justice. A formula to divide jointly produced costs or benefits fairly, it is especially useful when the pattern of externalities renders useless the simple ideas of equality and proportionality. The book ends with two versatile methods for dividing commodities efficiently and fairly when only ordinal preferences matter: competitive equilibrium with equal incomes and egalitarian equivalence. The book contains a wealth of empirical examples and exercises.", "0. Preface 1. Notation and preliminaries 2. Geometric object #1a: the individual pieces set (IPS) for two players 3. What the IPS tells us about fairness and efficiency in the two-player context 4. The general case of n players 5. What the IPS and the FIPS tell us about fairness and efficiency in the n-player context 6. 
Characterizing Pareto optimality: introduction and preliminary ideas 7. Characterizing Pareto optimality I: the IPS and optimization of convex combinations of measures 8. Characterizing Pareto optimality II: partition ratios 9. Geometric object #2: The Radon-Nikodym set (RNS) 10. Characterizing Pareto optimality III: the RNS, Weller's construction, and w-association 11. The shape of the IPS 12. The relationship between the IPS and the RNS 13. Other issues involving Weller's construction, partition ratios, and Pareto optimality 14. Strong Pareto optimality 15. Characterizing Pareto optimality using hyperreal numbers 16. The multi-cake individual pieces set (MIPS): symmetry restored.", "" ] }
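The classic "I cut, you choose" procedure discussed above can be sketched for piecewise constant valuations; the equal-width-cell representation of the [0,1] cake below is an assumption made for the example.

```python
def interval_value(cells, a, b):
    """An agent's value for [a, b] within [0, 1]; cells[i] is her value for
    the i-th of n equal-width pieces (piecewise constant valuation)."""
    n = len(cells)
    return sum(c * max(0.0, min(b, (i + 1) / n) - max(a, i / n)) * n
               for i, c in enumerate(cells))

def cut_and_choose(cutter, chooser):
    """Cutter halves the cake by her own valuation; chooser takes the piece
    she prefers. Each ends up with at least half of her own total value."""
    total, n, acc, x = sum(cutter), len(cutter), 0.0, 1.0
    for i, c in enumerate(cutter):
        if acc + c >= total / 2:          # the half-point lies in cell i
            x = i / n + (total / 2 - acc) / (c * n)
            break
        acc += c
    if interval_value(chooser, 0.0, x) >= interval_value(chooser, x, 1.0):
        return {"chooser": (0.0, x), "cutter": (x, 1.0)}
    return {"cutter": (0.0, x), "chooser": (x, 1.0)}

# Uniform cutter vs. a chooser who only values the right end of the cake.
pieces = cut_and_choose(cutter=[1, 1, 1, 1], chooser=[0, 0, 1, 3])
```

Here the cutter bisects at x = 0.5 and keeps a piece worth exactly half her total, while the chooser happily takes the right half, worth everything to her; the outcome is proportional and envy-free for two agents.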
1212.1522
2114725112
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option. For a multi-dimensional domain with an arbitrary number of agents and items, and for the very large class of homogeneous valuation functions, we prove that our mechanism provides every agent with at least a 1/e ≈ 0.368 fraction of her Proportionally Fair valuation. To the best of our knowledge, this is the first result that gives a constant factor approximation to every agent for the Proportionally Fair solution. To complement this result, we show that no truthful mechanism can guarantee more than a 0.5 approximation, even for the restricted class of additive linear valuations. In addition to this, we uncover a connection between the Partial Allocation mechanism and VCG-based mechanism design. We also ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech Republic in the early 90s, we provide another mechanism for additive linear valuations that works really well when all the items are highly demanded.
Despite the extensive work on fair resource allocation, truthfulness considerations have not played a major role in this literature. Most results related to truthfulness were weakened by the assumption that each agent would be truthful in reporting her valuations unless this strategy was dominated. Very recent work @cite_14 @cite_16 @cite_12 @cite_24 studies truthful cake-cutting variations using the standard notion of truthfulness, according to which truthful reporting must be a dominant strategy. study truthful cake-cutting with agents having piecewise uniform valuations and provide a polynomial-time mechanism that is truthful, proportional, and envy-free. They also design randomized mechanisms for more general families of valuation functions, while prove the existence of truthful (in expectation) mechanisms satisfying proportionality in expectation for general valuations. aim to achieve envy-free Pareto optimal allocations of multiple divisible goods while reducing, but not eliminating, the agents' incentives to lie. The extent to which untruthfulness is reduced by their proposed mechanism is only evaluated empirically and depends critically on their assumption that the resource limitations are soft constraints. Very recent work by provides evidence that truthfulness comes at a significant cost in terms of efficiency.
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_12", "@cite_16" ], "mid": [ "1961964649", "2137921611", "2132069171", "1532158658" ], "abstract": [ "We characterize methods of dividing a cake between two bidders in a way that is incentive-compatible and Pareto-efficient. In our cake cutting model, each bidder desires a subset of the cake (with a uniform value over this subset), and is allocated some subset. Our characterization proceeds via reducing to a simple one-dimensional version of the problem, and yields, for example, a tight bound on the social welfare achievable.", "Cake cutting is a common metaphor for the division of a heterogeneous divisible good. There are numerous papers that study the problem of fairly dividing a cake; a small number of them also take into account self-interested agents and consequent strategic issues, but these papers focus on fairness and consider a strikingly weak notion of truthfulness. In this paper we investigate the problem of cutting a cake in a way that is truthful and fair, where for the first time our notion of dominant strategy truthfulness is the ubiquitous one in social choice and computer science. We design both deterministic and randomized cake cutting algorithms that are truthful and fair under different assumptions with respect to the valuation functions of the agents.", "A natural requirement of a resource allocation system is to guarantee fairness to its participants. Fair allocation can be achieved either by distributed protocols known as cake-cutting algorithms or by centralized approaches, which first collect the agents' preferences and then decide on the allocation. Compared with cake-cutting algorithms, centralized approaches are less restricted and can therefore achieve more favorable allocations. Our work uses as a starting point a recent centralized algorithm that achieves an envy-free (i.e., fair) and Pareto optimal (i.e., efficient) allocation of multiple divisible goods. 
In fair allocation algorithms, agents who do not follow the protocol cannot prevent other agents from being allocated a fair share, but in certain situations, agents can increase their own allocation by submitting untruthful preferences. A recent article has shown that the only mechanisms that do not allow such manipulations (i.e., the only incentive-compatible mechanisms) are dictatorial. Nevertheless, we present a method that reduces possible gains from untruthful manipulation. Our mechanism uses a heuristic to approximate optimal manipulations, and compensates agents who submitted suboptimal preferences by increasing their allocation. We empirically demonstrate that when our method is used, the additional benefit that agents can achieve by untruthful manipulation is insignificant, hence they have insignificant incentives to lie.", "We address the problem of fair division, or cake cutting, with the goal of finding truthful mechanisms. In the case of a general measure space (\"cake\") and non-atomic, additive individual preference measures - or utilities - we show that there exists a truthful \"mechanism\" which ensures that each of the k players gets at least 1/k of the cake. This mechanism also minimizes risk for truthful players. Furthermore, in the case where there exist at least two different measures we present a different truthful mechanism which ensures that each of the players gets more than 1/k of the cake. We then turn our attention to partitions of indivisible goods with bounded utilities and a large number of goods. Here we provide similar mechanisms, but with slightly weaker guarantees. These guarantees converge to those obtained in the non-atomic case as the number of goods goes to infinity." ] }
1212.1522
2114725112
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option. For a multi-dimensional domain with an arbitrary number of agents and items, and for the very large class of homogeneous valuation functions, we prove that our mechanism provides every agent with at least a 1/e ≈ 0.368 fraction of her Proportionally Fair valuation. To the best of our knowledge, this is the first result that gives a constant factor approximation to every agent for the Proportionally Fair solution. To complement this result, we show that no truthful mechanism can guarantee more than a 0.5 approximation, even for the restricted class of additive linear valuations. In addition to this, we uncover a connection between the Partial Allocation mechanism and VCG-based mechanism design. We also ask whether better approximation ratios are possible in more restricted settings. In particular, motivated by the massive privatization auction in the Czech Republic in the early 90s, we provide another mechanism for additive linear valuations that works really well when all the items are highly demanded.
The recent papers of and of also consider the truthful allocation of multiple divisible goods; they focus on additive linear valuations and their goal is to maximize the social welfare (or efficiency) after scaling every player's reported valuations so that her total valuation for all items is 1. study two-agent instances, providing both upper and lower bounds for the achievable approximation; extend these results and also study the multiple agents setting. For problem instances that may involve an arbitrary number of items both papers provide negative results: no non-trivial approximation factor can be achieved by any truthful mechanism when the number of players is also unbounded. For the two-player case, after studied some classes of dictatorial mechanisms, showed that no dictatorial mechanism can guarantee more than the trivial @math factor. Interestingly, we recently showed @cite_11 that combining a special two-player version of the Partial Allocation mechanism with a dictatorial mechanism can actually beat this bound, achieving a @math approximation.
{ "cite_N": [ "@cite_11" ], "mid": [ "2107435" ], "abstract": [ "Consider the problem of allocating multiple divisible goods to two agents in a strategy-proof fashion without the use of payments or priors. Previous work has aimed at implementing allocations that are competitive with respect to an appropriately defined measure of social welfare. These results have mostly been negative, proving that no dictatorial mechanism can achieve an approximation factor better than 0.5, and leaving open the question of whether there exists a non-dictatorial mechanism that outperforms this bound. We provide a positive answer to this question by presenting an interesting non-dictatorial mechanism that achieves an approximation factor of 2/3 for this measure of social welfare. In proving this bound we also touch on the issue of fairness: we show that the proportionally fair solution, a well known fairness concept for money-free settings, is highly competitive with respect to social welfare. We then show how to use the proportionally fair solution to design our non-dictatorial strategy-proof mechanism." ] }
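For two agents with additive valuations, the Partial Allocation idea can be sketched end to end: brute-force the Proportionally Fair (Nash product) allocation, then let each agent keep the fraction given by the ratio of the other agent's PF utility to the utility she would get alone. The grid search and the toy valuations are illustrative assumptions, not the paper's general mechanism.

```python
from itertools import product

def pf_allocation(v1, v2, steps=50):
    """Grid-search the fractional allocation maximizing the Nash product
    u1 * u2, the Proportionally Fair point for two additive agents."""
    grid = [i / steps for i in range(steps + 1)]
    best, best_x = -1.0, None
    for x in product(grid, repeat=len(v1)):  # x[j]: share of item j to agent 1
        u1 = sum(a * s for a, s in zip(v1, x))
        u2 = sum(a * (1 - s) for a, s in zip(v2, x))
        if u1 * u2 > best:
            best, best_x = u1 * u2, x
    return best_x

def partial_allocation_fractions(v1, v2):
    """Each agent keeps her PF bundle scaled by how little she hurt the other:
    f_i = (other's PF utility) / (other's utility when alone)."""
    x = pf_allocation(v1, v2)
    u1 = sum(a * s for a, s in zip(v1, x))
    u2 = sum(a * (1 - s) for a, s in zip(v2, x))
    return u2 / sum(v2), u1 / sum(v1)

f1, f2 = partial_allocation_fractions([1, 0], [0, 1])  # disjoint interests
g1, g2 = partial_allocation_fractions([1, 1], [1, 1])  # identical valuations
```

With disjoint interests nobody hurts anybody, so both agents keep their full PF bundle (f = 1); with identical valuations each keeps half, still above the 1/e guarantee from the abstract.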
1212.1787
39516650
It is common today to deploy complex software inside a virtual machine (VM). Snapshots provide rapid deployment, migration between hosts, dependability (fault tolerance), and security (insulating a guest VM from the host). Yet, for each virtual machine, the code for snapshots is laboriously developed on a per-VM basis. This work demonstrates a generic checkpoint-restart mechanism for virtual machines. The mechanism is based on a plugin on top of an unmodified user-space checkpoint-restart package, DMTCP. Checkpoint-restart is demonstrated for three virtual machines: Lguest, user-space QEMU, and KVM QEMU. The plugins for Lguest and KVM QEMU require just 200 lines of code. The Lguest kernel driver API is augmented by 40 lines of code. DMTCP checkpoints user-space QEMU without any new code. KVM QEMU, user-space QEMU, and DMTCP need no modification. The design benefits from other DMTCP features and plugins. Experiments demonstrate checkpoint and restart in 0.2 seconds using forked checkpointing, mmap-based fast-restart, and incremental Btrfs-based snapshots.
Forked checkpointing has an exceptionally long history, dating back to 1990 @cite_11 @cite_23 . Incremental checkpointing has been demonstrated at least since 1995 @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_23", "@cite_11" ], "mid": [ "", "2048894106", "2010439775" ], "abstract": [ "", "Presents the results of an implementation of several algorithms for checkpointing and restarting parallel programs on shared-memory multiprocessors. The algorithms are compared according to the metrics of overall checkpointing time, overhead imposed by the checkpointer on the target program, and amount of time during which the checkpointer interrupts the target program. The best algorithm measured achieves its efficiency through a variation of copy-on-write, which allows the most time-consuming operations of the checkpoint to be overlapped with the running of the program being checkpointed.", "We have developed and implemented a checkpointing and restart algorithm for parallel programs running on commercial uniprocessors and shared-memory multiprocessors. The algorithm runs concurrently with the target program, interrupts the target program for small, fixed amounts of time and is transparent to the checkpointed program and its compiler. The algorithm achieves its efficiency through a novel use of address translation hardware that allows the most time-consuming operations of the checkpoint to be overlapped with the running of the program being checkpointed." ] }
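The forked-checkpointing idea, snapshotting in a copy-on-write child while the parent keeps computing, can be sketched in a few lines on a Unix-like system. This is a conceptual illustration only, not DMTCP's implementation; the snapshot path and the dict-shaped state are assumptions made for the example.

```python
import os
import pickle
import tempfile

def forked_checkpoint(state, path):
    """Fork a child that serializes `state` and exits; copy-on-write pages
    make the fork cheap, and the parent may mutate `state` immediately
    without disturbing the snapshot frozen at fork time."""
    pid = os.fork()
    if pid == 0:                      # child: owns a frozen view of memory
        with open(path, "wb") as f:
            pickle.dump(state, f)
        os._exit(0)
    return pid                        # parent: continue computing

snap_path = os.path.join(tempfile.gettempdir(), "demo_snapshot.pkl")
state = {"step": 1, "data": [1, 2, 3]}
pid = forked_checkpoint(state, snap_path)
state["step"] = 2                     # parent moves on right away
os.waitpid(pid, 0)                    # a real checkpointer would not block here
```

The snapshot on disk records step 1 even though the parent advanced to step 2 before the child finished, which is exactly the overlap the cited copy-on-write schemes exploit.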
1212.1682
2044740924
Random k-SAT is the single most intensely studied example of a random constraint satisfaction problem. But despite substantial progress over the past decade, the threshold for the existence of satisfying assignments is not known precisely for any k≥3. The best current results, based on the second moment method, yield upper and lower bounds that differ by an additive k · (ln 2)/2, a term that is unbounded in k (Achlioptas, Peres: STOC 2003). The basic reason for this gap is the inherent asymmetry of the Boolean values 'true' and 'false' in contrast to the perfect symmetry, e.g., among the various colors in a graph coloring problem. Here we develop a new asymmetric second moment method that allows us to tackle this issue head on for the first time in the theory of random CSPs. This technique enables us to compute the k-SAT threshold up to an additive ln 2 − 1/2 + O(1/k) ≈ 0.19. Independently of the rigorous work, physicists have developed a sophisticated but non-rigorous technique called the "cavity method" for the study of random CSPs (Mezard, Parisi, Zecchina: Science 2002). Our result matches the best bound that can be obtained from the so-called "replica symmetric" version of the cavity method, and indeed our proof directly harnesses parts of the physics calculations.
Other problems where the second moment method succeeds are symmetric as well. Pioneering the use of the second moment method in random CSPs, Achlioptas and Moore @cite_11 computed the random @math -NAESAT threshold within an additive @math . By enhancing this argument with insights from physics, this gap can be narrowed to a mere @math @cite_19 @cite_18 . Moreover, the best current bounds on the random (hyper)graph @math -colorability thresholds are based on ``vanilla'' second moment arguments as well @cite_29 @cite_26 . In summary, in all the previous second moment arguments, the issue of asymmetry either did not appear at all by the nature of the problem @cite_11 @cite_29 @cite_19 @cite_18 @cite_16 @cite_26 @cite_15 , or it was sidestepped @cite_25 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_29", "@cite_19", "@cite_15", "@cite_16", "@cite_25", "@cite_11" ], "mid": [ "1610636547", "2952213220", "1978113428", "", "2025593265", "", "2010193278", "2088419249" ], "abstract": [ "For many random constraint satisfaction problems such as random satisfiability or random graph or hypergraph coloring, the best current estimates of the threshold for the existence of solutions are based on the first and the second moment method. However, in most cases these techniques do not yield matching upper and lower bounds. Sophisticated but non-rigorous arguments from statistical mechanics have ascribed this discrepancy to the existence of a phase transition called condensation that occurs shortly before the actual threshold for the existence of solutions and that affects the combinatorial nature of the problem (Krzakala, Montanari, Ricci-Tersenghi, Semerjian, Zdeborova: PNAS 2007). In this paper we prove for the first time that a condensation transition exists in a natural random CSP, namely in random hypergraph 2-coloring. Perhaps surprisingly, we find that the second moment method applied to the number of 2-colorings breaks down strictly before the condensation transition. Our proof also yields slightly improved bounds on the threshold for random hypergraph 2-colorability.", "We consider the problem of @math -colouring a random @math -uniform hypergraph with @math vertices and @math edges, where @math , @math , @math remain constant as @math tends to infinity. Achlioptas and Naor showed that the chromatic number of a random graph in this setting, the case @math , must have one of two easily computable values as @math tends to infinity. We give a complete generalisation of this result to random uniform hypergraphs.", "Given d ∈ (0,∞) let k_d be the smallest integer k such that d < 2k log k. 
We prove that the chromatic number of a random graph G(n, d/n) is either k_d or k_d + 1 almost surely.", "", "We consider a random instance I of k-SAT with n variables and m clauses, where k=k(n) satisfies k - log2 n → ∞. Let m_0 = 2^k n ln2 and let ε = ε(n) > 0 be such that εn → ∞. We prove that @math", "", "Let F_k(n,m) be a random k-SAT formula on n variables formed by selecting uniformly and independently m out of all possible k-clauses. It is well-known that for r ≥ 2^k ln 2, F_k(n,rn) is unsatisfiable with probability 1-o(1). We prove that there exists a sequence t_k = O(k) such that for r ≥ 2^k ln 2 - t_k, F_k(n,rn) is satisfiable with probability 1-o(1). Our technique yields an explicit lower bound for every k which for k > 3 improves upon all previously known bounds. For example, when k=10 our lower bound is 704.94 while the upper bound is 708.94.", "Many NP-complete constraint satisfaction problems appear to undergo a “phase transition” from solubility to insolubility when the constraint density passes through a critical threshold. In all such cases it is easy to derive upper bounds on the location of the threshold by showing that above a certain density the first moment (expectation) of the number of solutions tends to zero. We show that in the case of certain symmetric constraints, considering the second moment of the number of solutions yields nearly matching lower bounds for the location of the threshold. Specifically, we prove that the threshold for both random hypergraph 2-colorability (Property B) and random Not-All-Equal @math -SAT is @math . As a corollary, we establish that the threshold for random @math -SAT is of order @math , resolving a long-standing open problem." ] }
1212.1682
2044740924
Random k-SAT is the single most intensely studied example of a random constraint satisfaction problem. But despite substantial progress over the past decade, the threshold for the existence of satisfying assignments is not known precisely for any k≥3. The best current results, based on the second moment method, yield upper and lower bounds that differ by an additive k ⋅ ln2/2, a term that is unbounded in k (Achlioptas, Peres: STOC 2003). The basic reason for this gap is the inherent asymmetry of the Boolean values 'true' and 'false' in contrast to the perfect symmetry, e.g., among the various colors in a graph coloring problem. Here we develop a new asymmetric second moment method that allows us to tackle this issue head on for the first time in the theory of random CSPs. This technique enables us to compute the k-SAT threshold up to an additive ln2 - 1/2 + O(1/k) ≈ 0.19. Independently of the rigorous work, physicists have developed a sophisticated but non-rigorous technique called the "cavity method" for the study of random CSPs (Mezard, Parisi, Zecchina: Science 2002). Our result matches the best bound that can be obtained from the so-called "replica symmetric" version of the cavity method, and indeed our proof directly harnesses parts of the physics calculations.
The best current algorithms for random @math -SAT find satisfying assignments for densities up to @math (better for small @math ) and @math (better for large @math ), respectively @cite_5 @cite_21 , a factor of @math below the satisfiability threshold. By comparison, the Lovász Local Lemma and its algorithmic version succeed up to @math @cite_27 .
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_21" ], "mid": [ "2570121038", "2109693504", "2051580875" ], "abstract": [ "Let @math be a uniformly distributed random @math -SAT formula with @math variables and @math clauses. We present a polynomial time algorithm that finds a satisfying assignment of @math with high probability for constraint densities @math , where @math . Previously no efficient algorithm was known to find satisfying assignments with a nonvanishing probability beyond @math [A. Frieze and S. Suen, J. Algorithms, 20 (1996), pp. 312-355].", "The Lovász Local Lemma discovered by Erdős and Lovász in 1975 is a powerful tool to non-constructively prove the existence of combinatorial objects meeting a prescribed collection of criteria. In 1991, József Beck was the first to demonstrate that a constructive variant can be given under certain more restrictive conditions, starting a whole line of research aimed at improving his algorithm's performance and relaxing its restrictions. In the present article, we improve upon recent findings so as to provide a method for making almost all known applications of the general Local Lemma algorithmic.", "We consider the performance of two algorithms, GUC and SC, studied by M. T. Chao and J. Franco [SIAM J. Comput. 15 (1986), 1106-1118; Inform. Sci. 51 (1990), 289-314] and V. Chvátal and B. Reed in “Proceedings of the 33rd IEEE Symposium on Foundations of Computer Science, 1992,” pp. 620-627, when applied to a random instance of a Boolean formula in conjunctive normal form with n variables and ⌈cn⌉ clauses of size k each. For the case where k=3, we obtain the exact limiting probability that GUC succeeds. We also consider the situation when GUC is allowed to have limited backtracking, and we improve an existing threshold for c below which almost all instances are satisfiable. For k ≥ 4, we obtain a similar result regarding SC with limited backtracking." ] }
1212.1682
2044740924
Random k-SAT is the single most intensely studied example of a random constraint satisfaction problem. But despite substantial progress over the past decade, the threshold for the existence of satisfying assignments is not known precisely for any k≥3. The best current results, based on the second moment method, yield upper and lower bounds that differ by an additive k ⋅ ln2/2, a term that is unbounded in k (Achlioptas, Peres: STOC 2003). The basic reason for this gap is the inherent asymmetry of the Boolean values 'true' and 'false' in contrast to the perfect symmetry, e.g., among the various colors in a graph coloring problem. Here we develop a new asymmetric second moment method that allows us to tackle this issue head on for the first time in the theory of random CSPs. This technique enables us to compute the k-SAT threshold up to an additive ln2 - 1/2 + O(1/k) ≈ 0.19. Independently of the rigorous work, physicists have developed a sophisticated but non-rigorous technique called the "cavity method" for the study of random CSPs (Mezard, Parisi, Zecchina: Science 2002). Our result matches the best bound that can be obtained from the so-called "replica symmetric" version of the cavity method, and indeed our proof directly harnesses parts of the physics calculations.
Apart from experimental work @cite_30 , very little is known about the physics-inspired message passing algorithms (``Belief/Survey Propagation guided decimation'') @cite_10 . The most basic variant of Belief Propagation guided decimation is known to fail on random formulas if @math for some constant @math @cite_8 . However, it is conceivable that Survey Propagation and/or other variants of Belief Propagation perform better.
{ "cite_N": [ "@cite_30", "@cite_10", "@cite_8" ], "mid": [ "2054743378", "1518885151", "1833343854" ], "abstract": [ "Decimation is a simple process for solving constraint satisfaction problems, by repeatedly fixing variable values and simplifying without reconsidering earlier decisions. We investigate different decimation strategies, contrasting those based on local, syntactic information from those based on message passing, such as statistical physics based Survey Propagation (SP) and the related and more well-known Belief Propagation (BP). Our results reveal that once we resolve convergence issues, BP itself can solve fairly hard random k-SAT formulas through decimation; the gap between BP and SP narrows down quickly as k increases. We also investigate observable differences between BP SP and other common CSP heuristics as decimation proceeds, exploring the hardness of the decimated formulas and identifying a somewhat unexpected feature of message passing heuristics, namely, unlike other heuristics for satisfiability, they avoid unit propagation as variables are fixed.", "We study the satisfiability of random Boolean expressions built from many clauses with K variables per clause (K-satisfiability). Expressions with a ratio α of clauses to variables less than a threshold α c are almost always satisfiable, whereas those with a ratio above this threshold are almost always unsatisfiable. We show the existence of an intermediate phase below α c , where the proliferation of metastable states is responsible for the onset of complexity in search algorithms. We introduce a class of optimization algorithms that can deal with these metastable states; one such algorithm has been tested successfully on the largest existing benchmark of K-satisfiability.", "Let Φ be a uniformly distributed random k-SAT formula with n variables and m clauses. 
Non-constructive arguments show that Φ is satisfiable for clause/variable ratios m/n ≤ r_k ~ 2^k ln 2 with high probability (Achlioptas, Moore: SICOMP 2006; Achlioptas, Peres: J. AMS 2004). Yet no efficient algorithm is known to find a satisfying assignment for densities as low as m/n ~ r_k · ln(k)/k with a non-vanishing probability. In fact, the density m/n ~ r_k · ln(k)/k seems to form a barrier for a broad class of local search algorithms (Achlioptas, Coja-Oghlan: FOCS 2008). On the basis of deep but non-rigorous statistical mechanics considerations, a message passing algorithm called belief propagation guided decimation for solving random k-SAT has been put forward (Mezard, Parisi, Zecchina: Science 2002; Braunstein, Mezard, Zecchina: RSA 2005). Experiments suggest that the algorithm might succeed for densities very close to r_k for k = 3, 4, 5 (Kroc, Sabharwal, Selman: SAC 2009). Furnishing the first rigorous analysis of belief propagation guided decimation on random k-SAT, the present paper shows that the algorithm fails to find a satisfying assignment already for m/n ≥ ρ · r_k/k, for a constant ρ > 0 independent of k." ] }
1212.1801
1965327366
This paper studies sequential methods for recovery of sparse signals in high dimensions. When compared to fixed sample size procedures, in the sparse setting, sequential methods can result in a large reduction in the number of samples needed for reliable signal support recovery. Starting with a lower bound, we show any coordinate-wise sequential sampling procedure fails in the high dimensional limit provided the average number of measurements per dimension is less than log s / D(P_0||P_1) where s is the level of sparsity and D(P_0||P_1) the Kullback-Leibler divergence between the underlying distributions. A series of Sequential Probability Ratio Tests (SPRT) which require complete knowledge of the underlying distributions is shown to achieve this bound. Motivated by real world experiments and recent work in adaptive sensing, we introduce a simple procedure termed Sequential Thresholding which can be implemented when the underlying testing problem satisfies a monotone likelihood ratio assumption. Sequential Thresholding guarantees exact support recovery provided the average number of measurements per dimension grows faster than log s / D(P_0||P_1), achieving the lower bound. For comparison, we show any non-sequential procedure fails provided the number of measurements grows at a rate less than log n / D(P_1||P_0), where n is the total dimension of the problem.
Also closely related to the work here are the lower bounds of @cite_9 . The lower bounds presented in @cite_9 are stronger in that they are not restricted to the assumption, but weaker in that they are in terms of the expected set difference and restricted to the Gaussian setting. The results of @cite_9 were published after the initial work in @cite_14 @cite_11 .
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_11" ], "mid": [ "2170707542", "2008786239", "" ], "abstract": [ "This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in (IEEE Trans. Inform. Theory 57 (2011) 6222–6235) and show necessary conditions on the minimum signal magnitude for both detection and estimation: if x ∈ R^n is a sparse vector with s non-zero components then it can be reliably detected in noise provided the magnitude of the non-zero components exceeds √(2/s). Furthermore, the signal support can be exactly identified provided the minimum magnitude exceeds √(2 log s). Notably there is no dependence on n, the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal, and cannot be substantially improved. In addition, these results provide further insights on the limits of adaptive compressive sensing.", "This paper presents results pertaining to sequential methods for support recovery of sparse signals in noise. Specifically, we show that any sequential measurement procedure fails provided the average number of measurements per dimension grows slower than log s / D(f0||f1) where s is the level of sparsity, and D(f0||f1) the Kullback-Leibler divergence between the underlying distributions. For comparison, we show any non-sequential procedure fails provided the number of measurements grows at a rate less than log n / D(f1||f0), where n is the total dimension of the problem. Lastly, we show that a simple procedure termed sequential thresholding guarantees exact support recovery provided the average number of measurements per dimension grows faster than (log s + log log n) / D(f0||f1), a mere additive factor more than the lower bound.", "" ] }
1212.1073
2009501432
Blind image deblurring algorithms have been improving steadily in the past years. Most state-of-the-art algorithms, however, still cannot perform perfectly in challenging cases, especially in large blur setting. In this paper, we focus on how to estimate a good blur kernel from a single blurred image based on the image structure. We found that image details caused by blur could adversely affect the kernel estimation, especially when the blur kernel is large. One effective way to remove these details is to apply image denoising model based on the total variation (TV). First, we developed a novel method for computing image structures based on the TV model, such that the structures undermining the kernel estimation will be removed. Second, we applied a gradient selection method to mitigate the possible adverse effect of salient edges and improve the robustness of kernel estimation. Third, we proposed a novel kernel estimation method, which is capable of removing noise and preserving the continuity in the kernel. Finally, we developed an adaptive weighted spatial prior to preserve sharp edges in latent image restoration. Extensive experiments testify to the effectiveness of our method on various kinds of challenging examples.
Image deblurring is a hot topic in the image processing and computer vision communities. In single image blind deblurring, early approaches usually imposed constraints on the motion blur kernel and used parameterized forms for the kernels @cite_8 @cite_26 . Recently, Fergus @cite_10 adopted a zero-mean mixture of Gaussians to fit natural image gradients, and employed a variational Bayesian method to deblur the image. Shan @cite_30 used a certain parametric model to approximate the heavy-tailed natural image prior. Cai @cite_15 assumed that latent images and kernels can be sparsely represented by an over-complete dictionary and introduced a framelet and curvelet system to obtain sparse representations of images and kernels. Levin @cite_31 illustrated the limitation of the simple maximum a posteriori (MAP) approach and proposed an efficient marginal likelihood approximation in @cite_16 . Krishnan @cite_2 introduced a new normalized sparsity prior to estimate blur kernels. Goldstein and Fattal @cite_18 estimated blur kernels from spectral irregularities. However, the kernel estimates of the aforementioned works usually contain some noise, and hard thresholding of the kernel elements destroys the inherent structure of the kernels.
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_26", "@cite_18", "@cite_8", "@cite_2", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "2141115311", "2138204001", "2103913786", "1792921166", "2010464357", "1987075379", "2171664034", "2036682493", "2098535678" ], "abstract": [ "We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an effficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.", "Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. 
On the other hand we show that since the kernel size is often smaller than the image size a MAP estimation of the kernel alone can be well constrained and accurately recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. We have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrates that the shift-invariant blur assumption made by most algorithms is often violated.", "We present a blind deconvolution algorithm based on the total variational (TV) minimization method proposed by Acar and Vogel (1994). The motivation for regularizing with the TV norm is that it is extremely effective for recovering edges of images as well as some blurring functions, e.g., motion blur and out-of-focus blur. An alternating minimization (AM) implicit iterative scheme is devised to recover the image and simultaneously identify the point spread function (PSF). Numerical results indicate that the iterative scheme is quite robust, converges very fast (especially for discontinuous blur), and both the image and the PSF can be recovered under the presence of high noise level. Finally, we remark that PSFs without sharp edges, e.g., Gaussian blur, can also be identified through the TV approach.", "We describe a new method for recovering the blur kernel in motion-blurred images based on statistical irregularities their power spectrum exhibits. This is achieved by a power-law that refines the one traditionally used for describing natural images. The new model better accounts for biases arising from the presence of large and strong edges in the image. We use this model together with an accurate spectral whitening formula to estimate the power spectrum of the blur. The blur kernel is then recovered using a phase retrieval algorithm with improved convergence and disambiguation capabilities. 
Unlike many existing methods, the new approach does not perform a maximum a posteriori estimation, which involves repeated reconstructions of the latent image, and hence offers attractive running times. We compare the new method with state-of-the-art methods and report various advantages, both in terms of efficiency and accuracy.", "Motion smear is an important visual cue for motion perception by the human vision system (HVS). However, in image analysis research, exploiting motion smear has been largely ignored. Rather, motion smear is usually considered as a degradation of images that needs to be removed. In this paper, the authors establish a computational model that estimates image motion from motion smear information-\"motion from smear\". In many real situations, the shutter of the sensing camera must be kept open long enough to produce images of adequate signal-to-noise ratio (SNR), resulting in significant motion smear in images. The authors present a new motion blur model and an algorithm that enables unique estimation of image motion. A prototype sensor system that exploits the new motion blur model has been built to acquire data for \"motion-from-smear\". Experimental results on images with both simulated smear and real smear, using the authors' \"motion-from-smear\" algorithm as well as a conventional motion estimation technique, are provided. The authors also show that temporal aliasing does not affect \"motion-from-smear\" to the same degree as it does algorithms that use displacement as a cue. \"Motion-from-smear\" provides an additional tool for motion estimation and effectively complements the existing techniques when apparent motion smear is present.", "Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. 
Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.", "Restoring a clear image from a single motion-blurred image due to camera shake has long been a challenging problem in digital imaging. Existing blind deblurring techniques either only remove simple motion blurring, or need user interactions to work on more complex cases. In this paper, we present an approach to remove motion blurring from a single image by formulating the blind blurring as a new joint optimization problem, which simultaneously maximizes the sparsity of the blur kernel and the sparsity of the clear image under certain suitable redundant tight frame systems (curvelet system for kernels and framelet system for images). Without requiring any prior information of the blur kernel as the input, our proposed approach is able to recover high-quality images from given blurred images. Furthermore, the new sparsity constraints under tight frame systems enable the application of a fast algorithm called linearized Bregman iteration to efficiently solve the proposed minimization problem. The experiments on both simulated images and real images showed that our algorithm can effectively remove complex motion blurring from natural images.", "In blind deconvolution one aims to estimate from an input blurred image y a sharp image x and an unknown blur kernel k. Recent research shows that a key to success is to consider the overall shape of the posterior distribution p(x, k) and not only its mode. 
This leads to a distinction between MAP x, k strategies which estimate the mode pair x, k and often lead to undesired results, and MAP k strategies which select the best k while marginalizing over all possible x images. The MAP k principle is significantly more robust than the MAP x, k one, yet, it involves a challenging marginalization over latent images. As a result, MAP k techniques are considered complicated, and have not been widely exploited. This paper derives a simple approximated MAP k algorithm which involves only a modest modification of common MAP x, k algorithms. We show that MAP k can, in fact, be optimized easily, with no additional computational complexity.", "Camera shake during exposure leads to objectionable image blur and ruins many photographs. Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial domain prior can better maintain visually salient image characteristics. We introduce a method to remove the effects of camera shake from seriously blurred images. The method assumes a uniform camera blur over the image and negligible in-plane camera rotation. In order to estimate the blur from the camera shake, the user must specify an image region without saturation effects. We show results for a variety of digital photographs taken from personal photo collections." ] }
1212.1073
2009501432
Blind image deblurring algorithms have been improving steadily in the past years. Most state-of-the-art algorithms, however, still cannot perform perfectly in challenging cases, especially in large blur setting. In this paper, we focus on how to estimate a good blur kernel from a single blurred image based on the image structure. We found that image details caused by blur could adversely affect the kernel estimation, especially when the blur kernel is large. One effective way to remove these details is to apply image denoising model based on the total variation (TV). First, we developed a novel method for computing image structures based on the TV model, such that the structures undermining the kernel estimation will be removed. Second, we applied a gradient selection method to mitigate the possible adverse effect of salient edges and improve the robustness of kernel estimation. Third, we proposed a novel kernel estimation method, which is capable of removing noise and preserving the continuity in the kernel. Finally, we developed an adaptive weighted spatial prior to preserve sharp edges in latent image restoration. Extensive experiments testify to the effectiveness of our method on various kinds of challenging examples.
After obtaining the blur kernel, the blind deblurring problem becomes a non-blind deconvolution. Early approaches such as the Wiener filter and Richardson-Lucy deconvolution @cite_35 usually suffer from noise and ringing artifacts. Yuan @cite_25 proposed a progressive inter-scale and intra-scale deconvolution approach based on the bilateral Richardson-Lucy method to reduce ringing artifacts. Recent works mainly focus on natural image statistics @cite_30 @cite_0 to preserve the properties of latent images and suppress ringing artifacts. Joshi @cite_33 used local color statistics derived from the image as a constraint to guide latent image restoration. The works in @cite_6 @cite_32 used TV regularization to restore latent images, but isotropic TV regularization results in a staircasing effect.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_32", "@cite_6", "@cite_0", "@cite_25" ], "mid": [ "2141115311", "2088909704", "2167053624", "1978333359", "1598281290", "2154571593", "2110061212" ], "abstract": [ "We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an effficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.", "", "Image blur and noise are difficult to avoid in many situations and can often ruin a photograph. We present a novel image deconvolution algorithm that deblurs and denoises an image given a known shift-invariant blur kernel. Our algorithm uses local color statistics derived from the image as a constraint in a unified framework that can be used for deblurring, denoising, and upsampling. A pixel's color is required to be a linear combination of the two most prevalent colors within a neighborhood of the pixel. 
This two-color prior has two major benefits: it is tuned to the content of the particular image and it serves to decouple edge sharpness from edge strength. Our unified algorithm for deblurring and denoising out-performs previous methods that are specialized for these individual applications. We demonstrate this with both qualitative results and extensive quantitative comparisons that show that we can out-perform previous methods by approximately 1 to 3 dB.", "We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or @math -linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.", "We discuss a few new motion deblurring problems that are significant to kernel estimation and non-blind deconvolution. We found that strong edges do not always profit kernel estimation, but instead under certain circumstances degrade it. This finding leads to a new metric to measure the usefulness of image edges in motion deblurring and a gradient selection process to mitigate their possible adverse effect. 
We also propose an efficient and high-quality kernel estimation method based on using the spatial prior and the iterative support detection (ISD) kernel refinement, which avoids hard threshold of the kernel elements to enforce sparsity. We employ the TV-l1 deconvolution model, solved with a new variable substitution scheme to robustly suppress noise.", "A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.", "Ringing is the most disturbing artifact in the image deconvolution. In this paper, we present a progressive inter-scale and intra-scale non-blind image deconvolution approach that significantly reduces ringing. 
Our approach is built on a novel edge-preserving deconvolution algorithm called bilateral Richardson-Lucy (BRL) which uses a large spatial support to handle large blur. We progressively recover the image from a coarse scale to a fine scale (inter-scale), and progressively restore image details within every scale (intra-scale). To perform the inter-scale deconvolution, we propose a joint bilateral Richardson-Lucy (JBRL) algorithm so that the recovered image in one scale can guide the deconvolution in the next scale. In each scale, we propose an iterative residual deconvolution to progressively recover image details. The experimental results show that our progressive deconvolution can produce images with very little ringing for large blur kernels." ] }
1212.1073
2009501432
Blind image deblurring algorithms have been improving steadily in the past years. Most state-of-the-art algorithms, however, still cannot perform perfectly in challenging cases, especially in large-blur settings. In this paper, we focus on how to estimate a good blur kernel from a single blurred image based on the image structure. We found that image details caused by blur could adversely affect the kernel estimation, especially when the blur kernel is large. One effective way to remove these details is to apply an image denoising model based on the total variation (TV). First, we developed a novel method for computing image structures based on the TV model, such that the structures undermining the kernel estimation will be removed. Second, we applied a gradient selection method to mitigate the possible adverse effect of salient edges and improve the robustness of kernel estimation. Third, we proposed a novel kernel estimation method, which is capable of removing noise and preserving the continuity in the kernel. Finally, we developed an adaptive weighted spatial prior to preserve sharp edges in latent image restoration. Extensive experiments testify to the effectiveness of our method on various kinds of challenging examples.
It is noted that there has also been active research on spatially-varying blind deblurring methods. Interested readers are referred to @cite_7 @cite_22 @cite_5 @cite_12 @cite_28 for more details.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_28", "@cite_5", "@cite_12" ], "mid": [ "1988446025", "2055616670", "2002630316", "1598936309", "" ], "abstract": [ "We present a deblurring algorithm that uses a hardware attachment coupled with a natural image prior to deblur images from consumer cameras. Our approach uses a combination of inexpensive gyroscopes and accelerometers in an energy optimization framework to estimate a blur function from the camera's acceleration and angular velocity during an exposure. We solve for the camera motion at a high sampling rate during an exposure and infer the latent image using a joint optimization. Our method is completely automatic, handles per-pixel, spatially-varying blur, and out-performs the current leading image-based methods. Our experiments show that it handles large kernels -- up to at least 100 pixels, with a typical size of 30 pixels. We also present a method to perform \"ground-truth\" measurements of camera motion blur. We use this method to validate our hardware and deconvolution approach. To the best of our knowledge, this is the first work that uses 6 DOF inertial sensors for dense, per-pixel spatially-varying image deblurring and the first work to gather dense ground-truth measurements for camera-shake blur.", "Blur from camera shake is mostly due to the 3D rotation of the camera, resulting in a blur kernel that can be significantly non-uniform across the image. However, most current deblurring methods model the observed image as a convolution of a sharp image with a uniform blur kernel. We propose a new parametrized geometric model of the blurring process in terms of the rotational velocity of the camera during exposure. We apply this model to two different algorithms for camera shake removal: the first one uses a single blurry image (blind deblurring), while the second one uses both a blurry image and a sharp but noisy image of the same scene. 
We show that our approach makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case, and demonstrate its effectiveness with experiments on real images.", "Many blind motion deblur methods model the motion blur as a spatially invariant convolution process. However, motion blur caused by the camera movement in 3D space during shutter time often leads to spatially varying blurring effect over the image. In this paper, we proposed an efficient two-stage approach to remove spatially-varying motion blurring from a single photo. There are three main components in our approach: (i) a minimization method of estimating region-wise blur kernels by using both image information and correlations among neighboring kernels, (ii) an interpolation scheme of constructing pixel-wise blur matrix from region-wise blur kernels, and (iii) a non-blind deblurring method robust to kernel errors. The experiments showed that the proposed method outperformed the existing software based approaches on tested real images.", "We present a novel single image deblurring method to estimate spatially non-uniform blur that results from camera shake. We use existing spatially invariant deconvolution methods in a local and robust way to compute initial estimates of the latent image. The camera motion is represented as a Motion Density Function (MDF) which records the fraction of time spent in each discretized portion of the space of all possible camera poses. Spatially varying blur kernels are derived directly from the MDF. We show that 6D camera motion is well approximated by 3 degrees of motion (in-plane translation and rotation) and analyze the scope of this approximation. We present results on both synthetic and captured data. Our system out-performs current approaches which make the assumption of spatially invariant blur.", "" ] }
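The TV-based structure extraction described above can be illustrated with a minimal denoising sketch: a plain gradient descent on a smoothed ROF energy. All parameter values here are hypothetical, and this is a generic TV solver, not the authors' method.

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.1, iters=200, eps=1e-6):
    """Gradient descent on the smoothed ROF energy:
    E(u) = 0.5*||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    u = img.copy()
    for _ in range(iters):
        # forward differences, replicating the last row/column at the border
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient (periodic wrap for simplicity)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)   # descent on E(u)
    return u
```

Because the TV penalty suppresses small oscillations while keeping large jumps, the output retains salient structure (useful for kernel estimation) and discards noise-like detail.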
1212.0979
2951713259
We present the first combinatorial polynomial time algorithm for computing the equilibrium of the Arrow-Debreu market model with linear utilities.
All algorithms mentioned so far are centralized in the sense that they need to know all the utilities at the start and, in the case of iterative algorithms, the global state of the market, i.e., all prices and the demand for each good. Cole and Fleischer @cite_4 and Cheung, Cole, and Rastogi @cite_13 explore local algorithms, where each agent only knows the price and demand of their own good and, moreover, the market runs for many periods.
{ "cite_N": [ "@cite_13", "@cite_4" ], "mid": [ "2079823104", "1970021516" ], "abstract": [ "This paper continues the study, initiated by Cole and Fleischer in [Cole and Fleischer 2008], of the behavior of a tatonnement price update rule in Ongoing Fisher Markets. The prior work showed fast convergence toward an equilibrium when the goods satisfied the weak gross substitutes property and had bounded demand and income elasticities. The current work shows that fast convergence also occurs for the following type of markets: All pairs of goods are complements to each other, and the demand and income elasticities are suitably bounded. In particular, these conditions hold when all buyers in the market are equipped with CES utilities, where all the parameters ρ, one per buyer, satisfy -1 In addition, we extend the above result to markets in which a mixture of complements and substitutes occur. This includes characterizing a class of nested CES utilities for which fast convergence holds. An interesting technical contribution, which may be of independent interest, is an amortized analysis for handling asynchronous events in settings in which there are a mix of continuous changes and discrete events.", "Why might markets tend toward and remain near equilibrium prices? In an effort to shed light on this question from an algorithmic perspective, this paper formalizes the setting of Ongoing Markets, by contrast with the classic market scenario, which we term One-Time Markets. The Ongoing Market allows trade at non-equilibrium prices, and, as its name suggests, continues over time. As such, it appears to be a more plausible model of actual markets. 
For both market settings, this paper defines and analyzes variants of a simple tatonnement algorithm that differs from previous algorithms that have been subject to asymptotic analysis in three significant respects: the price update for a good depends only on the price, demand, and supply for that good, and on no other information; the price update for each good occurs distributively and asynchronously; the algorithms work (and the analyses hold) from an arbitrary starting point. Our algorithm introduces a new and natural update rule. We show that this update rule leads to fast convergence toward equilibrium prices in a broad class of markets that satisfy the weak gross substitutes property. These are the first analyses for computationally and informationally distributed algorithms that demonstrate polynomial convergence. Our analysis identifies three parameters characterizing the markets, which govern the rate of convergence of our protocols. These parameters are, broadly speaking: 1. A bound on the fractional rate of change of demand for each good with respect to fractional changes in its price. 2. A bound on the fractional rate of change of demand for each good with respect to fractional changes in wealth. 3. The closeness of the market to a Fisher market (a market with buyers starting with money alone). We give two types of protocols. The first type assumes global knowledge of only (an upper bound on) the first parameter. For this protocol, we also provide a matching lower bound in terms of these parameters for the One-Time Market. Our second protocol, which is analyzed for the One-Time Market alone, assumes no global knowledge whatsoever." ] }
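The local price-update dynamic can be illustrated on a toy Fisher market with Cobb-Douglas buyers. The numbers are illustrative and the analysis in the cited work covers a much broader class of markets; the key point sketched here is that each good's update uses only that good's own price, demand, and supply.

```python
def tatonnement(budgets, shares, prices, supply=1.0, lam=0.5, rounds=60):
    """Multiplicative price updates driven by each good's own excess demand.
    budgets[i]: money of buyer i; shares[i][j]: Cobb-Douglas weight of buyer i
    on good j (rows sum to 1). Each good has the same supply."""
    n_goods = len(prices)
    for _ in range(rounds):
        for j in range(n_goods):
            # Cobb-Douglas demand for good j depends only on its own price
            demand = sum(m * a[j] for m, a in zip(budgets, shares)) / prices[j]
            excess = demand / supply - 1.0        # relative excess demand
            prices[j] *= (1.0 + lam * excess)     # local update rule
    return prices
```

For Cobb-Douglas buyers the spending on each good is fixed, so the update reduces to a contraction toward the equilibrium price (total spending divided by supply), regardless of the starting point.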
1212.0763
1613314486
It is today accepted that matrix factorization models allow high-quality rating prediction in recommender systems. However, a major drawback of matrix factorization is its static nature, which results in a progressive decline in prediction accuracy after each factorization. This is because newly obtained ratings are not taken into account until a new factorization is computed, which cannot be done very often because of the high cost of matrix factorization. In this paper, aiming at improving the accuracy of recommender systems, we propose a cluster-based matrix factorization technique that enables online integration of new ratings. Thus, we significantly enhance the obtained predictions between two matrix factorizations. We use finer-grained user biases by clustering similar items into groups and allocating in these groups a bias to each user. The experiments we did on large datasets demonstrate the efficiency of our approach.
The authors of @cite_1 focus on users (and items) that have small rating profiles. They present an approximation method that updates the matrices of an existing model (previously generated by MF). The proposed algorithms retrain the factor vector of the concerned user or item and keep all the other entries in the matrix unchanged. The time complexity of this method is @math , where @math is the given number of factors and @math the number of iterations. The whole factor vector of the user is retrained (i.e., his rating profile over all the items), which makes their solution more time-consuming than ours ( @math , see Section ). They also do not consider user biases, which might be very important for the accuracy of the predictions.
{ "cite_N": [ "@cite_1" ], "mid": [ "1990846291" ], "abstract": [ "Regularized matrix factorization models are known to generate high quality rating predictions for recommender systems. One of the major drawbacks of matrix factorization is that once computed, the model is static. For real-world applications dynamic updating a model is one of the most important tasks. Especially when ratings on new users or new items come in, updating the feature matrices is crucial. In this paper, we generalize regularized matrix factorization (RMF) to regularized kernel matrix factorization (RKMF). Kernels provide a flexible method for deriving new matrix factorization methods. Furthermore with kernels nonlinear interactions between feature vectors are possible. We propose a generic method for learning RKMF models. From this method we derive an online-update algorithm for RKMF models that allows to solve the new-user new-item problem. Our evaluation indicates that our proposed online-update methods are accurate in approximating a full retrain of a RKMF model while the runtime of online-updating is in the range of milliseconds even for huge datasets like Netflix." ] }
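The single-vector retraining idea above (update only the concerned user's factors, keep everything else fixed) can be sketched as a small ridge regression against the fixed item factors. The shapes and regularization below are hypothetical, and this is a sketch of the idea rather than the cited RKMF online-update itself.

```python
import numpy as np

def update_user_factors(Q, rated_items, ratings, reg=0.1):
    """Refit one user's factor vector against the fixed item-factor matrix Q
    (n_items x k) by ridge regression. Cost is O(|rated_items| * k^2 + k^3),
    independent of the number of other users in the model."""
    A = Q[rated_items]                        # factors of the items this user rated
    k = Q.shape[1]
    rhs = A.T @ np.asarray(ratings, dtype=float)
    # solve (A^T A + reg * I) p_u = A^T r
    return np.linalg.solve(A.T @ A + reg * np.eye(k), rhs)
```

Because only one k-dimensional vector is refit, the update can be run each time a new rating arrives, without touching the rest of the model.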
1212.0763
1613314486
It is today accepted that matrix factorization models allow high-quality rating prediction in recommender systems. However, a major drawback of matrix factorization is its static nature, which results in a progressive decline in prediction accuracy after each factorization. This is because newly obtained ratings are not taken into account until a new factorization is computed, which cannot be done very often because of the high cost of matrix factorization. In this paper, aiming at improving the accuracy of recommender systems, we propose a cluster-based matrix factorization technique that enables online integration of new ratings. Thus, we significantly enhance the obtained predictions between two matrix factorizations. We use finer-grained user biases by clustering similar items into groups and allocating in these groups a bias to each user. The experiments we did on large datasets demonstrate the efficiency of our approach.
The authors of @cite_12 propose a fast online bilinear factor model (called FOBFM). It uses an offline analysis of item/user features to initialize the online models. Moreover, it computes linear projections that reduce the dimensionality and, in turn, allow both user and item factors to be learned quickly in an online fashion. Their offline analysis uses a large amount of historical data (e.g., keywords, categories, browsing behavior), and their model needs to learn both user and item factors online in order to integrate the new ratings. Their technique is therefore much more costly than ours. Furthermore, our approach works even in applications where no item/user features are available, which is not demonstrated in the experiments on the FOBFM model.
{ "cite_N": [ "@cite_12" ], "mid": [ "2142057089" ], "abstract": [ "Recommender problems with large and dynamic item pools are ubiquitous in web applications like content optimization, online advertising and web search. Despite the availability of rich item meta-data, excess heterogeneity at the item level often requires inclusion of item-specific \"factors\" (or weights) in the model. However, since estimating item factors is computationally intensive, it poses a challenge for time-sensitive recommender problems where it is important to rapidly learn factors for new items (e.g., news articles, event updates, tweets) in an online fashion. In this paper, we propose a novel method called FOBFM (Fast Online Bilinear Factor Model) to learn item-specific factors quickly through online regression. The online regression for each item can be performed independently and hence the procedure is fast, scalable and easily parallelizable. However, the convergence of these independent regressions can be slow due to high dimensionality. The central idea of our approach is to use a large amount of historical data to initialize the online models based on offline features and learn linear projections that can effectively reduce the dimensionality. We estimate the rank of our linear projections by taking recourse to online model selection based on optimizing predictive likelihood. Through extensive experiments, we show that our method significantly and uniformly outperforms other competitive methods and obtains relative lifts that are in the range of 10-15 in terms of predictive log-likelihood, 200-300 for a rank correlation metric on a proprietary My Yahoo! dataset; it obtains 9 reduction in root mean squared error over the previously best method on a benchmark MovieLens dataset using a time-based train test data split." ] }
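The dimensionality-reduction idea behind FOBFM can be sketched as follows: if historical per-item weight vectors concentrate near a low-dimensional subspace, an offline SVD yields a projection B, and the online fit for a new item needs only r parameters instead of d. All data and shapes below are synthetic; the actual FOBFM selects the projection rank by online model selection on predictive likelihood rather than by a fixed SVD as here.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, M = 40, 3, 100      # feature dim, projection rank, number of historical items

# --- offline: historical per-item weight estimates concentrate near an
# r-dimensional subspace; recover it with a truncated SVD ---
U, _ = np.linalg.qr(rng.standard_normal((d, r)))      # true latent subspace
W_hist = rng.standard_normal((M, r)) @ U.T            # historical item weights
W_hist += 0.01 * rng.standard_normal(W_hist.shape)    # estimation noise
_, _, Vt = np.linalg.svd(W_hist, full_matrices=False)
B = Vt[:r].T                                          # d x r learned projection

# --- online: a new item's weights also lie near the subspace, so an
# r-dimensional least-squares fit on projected features suffices ---
w_new = U @ rng.standard_normal(r)
X = rng.standard_normal((60, d))
y = X @ w_new + 0.01 * rng.standard_normal(60)
theta, *_ = np.linalg.lstsq(X @ B, y, rcond=None)     # only r parameters to learn
pred = (X @ B) @ theta
```

The online step solves a tiny r-dimensional regression per item, which is what makes per-item factor learning fast and parallelizable.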
1212.0763
1613314486
It is today accepted that matrix factorization models allow high-quality rating prediction in recommender systems. However, a major drawback of matrix factorization is its static nature, which results in a progressive decline in prediction accuracy after each factorization. This is because newly obtained ratings are not taken into account until a new factorization is computed, which cannot be done very often because of the high cost of matrix factorization. In this paper, aiming at improving the accuracy of recommender systems, we propose a cluster-based matrix factorization technique that enables online integration of new ratings. Thus, we significantly enhance the obtained predictions between two matrix factorizations. We use finer-grained user biases by clustering similar items into groups and allocating in these groups a bias to each user. The experiments we did on large datasets demonstrate the efficiency of our approach.
The authors of @cite_2 point out the problem of data dynamicity in latent factor detection approaches. They propose an online nonnegative matrix factorization (ONMF) algorithm that detects latent factors and tracks their evolution as the data evolve. Recall that a nonnegative matrix factorization is a factorization in which all the entries of both factor matrices @math and @math are nonnegative. Their solution is based on the following property: for two full-rank decompositions @math and @math of a matrix @math , there exists an invertible matrix @math satisfying @math and @math . They use this relation to integrate the new ratings. Although the process seems relatively fast, its computation time is greater than ours. This is due to the fact that their technique updates the whole profiles of all the users, whereas our solution limits the computations to the bias of the concerned user.
{ "cite_N": [ "@cite_2" ], "mid": [ "2169707717" ], "abstract": [ "Detecting and tracking latent factors from temporal data is an important task. Most existing algorithms for latent topic detection such as Nonnegative Matrix Factorization (NMF) have been designed for static data. These algorithms are unable to capture the dynamic nature of temporally changing data streams. In this paper, we put forward an online NMF (ONMF) algorithm to detect latent factors and track their evolution while the data evolve. By leveraging the already detected latent factors and the newly arriving data, the latent factors are automatically and incrementally updated to reflect the change of factors. Furthermore, by imposing orthogonality on the detected latent factors, we can not only guarantee the unique solution of NMF but also alleviate the partial-data problem, which may cause NMF to fail when the data are scarce or the distribution is incomplete. Experiments on both synthesized data and real data validate the efficiency and effectiveness of our ONMF algorithm." ] }
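The full-rank decomposition relation that the ONMF update relies on is easy to check numerically: if V = W H and T is invertible, then W' = W T together with H' = T^{-1} H is another decomposition of the same V. The matrices below are random illustrations; ONMF additionally keeps the factors nonnegative and orthogonal, which this sketch does not enforce.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r, n = 6, 3, 8
W = rng.random((m, r))               # full column rank with probability 1
H = rng.random((r, n))
V = W @ H                            # the matrix being factorized

T = rng.random((r, r)) + np.eye(r)   # invertible with probability 1
W2 = W @ T                           # transformed factors ...
H2 = np.linalg.inv(T) @ H            # ... reproduce exactly the same V
```

In the online setting this means new data can be folded in by updating T-like transforms of the existing factors instead of refactorizing from scratch.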
1212.0042
2088382258
As the use of biometrics becomes more wide-spread, the privacy concerns that stem from the use of biometrics are becoming more apparent. As the usage of mobile devices grows, so does the desire to implement biometric identification into such devices. A large majority of mobile devices being used are mobile phones. While work is being done to implement different types of biometrics into mobile phones, such as photo based biometrics, voice is a more natural choice. The idea of voice as a biometric identifier has been around a long time. One of the major concerns with using voice as an identifier is the instability of voice. We have developed a protocol that addresses those instabilities and preserves privacy. This paper describes a novel protocol that allows a user to authenticate using voice on a mobile remote device without compromising their privacy. We first discuss the Vaulted Verification protocol, which has recently been introduced in research literature, and then describe its limitations. We then introduce a novel adaptation and extension of the Vaulted Verification protocol to voice, dubbed Vaulted Voice Verification ( V 3 ). Following that we show a performance evaluation and then conclude with a discussion of security and future work.
Many techniques have been used in an effort to exploit voice as a biometric identifier. The idea of privacy and security using voice goes back many years, as in 2001 when @cite_2 created a system for biometric key generation. However, that system was impractical. There has been a great deal of work on voice, as discussed in the survey papers of @cite_12 and @cite_7, which cover the state of the art. Here, we only discuss the papers most relevant to this work.
{ "cite_N": [ "@cite_7", "@cite_12", "@cite_2" ], "mid": [ "2129905915", "2063355793", "2131183039" ], "abstract": [ "Form a privacy perspective most concerns against the common use of biometrics arise from the storage and misuse of biometric data. Biometric cryptosystems and cancelable biometrics represent emerging technologies of biometric template protection addressing these concerns and improving public confidence and acceptance of biometrics. In addition, biometric cryptosystems provide mechanisms for biometric-dependent key-release. In the last years a significant amount of approaches to both technologies have been published. A comprehensive survey of biometric cryptosystems and cancelable biometrics is presented. State-of-the-art approaches are reviewed based on which an in-depth discussion and an outlook to future prospects are given.", "Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometrics technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security which is an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intrauser variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. 
A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.", "We propose a technique to reliably generate a cryptographic key from a user's voice while speaking a password. The key resists cryptanalysis even against an attacker who captures all system information related to generating or verifying the cryptographic key. Moreover, the technique is sufficiently robust to enable the user to reliably regenerate the key by uttering her password again. We describe an empirical evaluation of this technique using 250 utterances recorded from 50 users." ] }
1212.0042
2088382258
As the use of biometrics becomes more wide-spread, the privacy concerns that stem from the use of biometrics are becoming more apparent. As the usage of mobile devices grows, so does the desire to implement biometric identification into such devices. A large majority of mobile devices being used are mobile phones. While work is being done to implement different types of biometrics into mobile phones, such as photo based biometrics, voice is a more natural choice. The idea of voice as a biometric identifier has been around a long time. One of the major concerns with using voice as an identifier is the instability of voice. We have developed a protocol that addresses those instabilities and preserves privacy. This paper describes a novel protocol that allows a user to authenticate using voice on a mobile remote device without compromising their privacy. We first discuss the Vaulted Verification protocol, which has recently been introduced in research literature, and then describe its limitations. We then introduce a novel adaptation and extension of the Vaulted Verification protocol to voice, dubbed Vaulted Voice Verification ( V 3 ). Following that we show a performance evaluation and then conclude with a discussion of security and future work.
In the last couple of years, different groups have begun to focus on speech template protection using GMMs. Teoh and Chong @cite_13 used Probabilistic Random Projections (PRP) to protect the speaker template while doing speaker verification. In their research, the template is hidden through a process of random projections into a subspace. The EERs obtained in their experiments are as low as 0%. V 3 differs from this work because it is a client-server protocol that can be used on mobile devices, and the template is not stored on the server. V 3 also improves over the technique discussed by Teoh and Chong in that we use multiple templates instead of a single template, which improves the certainty of the final score.
{ "cite_N": [ "@cite_13" ], "mid": [ "2130054403" ], "abstract": [ "Due to biometric template characteristics that are susceptible to non-revocable and privacy invasion, cancellable biometrics has been introduced to tackle these issues. In this paper, we present a two-factor cancellable formulation for speech biometrics, which we refer as probabilistic random projection (PRP). PRP offers strong protection on speech template by hiding the actual speech feature through the random subspace projection process. Besides, the speech template is replaceable and can be reissued when it is compromised. Our proposed method enables the generation of different speech templates from the same speech feature, which means linkability is not exited between the speech templates. The formulation of the cancellable biometrics retains its performance as for the conventional biometric. Besides that, we also propose 2D subspace projection techniques for speech feature extraction, namely 2D Principle Component Analysis (2DPCA) and 2D CLAss-Featuring Information Compression (2DCLAFIC) to accommodate the requirements of PRP formulation." ] }
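The random-projection idea behind PRP can be sketched as follows: a user-specific random matrix maps the speech feature vector into a subspace, so only the projected template is stored, and a compromised template can be revoked by drawing a new projection. The toy Gaussian features and the plain Johnson-Lindenstrauss-style projection below are illustrative assumptions, not the authors' exact probabilistic formulation.

```python
import numpy as np

rng = np.random.default_rng(42)
d, k = 128, 32                        # feature dim, projected dim

def enroll(feature, seed):
    """User-specific random projection; reissuing = drawing a new seed."""
    R = np.random.default_rng(seed).standard_normal((k, d)) / np.sqrt(k)
    return R, R @ feature             # store only (seed, projected template)

def score(R, template, probe):
    """Cosine similarity computed entirely in the projected domain."""
    z = R @ probe
    return float(z @ template / (np.linalg.norm(z) * np.linalg.norm(template)))

user = rng.standard_normal(d)
R, tmpl = enroll(user, seed=7)
genuine = user + 0.3 * rng.standard_normal(d)     # same speaker, noisy sample
impostor = rng.standard_normal(d)                 # different speaker
```

Random projections approximately preserve angles, so genuine comparisons still score much higher than impostor ones even though the raw feature vector is never stored.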
1212.0178
2952686213
In a communication network, point-to-point traffic volumes over time are critical for designing protocols that route information efficiently and for maintaining security, whether at the scale of an internet service provider or within a corporation. While technically feasible, the direct measurement of point-to-point traffic imposes a heavy burden on network performance and is typically not implemented. Instead, indirect aggregate traffic volumes are routinely collected. We consider the problem of estimating point-to-point traffic volumes, x_t, from aggregate traffic volumes, y_t, given information about the network routing protocol encoded in a matrix A. This estimation task can be reformulated as finding the solutions to a sequence of ill-posed linear inverse problems, y_t = A x_t, since the number of origin-destination routes of interest is higher than the number of aggregate measurements available. Here, we introduce a novel multilevel state-space model of aggregate traffic volumes with realistic features. We implement a naive strategy for estimating unobserved point-to-point traffic volumes from indirect measurements of aggregate traffic, based on particle filtering. We then develop a more efficient two-stage inference strategy that relies on model-based regularization: a simple model is used to calibrate regularization parameters that lead to efficient and scalable inference in the multilevel state-space model. We apply our methods to corporate and academic networks, where we show that the proposed inference strategy outperforms existing approaches and scales to larger networks. We also design a simulation study to explore the factors that influence the performance. Our results suggest that model-based regularization may be an efficient strategy for inference in other complex multilevel models.
Applied research related to the type of problems we consider can be traced back to the literature on transportation and operations research. There the focus is on estimating a single set of origin-destination traffic volumes, @math , from integer-valued traffic counts over time, @math . The line of research in statistics with application to communication networks coined the term network tomography, extending the approach to positron emission tomography of @cite_1 . In this latter setting, statistical approaches may be able to leverage knowledge about a physical process, explicitly specified by a model, to assist the inference task. In the network tomography setting, in contrast, we can only rely on knowledge about the routing matrix and statistics about the traffic time series.
{ "cite_N": [ "@cite_1" ], "mid": [ "2069629287" ], "abstract": [ "Previous models for emission tomography (ET) do not distinguish the physics of ET from that of transmission tomography. We give a more accurate general mathematical model for ET where an unknown emission density ? = ?(x, y, z) generates, and is to be reconstructed from, the number of counts n*(d) in each of D detector units d. Within the model, we give an algorithm for determining an estimate ? of ? which maximizes the probability p(n*|?) of observing the actual detector count data n* over all possible densities ?. Let independent Poisson variables n(b) with unknown means ?(b), b = 1, ···, B represent the number of unobserved emissions in each of B boxes (pixels) partitioning an object containing an emitter. Suppose each emission in box b is detected in detector unit d with probability p(b, d), d = 1, ···, D with p(b, d) a one-step transition matrix, assumed known. We observe the total number n* = n*(d) of emissions in each detector unit d and want to estimate the unknown ? = ?(b), b = 1, ···, B. For each ?, the observed data n* has probability or likelihood p(n*|?). The EM algorithm of mathematical statistics starts with an initial estimate ?0 and gives the following simple iterative procedure for obtaining a new estimate ?new, from an old estimate ?old, to obtain ?k, k = 1, 2, ···, ?new(b)= ?old(b) ?Dd=1 n*(d)p(b,d) ??old(b?)p(b?,d),b=1,···B." ] }
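The ill-posed inverse problem y_t = A x_t can be regularized in the simplest possible way: ridge-penalized least squares with a nonnegativity clamp gives a crude point estimate of origin-destination volumes. The tiny routing matrix below is a made-up example, and the paper's model-based regularization is far richer than this sketch.

```python
import numpy as np

def tomo_estimate(A, y, lam=1e-2):
    """Ridge-regularized estimate of OD volumes x from link loads y = A x.
    A is an (n_links x n_routes) 0/1 routing matrix with n_links < n_routes."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return np.maximum(x, 0.0)         # traffic volumes are nonnegative

# a tiny 2-link, 3-route network (underdetermined: 2 equations, 3 unknowns)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([3.0, 1.0, 2.0])
y = A @ x_true
x_hat = tomo_estimate(A, y)
```

Because the system is underdetermined, the ridge solution is only one of infinitely many volume assignments consistent with the link loads, which is precisely why the paper brings in temporal structure and a multilevel model.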
1212.0167
2235191131
Sina Weibo, which was launched in 2009, is the most popular Chinese micro-blogging service. It has been reported that Sina Weibo had more than 400 million registered users by the end of the third quarter of 2012. Sina Weibo and Twitter have a lot in common; however, in terms of following preference, Sina Weibo users, most of whom are Chinese, behave differently compared with those of Twitter. This work is based on a data set of Sina Weibo which contains 80.8 million users' profiles and 7.2 billion relations and a large data set of Twitter. First, some basic features of Sina Weibo and Twitter are analyzed, such as degree and activeness distribution, correlation between degree and activeness, and the degree of separation. Then the following preference is investigated by studying the assortative mixing, friend similarities, following distribution, edge balance ratio, and ranking correlation, where the edge balance ratio is newly proposed to measure the balance property of graphs. It is found that Sina Weibo has a lower reciprocity rate, more positively balanced relations, and is more disassortative. Coinciding with Asian traditional culture, the following preference of Sina Weibo users is more concentrated and hierarchical: they are more likely to follow people at higher or the same social levels and less likely to follow people lower than themselves. In contrast, the same kind of following preference is weaker in Twitter. Twitter users are open as they follow people from all levels, which accords with its global characteristic and the prevalence of western civilization. The message forwarding behavior is studied by displaying the propagation levels, delays, and critical users. The following preference derives not only from usage habits but also from underlying reasons such as personalities and social moralities that are worthy of future research.
For other online social networks, Flickr, LiveJournal, Orkut, and YouTube are studied in @cite_12 . The topological characteristics of Cyworld, MySpace, and orkut were investigated in @cite_25 , where average degree, average clustering coefficient, assortativity, degree of separation, and other properties of these online social network services were examined.
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "2121761994", "2115022330" ], "abstract": [ "Social networking services are a fast-growing business in the Internet. However, it is unknown if online relationships and their growth patterns are the same as in real-life social networks. In this paper, we compare the structures of three online social networking services: Cyworld, MySpace, and orkut, each with more than 10 million users, respectively. We have access to complete data of Cyworld's ilchon (friend) relationships and analyze its degree distribution, clustering property, degree correlation, and evolution over time. We also use Cyworld data to evaluate the validity of snowball sampling method, which we use to crawl and obtain partial network topologies of MySpace and orkut. Cyworld, the oldest of the three, demonstrates a changing scaling behavior over time in degree distribution. The latest Cyworld data's degree distribution exhibits a multi-scaling behavior, while those of MySpace and orkut have simple scaling behaviors with different exponents. Very interestingly, each of the two exponents corresponds to the different segments in Cyworld's degree distribution. Certain online social networking services encourage online activities that cannot be easily copied in real life; we show that they deviate from close-knit online social networks which show a similar degree correlation pattern to real-life social networks.", "Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. 
This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems." ] }
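One of the simplest structural measures compared in the studies above, the reciprocity rate, can be computed directly from a directed edge list. The tiny three-edge graph below is a made-up illustration, not data from any of the cited measurements.

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) whose reverse edge (v, u) also exists."""
    edge_set = set(edges)
    return sum((v, u) in edge_set for (u, v) in edge_set) / len(edge_set)

# Made-up graph: one mutual following pair plus one unreciprocated follow,
# so 2 of the 3 directed edges are reciprocated.
rate = reciprocity([(1, 2), (2, 1), (1, 3)])
```

A lower value of this rate on one platform than on another is exactly the kind of "lower reciprocity" finding reported for Sina Weibo versus Twitter.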
1212.0167
2235191131
Sina Weibo, which was launched in 2009, is the most popular Chinese micro-blogging service. It has been reported that Sina Weibo had more than 400 million registered users by the end of the third quarter of 2012. Sina Weibo and Twitter have a lot in common; however, in terms of following preference, Sina Weibo users, most of whom are Chinese, behave differently compared with those of Twitter. This work is based on a data set of Sina Weibo which contains 80.8 million users' profiles and 7.2 billion relations and a large data set of Twitter. First, some basic features of Sina Weibo and Twitter are analyzed, such as degree and activeness distribution, correlation between degree and activeness, and the degree of separation. Then the following preference is investigated by studying the assortative mixing, friend similarities, following distribution, edge balance ratio, and ranking correlation, where the edge balance ratio is newly proposed to measure the balance property of graphs. It is found that Sina Weibo has a lower reciprocity rate, more positively balanced relations, and is more disassortative. Coinciding with Asian traditional culture, the following preference of Sina Weibo users is more concentrated and hierarchical: they are more likely to follow people at higher or the same social levels and less likely to follow people lower than themselves. In contrast, the same kind of following preference is weaker in Twitter. Twitter users are open as they follow people from all levels, which accords with its global characteristic and the prevalence of western civilization. The message forwarding behavior is studied by displaying the propagation levels, delays, and critical users. The following preference derives not only from usage habits but also from underlying reasons such as personalities and social moralities that are worthy of future research.
Micro-blogging services in China have experienced rapid growth. It is believed that Sina Weibo, the most popular one in China, may exceed Twitter in the number of users due to the huge Chinese netizen base. However, there are few quantitative works on Sina Weibo and the differences between these two micro-blogging magnates. Users' basic behaviors such as access ways, writing style, topics, and interest change were studied in @cite_21 , which analyzed more than 40 million micro-blogging activities but did not consider following relations. The patterns of advertisement propagation in Sina Weibo were studied in @cite_11 : the authors extracted propagation features such as volume, topology, and time, and then used the K-means clustering algorithm to group the messages.
{ "cite_N": [ "@cite_21", "@cite_11" ], "mid": [ "2053241453", "1972772389" ], "abstract": [ "In this article, we analyze and compare user behavior on two different microblogging platforms: (1) Sina Weibo which is the most popular microblogging service in China and (2) Twitter. Such a comparison has not been done before at this scale and is therefore essential for understanding user behavior on microblogging services. In our study, we analyze more than 40 million microblogging activities and investigate microblogging behavior from different angles. We (i) analyze how people access microblogs and (ii) compare the writing style of Sina Weibo and Twitter users by analyzing textual features of microposts. Based on semantics and sentiments that our user modeling framework extracts from English and Chinese posts, we study and compare (iii) the topics and (iv) sentiment polarities of posts on Sina Weibo and Twitter. Furthermore, (v) we investigate the temporal dynamics of the microblogging behavior such as the drift of user interests over time. Our results reveal significant differences in the microblogging behavior on Sina Weibo and Twitter and deliver valuable insights for multilingual and culture-aware user modeling based on microblogging data. We also explore the correlation between some of these differences and cultural models from social science research.", "The explosive growth of microblogs has attracted many corporations and organizations. Microblogging has been considered as a high-quality advertising platform. In this study, we attempt to reveal the patterns of advertisement propagation in Sina-Microblog through analyzing a selected set of message cascades. Each message cascade is represented by a propagation tree and 33 features were extracted, which cover mainly three aspects of a cascade: the volume of the participants, the topology of the propagation paths, and the promptness of the propagation in term of time. 
To reveal the propagation patterns, we then group these message cascades using the K-means clustering algorithm. Analysis of the resulting clusters reveals the patterns of advertisement propagation, based on which we further propose several metrics to measure the effectiveness of advertisement in microblogs." ] }
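The K-means grouping of cascade features described above can be sketched without external libraries. The three-dimensional feature vectors (volume, depth, delay) and the toy points below are assumptions for illustration; the cited work extracted 33 features per cascade.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns final centers and the cluster index of each point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, x in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centers[c])))
        # update step: move each center to the mean of its cluster
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centers, labels

# Made-up cascade features (volume, depth, delay) with two obvious groups:
# small slow cascades versus large fast ones.
feats = [(5, 2, 1.0), (6, 2, 1.2), (400, 9, 0.1), (380, 8, 0.2)]
centers, labels = kmeans(feats, k=2)
```

With two well-separated groups, the algorithm recovers them regardless of which points are drawn as initial centers.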
1212.0167
2235191131
Sina Weibo, which was launched in 2009, is the most popular Chinese micro-blogging service. It has been reported that Sina Weibo had more than 400 million registered users by the end of the third quarter of 2012. Sina Weibo and Twitter have a lot in common; however, in terms of following preference, Sina Weibo users, most of whom are Chinese, behave differently compared with those of Twitter. This work is based on a data set of Sina Weibo which contains 80.8 million users' profiles and 7.2 billion relations and a large data set of Twitter. First, some basic features of Sina Weibo and Twitter are analyzed, such as degree and activeness distribution, correlation between degree and activeness, and the degree of separation. Then the following preference is investigated by studying the assortative mixing, friend similarities, following distribution, edge balance ratio, and ranking correlation, where the edge balance ratio is newly proposed to measure the balance property of graphs. It is found that Sina Weibo has a lower reciprocity rate, more positively balanced relations, and is more disassortative. Coinciding with Asian traditional culture, the following preference of Sina Weibo users is more concentrated and hierarchical: they are more likely to follow people at higher or the same social levels and less likely to follow people lower than themselves. In contrast, the same kind of following preference is weaker in Twitter. Twitter users are open as they follow people from all levels, which accords with its global characteristic and the prevalence of western civilization. The message forwarding behavior is studied by displaying the propagation levels, delays, and critical users. The following preference derives not only from usage habits but also from underlying reasons such as personalities and social moralities that are worthy of future research.
Our work on following preference is also related to link analysis. Friend recommendations designed to help users find known, off-line contacts and discover new friends on social networking sites were studied in @cite_3 . Hopcroft proposed a machine learning model to study two-way relationship prediction in social networks @cite_26 . The structure of the spammer networks identified on Twitter was analyzed in @cite_6 , which found following preferences inside the spammers' networks: the criminal accounts tend to form a small-world network, and the criminal hubs prefer to follow criminal accounts. The link-farming strategy that spammers use was found in @cite_10 to begin with following social capitalists, who are popular and prefer to follow back anyone who connects to them.
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_6", "@cite_3" ], "mid": [ "2155186673", "2005556331", "2163898372", "2136664839" ], "abstract": [ "We study the extent to which the formation of a two-way relationship can be predicted in a dynamic social network. A two-way (called reciprocal) relationship, usually developed from a one-way (parasocial) relationship, represents a more trustful relationship between people. Understanding the formation of two-way relationships can provide us insights into the micro-level dynamics of the social network, such as what is the underlying community structure and how users influence each other. Employing Twitter as a source for our experimental data, we propose a learning framework to formulate the problem of reciprocal relationship prediction into a graphical model. The framework incorporates social theories into a machine learning model. We demonstrate that it is possible to accurately infer 90 of reciprocal relationships in a dynamic network. Our study provides strong evidence of the existence of the structural balance among reciprocal relationships. In addition, we have some interesting findings, e.g., the likelihood of two \"elite\" users creating a reciprocal relationships is nearly 8 times higher than the likelihood of two ordinary users. More importantly, our findings have potential implications such as how social structures can be inferred from individuals' behaviors.", "Recently, Twitter has emerged as a popular platform for discovering real-time information on the Web, such as news stories and people's reaction to them. Like the Web, Twitter has become a target for link farming, where users, especially spammers, try to acquire large numbers of follower links in the social network. Acquiring followers not only increases the size of a user's direct audience, but also contributes to the perceived influence of the user, which in turn impacts the ranking of the user's tweets by search engines. 
In this paper, we first investigate link farming in the Twitter network and then explore mechanisms to discourage the activity. To this end, we conducted a detailed analysis of links acquired by over 40,000 spammer accounts suspended by Twitter. We find that link farming is wide spread and that a majority of spammers' links are farmed from a small fraction of Twitter users, the social capitalists, who are themselves seeking to amass social capital and links by following back anyone who follows them. Our findings shed light on the social dynamics that are at the root of the link farming problem in Twitter network and they have important implications for future designs of link spam defenses. In particular, we show that a simple user ranking scheme that penalizes users for connecting to spammers can effectively address the problem by disincentivizing users from linking with other users simply to gain influence.", "In this paper, we perform an empirical analysis of the cyber criminal ecosystem on Twitter. Essentially, through analyzing inner social relationships in the criminal account community, we find that criminal accounts tend to be socially connected, forming a small-world network. We also find that criminal hubs, sitting in the center of the social graph, are more inclined to follow criminal accounts. Through analyzing outer social relationships between criminal accounts and their social friends outside the criminal account community, we reveal three categories of accounts that have close friendships with criminal accounts. Through these analyses, we provide a novel and effective criminal account inference algorithm by exploiting criminal accounts' social relationships and semantic coordinations.", "This paper studies people recommendations designed to help users find known, offline contacts and discover new friends on social networking sites. 
We evaluated four recommender algorithms in an enterprise social networking site using a personalized survey of 500 users and a field study of 3,000 users. We found all algorithms effective in expanding users' friend lists. Algorithms based on social network information were able to produce better-received recommendations and find more known contacts for users, while algorithms using similarity of user-created content were stronger in discovering new friends. We also collected qualitative feedback from our survey users and draw several meaningful design implications." ] }
1212.0421
2950524304
We consider a request processing system composed of organizations and their servers connected by the Internet. The latency a user observes is a sum of communication delays and the time needed to handle the request on a server. The handling time depends on the server congestion, i.e. the total number of requests a server must handle. We analyze the problem of balancing the load in a network of servers in order to minimize the total observed latency. We consider both cooperative and selfish organizations (each organization aiming to minimize the latency of the locally-produced requests). The problem can be generalized to task scheduling in a distributed cloud, or to content delivery in organizationally distributed CDNs. In a cooperative network, we show that the problem is polynomially solvable. We also present a distributed algorithm that iteratively balances the load. We show how to estimate the distance between the current solution and the optimum based on the amount of load exchanged by the algorithm. In the experimental evaluation, we show that the distributed algorithm is efficient and can therefore be used in networks with dynamically changing loads. In a network of selfish organizations, we prove that the price of anarchy (the worst-case loss of performance due to selfishness) is low when the network is homogeneous and the servers are loaded (the request handling time is high compared to the communication delay). After relaxing these assumptions, we assess the loss of performance caused by selfishness experimentally, showing that it remains low. Our results indicate that a network of servers handling requests can be efficiently managed by a distributed algorithm. Additionally, even if the network is organizationally distributed, with individual organizations optimizing the performance of their requests, the network remains efficient.
CoralCDN @cite_5 is a peer-to-peer CDN consisting of users who voluntarily devote their bandwidth and storage to redistribute content. In CoralCDN, popular content is replicated among multiple servers (which can be viewed as relaying the requests); requests for content are relayed only between servers with constrained pairwise RTTs (which ensures the proximity of the delivering server). Our mathematical model formalizes the intuitions behind the heuristics in CoralCDN.
{ "cite_N": [ "@cite_5" ], "mid": [ "2395418193" ], "abstract": [ "A network latency estimation scheme associates a short “position string” to each peer in a distributed system so that the latency between any two peers can be estimated given only their positions. Proposed applications for these schemes have included efficient overlay construction, compact routing, anonymous route selection, and efficient byzantine agreement. This paper introduces Treeple, a new scheme for latency estimation, that differs from previous schemes in several respects. First, Treeple is provably secure in a strong sense, rather than being designed only to resist known attacks. Second, Treeple “positions” are not based on Euclidean coordinates, but reflect the underlying network topology. Third, Treeple positions are highly stable, allowing peers to retain the same position information for long periods with no maintenance. Finally, Treeple positions can be assigned to peers that do not participate directly in the scheme. We evaluate Treeple on a large internet dataset (with over 200,000 measurements) and find that on average, its latency estimates are within 26% of the true round-trip time. By comparison, Vivaldi, a popular but insecure scheme, has a median relative error of 25% on the same dataset." ] }
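The idea of a distributed algorithm that iteratively balances load, as in the abstract above, can be illustrated by a toy scheme that repeatedly shifts load from the most congested server to the least congested one. This is not the paper's algorithm; the linear handling time and the damping factor are assumptions for illustration.

```python
def balance(loads, step=0.5, iters=500):
    """Toy iterative balancer: repeatedly move part of the load gap from the
    currently most congested server to the least congested one. With handling
    time linear in load, equal loads minimize the total latency."""
    loads = list(loads)
    for _ in range(iters):
        hi = max(range(len(loads)), key=loads.__getitem__)
        lo = min(range(len(loads)), key=loads.__getitem__)
        gap = loads[hi] - loads[lo]
        if gap < 1e-9:
            break  # already balanced
        delta = step * gap / 2  # damped step keeps the iteration stable
        loads[hi] -= delta
        loads[lo] += delta
    return loads

balanced = balance([10.0, 0.0, 2.0])  # total load 12 spread over 3 servers
```

Each exchange strictly reduces the spread of the loads, so the iteration converges toward the uniform assignment while conserving the total load, mirroring the paper's point that the amount of exchanged load bounds the distance to the optimum.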
1212.0421
2950524304
We consider a request processing system composed of organizations and their servers connected by the Internet. The latency a user observes is a sum of communication delays and the time needed to handle the request on a server. The handling time depends on the server congestion, i.e. the total number of requests a server must handle. We analyze the problem of balancing the load in a network of servers in order to minimize the total observed latency. We consider both cooperative and selfish organizations (each organization aiming to minimize the latency of the locally-produced requests). The problem can be generalized to task scheduling in a distributed cloud, or to content delivery in organizationally distributed CDNs. In a cooperative network, we show that the problem is polynomially solvable. We also present a distributed algorithm that iteratively balances the load. We show how to estimate the distance between the current solution and the optimum based on the amount of load exchanged by the algorithm. In the experimental evaluation, we show that the distributed algorithm is efficient and can therefore be used in networks with dynamically changing loads. In a network of selfish organizations, we prove that the price of anarchy (the worst-case loss of performance due to selfishness) is low when the network is homogeneous and the servers are loaded (the request handling time is high compared to the communication delay). After relaxing these assumptions, we assess the loss of performance caused by selfishness experimentally, showing that it remains low. Our results indicate that a network of servers handling requests can be efficiently managed by a distributed algorithm. Additionally, even if the network is organizationally distributed, with individual organizations optimizing the performance of their requests, the network remains efficient.
@cite_21 presents a CDN based on a DHT and heuristic algorithms that minimize the total processing time. Although each server has fixed constraints on its load, bandwidth, and storage capacity, the paper does not consider the relation between a server's load and its performance degradation. The evaluation is based on simulation; no theoretical results are included.
{ "cite_N": [ "@cite_21" ], "mid": [ "1496814404" ], "abstract": [ "In this paper, we propose the dissemination tree, a dynamic content distribution system built on top of a peer-to-peer location service. We present a replica placement protocol that builds the tree while meeting QoS and server capacity constraints. The number of replicas as well as the delay and bandwidth consumption for update propagation are significantly reduced. Simulation results show that the dissemination tree has close to the optimal number of replicas, good load distribution, small delay and bandwidth penalties for update multicast compared with the ideal case: static replica placement on IP multicast." ] }
1212.0693
2044054721
From its early beginnings onwards mankind has put to test many different society forms, and this fact raises a complex of interesting questions. The objective of this paper is to present a general population model which takes essential features of any society into account and which gives interesting answers on the basis of only two natural hypotheses. One is that societies want to survive, the second one that individuals in a society would in general like to increase their standard of living. We start by presenting a mathematical model which may be seen as a particular type of a controlled branching process. All conditions of the model are justified and interpreted. After several preliminary results about societies in general we can show that two society forms should attract particular attention, both from a qualitative and a quantitative point of view. These are the so-called weakest-first society and the strongest-first society. In particular we prove then that these two societies stand out since they form an envelope of all possible societies in a sense we will make precise. This result (the Envelopment Theorem) is seen as significant because it is paralleled with precise survival criteria for the enveloping societies. Moreover, given that one of the "limiting" societies can be seen as an extreme form of communism, and the other one as being close to an extreme version of capitalism, we conclude that, remarkably, humanity is close to having already tested the limits.
Early work on controlled BPs confined interest to control through bounds imposed on the growth of Galton-Watson-type processes. @cite_7 , @cite_4 , and others modified the number of individuals which are allowed to reproduce in each generation by corresponding deterministic functions. @cite_1 considered a Galton-Watson process (GWP) with a non-specified absorbing process for which only the expected influence is known.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_7" ], "mid": [ "2004799869", "2006550558", "2034335161" ], "abstract": [ "A discrete Galton-Watson process is modified by an absorbing process, which, within each generation, eliminates a subset of the living particles without leaving offspring. The absorbing process will be only roughly specified by its expected efficiency on the associated process. We give a sharp sufficient condition for the joint process to be extinguished with probability one and give, after an easy generalization, an example for possible biomedical applications.", "A branching process with an absorbing lower barrier is considered. This is a Galton-Watson process with the condition that at any generation the number of individuals is greater than a lower barrier or it is equal to zero (i.e. all individuals in populations which are too small die and have no offspring). A necessary and sufficient condition is given for the process to become extinct with probability one. At the end of the paper there are three illustrating examples.", "" ] }
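The kind of control described above, bounding the number of individuals allowed to reproduce in each generation, can be simulated directly. The particular offspring distribution and cap below are arbitrary choices for illustration, not parameters from the cited works.

```python
import random

def controlled_gw(n0, offspring_probs, cap, generations, seed=1):
    """Galton-Watson process in which at most `cap` individuals reproduce in
    each generation; offspring_probs[k] is the probability of k children."""
    rng = random.Random(seed)
    support = list(range(len(offspring_probs)))
    z, sizes = n0, [n0]
    for _ in range(generations):
        parents = min(z, cap)  # the deterministic control function
        z = sum(rng.choices(support, weights=offspring_probs)[0]
                for _ in range(parents))
        sizes.append(z)
    return sizes

# Supercritical offspring law (mean 1.3) kept in check by the cap of 10:
# the population can never exceed cap * (max offspring number) = 20.
trajectory = controlled_gw(n0=3, offspring_probs=[0.2, 0.3, 0.5], cap=10,
                           generations=50)
```

Without the cap this law grows exponentially; with it, the trajectory fluctuates below a hard ceiling, which is exactly the effect of the deterministic control functions in the early literature.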
1212.0693
2044054721
From its early beginnings onwards mankind has put to test many different society forms, and this fact raises a complex of interesting questions. The objective of this paper is to present a general population model which takes essential features of any society into account and which gives interesting answers on the basis of only two natural hypotheses. One is that societies want to survive, the second one that individuals in a society would in general like to increase their standard of living. We start by presenting a mathematical model which may be seen as a particular type of a controlled branching process. All conditions of the model are justified and interpreted. After several preliminary results about societies in general we can show that two society forms should attract particular attention, both from a qualitative and a quantitative point of view. These are the so-called weakest-first society and the strongest-first society. In particular we prove then that these two societies stand out since they form an envelope of all possible societies in a sense we will make precise. This result (the Envelopment Theorem) is seen as significant because it is paralleled with precise survival criteria for the enveloping societies. Moreover, given that one of the "limiting" societies can be seen as an extreme form of communism, and the other one as being close to an extreme version of capitalism, we conclude that, remarkably, humanity is close to having already tested the limits.
Population-size dependence is another interesting approach to control in BP models; such processes were studied by @cite_21 and @cite_10 . @cite_13 proposed a special class of controlled BPs involving a different notion of "resources". Motivated by applications in marketing, the objective there is to control independent subpopulations (multi-type model) in such a way that they grow as quickly as possible. Relative frequencies of types were studied in @cite_2 .
{ "cite_N": [ "@cite_13", "@cite_21", "@cite_10", "@cite_2" ], "mid": [ "1603487770", "1986085340", "2071802039", "1965007131" ], "abstract": [ "We propose and analyze a new class of controlled multi-type branching processes with a per-step linear resource constraint, motivated by potential applications in viral marketing and cancer treatment. We show that the optimal exponential growth rate of the population can be achieved by maintaining a fixed proportion among the species, for both deterministic and stochastic branching processes. In the special case of a two-type population and with a symmetric reward structure, the optimal proportion is obtained in closed-form. In addition to revealing structural properties of controlled branching processes, our results are intended to provide the practitioners with an easy-to-interpret benchmark for best practices, if not exact policies. As a proof of concept, the methodology is applied to the linkage structure of the 2004 US Presidential Election blogosphere, where the optimal growth rate demonstrates sizable gains over a uniform selection strategy, and to a two-compartment cell-cycle kinetics model for cancer growth, with realistic parameters, where the robust estimate for minimal treatment intensity under a worst-case growth rate is noticeably more conservative compared to that obtained using more optimistic assumptions.", "", "This paper studies the limit behaviour of (Zn - an)/bn, where Zn is a real-valued temporally homogeneous Markov chain, and an and bn are some constants; the results are then applied to a general population model. In such a model Zn represents the nth generation population size and is defined as Zn = X(n,1) + ··· + X(n,Zn-1), where the X(n,i) are the offspring variables of the (n-1)th generation which are assumed to depend on n, i and Zn-1, whereas the classical conditional independence of the X(n,i) given Zn-1 is superseded by milder assumptions. Some necessary and sufficient conditions for Zn/bn to converge a.s.
are derived, and some results on the robustness of the asymptotic behaviour of the Galton-Watson process are obtained when offspring independence is relaxed", "This paper considers the relative frequencies of distinct types of individuals in multitype branching processes. We prove that the frequencies are asymptotically multivariate normal when the initial number of ancestors is large and the time of observation is fixed. The result is valid for any branching process with a finite number of types; the only assumption required is that of independent individual evolutions. The problem under consideration is motivated by applications in the area of cell biology. Specifically, the reported limiting results are of advantage in cell kinetics studies where the relative frequencies but not the absolute cell counts are accessible to measurement. Relevant statistical applications are discussed in the context of asymptotic maximum likelihood inference for multitype branching processes." ] }
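Population-size dependence as discussed above can be illustrated by letting the offspring mean depend on the current population size. The Poisson offspring law and the crowding-style mean function below are illustrative assumptions, not the models of the cited works.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling a Poisson(lam) variate."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def size_dependent_gw(n0, mean_fn, generations, seed=2):
    """Branching process in which each of the z current individuals has
    Poisson(mean_fn(z)) offspring, so reproduction depends on the size."""
    rng = random.Random(seed)
    z, sizes = n0, [n0]
    for _ in range(generations):
        if z == 0:
            break  # extinction is absorbing
        z = sum(poisson(mean_fn(z), rng) for _ in range(z))
        sizes.append(z)
    return sizes

# Crowding: the mean offspring number falls below 1 once the population is
# large, so the process hovers around the level where mean_fn(z) = 1.
path = size_dependent_gw(n0=5, mean_fn=lambda z: 2.0 / (1.0 + z / 50.0),
                         generations=100)
```

Choosing mean_fn constant recovers the classical Galton-Watson process, which makes the size-dependent case a strict generalization.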
1212.0693
2044054721
From its early beginnings onwards mankind has put to test many different society forms, and this fact raises a complex of interesting questions. The objective of this paper is to present a general population model which takes essential features of any society into account and which gives interesting answers on the basis of only two natural hypotheses. One is that societies want to survive, the second one that individuals in a society would in general like to increase their standard of living. We start by presenting a mathematical model which may be seen as a particular type of a controlled branching process. All conditions of the model are justified and interpreted. After several preliminary results about societies in general we can show that two society forms should attract particular attention, both from a qualitative and a quantitative point of view. These are the so-called weakest-first society and the strongest-first society. In particular we prove then that these two societies stand out since they form an envelope of all possible societies in a sense we will make precise. This result (the Envelopment Theorem) is seen as significant because it is paralleled with precise survival criteria for the enveloping societies. Moreover, given that one of the "limiting" societies can be seen as an extreme form of communism, and the other one as being close to an extreme version of capitalism, we conclude that, remarkably, humanity is close to having already tested the limits.
The model presented in this paper is neither a BP with varying environment (see e.g. @cite_16 ) nor a BP with random environment. See @cite_17 for a clear analysis of the connection between these two types, and e.g. @cite_24 for newer developments. Our model is neither a multi-type BP model nor a purely population-size-dependent model. It is a Markov process, as we shall see, but neither a phase-type Markov model nor a decomposable BP (see @cite_20 ) can capture the control we have in mind.
{ "cite_N": [ "@cite_24", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "1523572153", "1975846420", "2083498134", "2036758357" ], "abstract": [ "Biology takes a special place among the other natural sciences because biological units, be they pieces of DNA, cells, or organisms, reproduce more or less faithfully. Like any other biological process, reproduction has a large random component. The theory of branching processes was developed especially as a mathematical counterpart to this most fundamental of biological processes. This active and rich research area allows us to determine extinction risks and predict the development of population composition, and also uncover aspects of a population's history from its current genetic composition. Branching processes play an increasingly important role in models of genetics, molecular biology, microbiology, ecology, and evolutionary theory. This book presents this body of mathematical ideas for a biological audience, but should also be enjoyable to mathematicians, if only for its rich stock of biological examples. It can be read by anyone with a basic command of calculus, matrix algebra, and probability theory. More advanced results from basic probability theory are treated in a special appendix.", "Let Zn be a branching process whose offspring distributions vary with n. It is shown that the sequence max_{i ≥ 0} P(Zn = i) has a limit. Denote this limit by M. It turns out that M is positive only if the offspring variables rapidly approach constants. Let cn be a sequence of constants and Wn = Zn / cn. It will be proven that M = 0 is necessary and sufficient for the limit distribution functions of all convergent Wn to be continuous on (0, ∞). If M > 0 there is, up to an equivalence, only one sequence cn such that Wn has a limit distribution with jump points in (0, ∞). 
Necessary and sufficient conditions for continuity of limit distributions are derived in terms of the offspring distributions of Zn", "We focus on supercritical decomposable (reducible) multitype branching processes. Types are partitioned into irreducible equivalence classes. In this context, extinction of some classes is possible without the whole process becoming extinct. We derive criteria for the almost-sure extinction of the whole process, as well as of a specific class, conditionally given the class of the initial particle. We give sufficient conditions under which the extinction of a class implies the extinction of another class or of the whole process. Finally, we show that the extinction probability of a specific class is the minimal nonnegative solution of the usual extinction equation but with added constraints. © Applied Probability Trust 2012.", "" ] }
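The record above frames its population model against classical branching-process theory. As a purely illustrative sketch (not the controlled model of the cited paper; the function name and offspring distributions are assumptions for illustration), a plain Galton-Watson process can be simulated in a few lines:

```python
import random

def simulate_gw(offspring_sampler, generations, z0=1, rng=None):
    """Simulate one trajectory of a Galton-Watson branching process.

    offspring_sampler(rng) draws the number of children of one individual.
    Returns [Z_0, Z_1, ..., Z_generations], the population sizes.
    """
    rng = rng or random.Random(0)
    sizes = [z0]
    for _ in range(generations):
        children = sum(offspring_sampler(rng) for _ in range(sizes[-1]))
        sizes.append(children)
        if children == 0:  # extinction is absorbing: pad with zeros
            sizes.extend([0] * (generations - len(sizes) + 1))
            break
    return sizes

# Deterministic sanity check: two children each -> population doubles.
print(simulate_gw(lambda rng: 2, generations=5))  # -> [1, 2, 4, 8, 16, 32]
# A random subcritical example (mean offspring 0.5, extinction certain a.s.).
print(simulate_gw(lambda rng: rng.choice([0, 1]), generations=20))
```

The controlled processes of the paper differ in that the offspring law may depend on the current state and on a control; the sketch only shows the uncontrolled baseline.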
1211.6799
2149735540
We present the design of a new social bookmark manager, named GalViz, as part of the interface of the GiveALink system. Unlike the interfaces of traditional social tagging tools, which usually display information in a list view, GalViz visualizes tags, resources, social links, and social context in an interactive network, combined with the tag cloud. Evaluations through a scenario case study and log analysis provide evidence of the effectiveness of our design.
The literature on the design principles of social tagging tools is sparse @cite_18 @cite_21 @cite_2 . Some work has been done on building hierarchical structures from social tags to uncover hidden child-parent semantics @cite_6 @cite_14 . Visualization, as a powerful kind of social bookmarking tool, has been studied and employed in several existing designs. Visualization of hyperlinks between Web pages was adopted to enhance adaptive navigation @cite_8 . Cluster Map, a social bookmark visualization tool, highlighted the relationships among users and bookmarks to identify tag and community structures @cite_4 . Unlike Cluster Map, GalViz is designed to emphasize the semantic relationships between tags and resources, helping users manage existing resources and discover new ones. In addition, a graphical interface was shown to be useful for distributed collaboration and interaction around social bookmarking @cite_13 . Graphical visualization of concept networks has been integrated into several Web applications as an innovative interactive user interface @cite_17 @cite_11 . Most such existing applications display the context of items of a single type, whereas GalViz can visualize the heterogeneous network between two different kinds of objects, tags and resources.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_11", "@cite_8", "@cite_21", "@cite_6", "@cite_2", "@cite_13", "@cite_17" ], "mid": [ "105197999", "", "1560880555", "", "2064173066", "", "2167429226", "", "1975896036", "" ], "abstract": [ "With the emergence of social tagging systems and the possibility for users to extensively annotate web resources and any content, enormous amounts of unordered information and user-generated metadata circulate the Web. Accordingly a viable visualisation form needs to integrate this unclassified content into meaningful visual representations. We argue that tag clouds can make the grade. We assume that the application of clustering techniques for arranging tags can be a useful method to generate meaningful units within a tag cloud. We think that clustered tag clouds can potentially help to enhance user performance. In this paper we present a description of tag clouds including a theoretical discourse on the strengths and weaknesses of using them in common Web-based contexts. Furthermore, recent methods of semantic clustering for visualizing tag clouds are reviewed. Findings from user studies that investigated the visual perception of differently arranged depictions of tags follow. The main objective consists in the exploration of characteristic aspects of perceptual phenomena and cognitive processes during the interaction with a tag cloud. This clears the way for useful implications on the constitution and design factors of that visualisation form. Finally a new approach is proposed in order to further develop this concept.", "", "Social bookmarking tools are very popular nowadays. In most tools, users tag the bookmarks to describe them. Therefore, it is often hard for users to discover implicit structures between tags, users and bookmarks. 
We think that this is essential for both end users to discover new bookmarks that could be of interest to them, and for researchers who want to study how people use social information retrieval tools. In this work, a cluster map visualisation technique is customized to enable users to explore social bookmarks in the del.icio.us and the CALIBRATE system. The design of our visualisation aims to automatically identify tag and community structures, and visualises these structures in order to increase the users' awareness of them.", "", "Collaborative tagging applications allow Internet users to annotate resources with personalized tags. The complex network created by many annotations, often called a folksonomy, permits users the freedom to explore tags, resources or even other users' profiles unbound from a rigid predefined conceptual hierarchy. However, the freedom afforded users comes at a cost: an uncontrolled vocabulary can result in tag redundancy and ambiguity hindering navigation. Data mining techniques, such as clustering, provide a means to remedy these problems by identifying trends and reducing noise. Tag clusters can also be used as the basis for effective personalized recommendation assisting users in navigation. We present a personalization algorithm for recommendation in folksonomies which relies on hierarchical tag clusters. Our basic recommendation framework is independent of the clustering method, but we use a context-dependent variant of hierarchical agglomerative clustering which takes into account the user's current navigation context in cluster selection. We present extensive experimental results on two real-world datasets. While the personalization algorithm is successful in both cases, our results suggest that folksonomies encompassing only one topic domain, rather than many topics, present an easier target for recommendation, perhaps because they are more focused and often less sparse. 
Furthermore, context dependent cluster selection, an integral step in our personalization algorithm, demonstrates more utility for recommendation in multi-topic folksonomies than in single-topic folksonomies. This observation suggests that topic selection is an important strategy for recommendation in multi-topic folksonomies.", "", "In this paper we discuss the use of clustering techniques to enhance the user experience and thus the success of collaborative tagging services. We show that clustering techniques can improve the user experience of current tagging services. We first describe current limitations of tagging services, second, we give an overview of existing approaches. We then describe the algorithms we used for tag clustering and give experimental results. Finally, we explore the use of several techniques to identify semantically related tags.", "", "In this paper, our aim is to facilitate synchronous and co-present interaction with social bookmarking systems for groups of related users meeting to discuss and share their collections of tags and bookmarks. Our work results in a system called Orchis that proposes a graphical user interface based on cooperative visualization and interaction as an alternative graphical user interface for social bookmarking systems. Orchis presents three major characteristics: (1) graphical overviews of collections of annotated bookmarks and tags, (2) advanced drag-and-drop interaction styles adaptable to distributed display environments and (3) support for distributed architectures possibly running different windowing systems. Our hypothesis is that by using Orchis, related users will be able to better compare and share tags and bookmarks. They will also be able to build cooperatively valuable shared collections. We expect that, in turn, this will participate in improving the overall quality of both folksonomies and social bookmarking collections.", "" ] }
1211.6799
2149735540
We present the design of a new social bookmark manager, named GalViz, as part of the interface of the GiveALink system. Unlike the interfaces of traditional social tagging tools, which usually display information in a list view, GalViz visualizes tags, resources, social links, and social context in an interactive network, combined with the tag cloud. Evaluations through a scenario case study and log analysis provide evidence of the effectiveness of our design.
GiveALink.org, a research-oriented social tagging platform, broadly examines several aspects of social tagging to foster the construction and application of socially driven semantic annotation networks. Previous research includes the design of effective similarity relationships @cite_3 , social spam detection @cite_5 , and social tagging games as an incentive for collecting high-quality annotations @cite_7 . Earlier work in GiveALink on exploratory navigation interfaces @cite_19 and bookmark management @cite_9 has significantly influenced the new design presented in this paper.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_19", "@cite_5" ], "mid": [ "", "2050048591", "2152019382", "2049019691", "2128509431" ], "abstract": [ "", "Researchers are exploring the use of folksonomies, such as in social bookmarking systems, to build implicit links between online resources. Users create and reinforce links between resources through applying a common tag to those resources. The effectiveness of using such community-driven annotation depends on user participation to provide the critical information. However, the participation of many users is motivated by selfish reasons. An effective way to encourage these users is to create useful or entertaining applications. We demo two such tools -- a browser extension for bookmark management and navigation and a game.", "Social bookmarking systems are becoming increasingly important data sources for bootstrapping and maintaining Semantic Web applications. Their emergent information structures have become known as folksonomies. A key question for harvesting semantics from these systems is how to extend and adapt traditional notions of similarity to folksonomies, and which measures are best suited for applications such as community detection, navigation support, semantic search, user profiling and ontology learning. Here we build an evaluation framework to compare various general folksonomy-based similarity measures, which are derived from several established information-theoretic, statistical, and practical measures. Our framework deals generally and symmetrically with users, tags, and resources. For evaluation purposes we focus on similarity between tags and between resources and consider different methods to aggregate annotations across users. After comparing the ability of several tag similarity measures to predict user-created tag relations, we provide an external grounding by user-validated semantic proxies based on WordNet and the Open Directory Project. 
We also investigate the issue of scalability. We find that mutual information with distributional micro-aggregation across users yields the highest accuracy, but is not scalable; per-user projection with collaborative aggregation provides the best scalable approach via incremental computations. The results are consistent across resource and tag similarity.", "The visualization of results is a critical component in search engines, and the standard ranked list interface has been a consistently predominant model. The emergence of social media provides a new opportunity to investigate visualization techniques that expose socially derived links between objects to support their exploration. Here we introduce and evaluate network-based visualizations for facilitating the exploration of a Web knowledge space. We developed a force directed network interface to visualize the result sets provided by GiveALink.org, a social bookmarking site. The classifications and tags by users are aggregated to build a social similarity network between bookmarked resources. We administered a user study to evaluate the potential of leveraging such social links in an exploratory search task. During exploration, the similarity links are used to arrange the resources in a semantic layout. Users in our study prefer a hybrid interface combining a conventional ranked list and a two dimensional network map, allowing them to find the same amount of relevant information using fewer queries. This behavior is a direct result of the additional structural information present in the network visualization, which aids them in the exploration of the information space.", "The popularity of social bookmarking sites has made them prime targets for spammers. Many of these systems require an administrator's time and energy to manually filter or remove spam. Here we discuss the motivations of social spam, and present a study of automatic detection of spammers in a social tagging system. 
We identify and analyze six distinct features that address various properties of social spam, finding that each of these features provides a helpful signal to discriminate spammers from legitimate users. These features are then used in various machine learning algorithms for classification, achieving over 98% accuracy in detecting social spammers with 2% false positives. These promising results provide a new baseline for future efforts on social spam. We make our dataset publicly available to the research community." ] }
1211.6496
1493746248
This paper introduces TwitterPaul, a system designed to make use of Social Media data to help to predict game outcomes for the 2010 FIFA World Cup tournament. To this end, we extracted over 538K mentions of football games from a large sample of tweets that occurred during the World Cup, and we classified them into different types with a precision of up to 88%. The different mentions were aggregated in order to make predictions about the outcomes of the actual games. We attempt to learn which Twitter users are accurate predictors and explore several techniques in order to exploit this information to make more accurate predictions. We compare our results to strong baselines and against the betting line (prediction market) and find that the quality of extractions is more important than the quantity, suggesting that high-precision methods working on a medium-sized dataset are preferable over low-precision methods that use a larger amount of data. Finally, by aggregating some classes of predictions, the system performance is close to that of the betting line. Furthermore, we believe that this domain-independent framework can help to predict other sports, elections, product release dates and other future events that people talk about in social media.
The huge growth in user-generated content in recent years has led to a number of papers that employ social media information to make predictions about future events @cite_13 . The contents of social media provide a mechanism to discover social structure, analyze action patterns qualitatively and quantitatively, and sometimes even to predict future human-related events.
{ "cite_N": [ "@cite_13" ], "mid": [ "1577053131" ], "abstract": [ "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions." ] }
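The TwitterPaul record describes aggregating many per-user predictions while learning which users are accurate. Its exact aggregation method is not specified here; the following is only an illustrative weighted-vote sketch, and all user names, team codes, and accuracy scores are made-up placeholders:

```python
from collections import defaultdict

def aggregate_predictions(predictions, user_accuracy, default_weight=0.5):
    """Aggregate per-user game predictions by weighted voting: each user's
    vote counts proportionally to an estimate of that user's historical
    accuracy (unknown users get a neutral default weight).

    predictions: list of (user, predicted_outcome) pairs.
    Returns the outcome with the largest total weight.
    """
    scores = defaultdict(float)
    for user, outcome in predictions:
        scores[outcome] += user_accuracy.get(user, default_weight)
    return max(scores, key=scores.get)

votes = [("alice", "ESP"), ("bob", "NED"), ("carol", "ESP"), ("dave", "NED")]
accuracy = {"alice": 0.9, "bob": 0.4, "carol": 0.7, "dave": 0.4}
print(aggregate_predictions(votes, accuracy))  # -> ESP (weight 1.6 vs 0.8)
```

This illustrates why, as the abstract argues, extraction precision matters more than volume: noisy mentions enter the vote with full weight unless down-weighted.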
1211.6496
1493746248
This paper introduces TwitterPaul, a system designed to make use of Social Media data to help to predict game outcomes for the 2010 FIFA World Cup tournament. To this end, we extracted over 538K mentions of football games from a large sample of tweets that occurred during the World Cup, and we classified them into different types with a precision of up to 88%. The different mentions were aggregated in order to make predictions about the outcomes of the actual games. We attempt to learn which Twitter users are accurate predictors and explore several techniques in order to exploit this information to make more accurate predictions. We compare our results to strong baselines and against the betting line (prediction market) and find that the quality of extractions is more important than the quantity, suggesting that high-precision methods working on a medium-sized dataset are preferable over low-precision methods that use a larger amount of data. Finally, by aggregating some classes of predictions, the system performance is close to that of the betting line. Furthermore, we believe that this domain-independent framework can help to predict other sports, elections, product release dates and other future events that people talk about in social media.
In a different stream of work, @cite_11 build company-specific summaries from a collection of financial news in order to provide information for short-term stock trading. That work focuses on high-quality sentence retrieval rather than on identifying and aggregating large quantities of low-quality predictions.
{ "cite_N": [ "@cite_11" ], "mid": [ "2080721858" ], "abstract": [ "The paper presents a multi-document summarization system which builds company-specific summaries from a collection of financial news such that the extracted sentences contain novel and relevant information about the corresponding organization. The user's familiarity with the company's profile is assumed. The goal of such summaries is to provide information useful for the short-term trading of the corresponding company, i.e., to facilitate the inference from news to stock price movement on the next day. We introduce a novel query (i.e., company name) expansion method and a simple unsupervised algorithm for sentence ranking. The system shows promising results in comparison with a competitive baseline." ] }
1211.6353
2057956432
Weighted voting games are frequently used in decision making. Each voter has a weight and a proposal is accepted if the weight sum of the supporting voters exceeds a quota. One line of research is the efficient computation of so-called power indices measuring the influence of a voter. We treat the inverse problem: Given an influence vector and a power index, determine a weighted voting game such that the distribution of influence among the voters is as close as possible to the given target value. We present exact algorithms and computational results for the Shapley-Shubik and the (normalized) Banzhaf power index.
There is a vast literature on how to compute the Shapley-Shubik index and other power indices in various circumstances, either exactly or approximately. So far, few results are known about the inverse power index problem. Leech proposes a certain kind of fixed-point algorithm and reports that it works considerably well in practice whenever the number of voters @math is not too small @cite_10 @cite_4 ; see also @cite_7 . The author argues that for large @math one may assume that the power index depends smoothly on the voting weights, so that Brouwer's fixed point theorem can be applied to establish the convergence of this approach. Another heuristic is described in @cite_23 .
{ "cite_N": [ "@cite_23", "@cite_10", "@cite_7", "@cite_4" ], "mid": [ "2170620672", "2172146192", "", "1495267737" ], "abstract": [ "Coalition formation is the process of bringing together two or more agents so as to achieve goals that individuals on their own cannot, or to achieve them more efficiently. Typically, in such situations, the agents have conflicting preferences over the set of possible joint goals. Thus, before the agents realize the benefits of cooperation, they must find a way of resolving these conflicts and reaching a consensus. In this context, cooperative game theory offers the voting game as a mechanism for agents to reach a consensus. It also offers the Shapley value as a way of measuring the influence or power a player has in determining the outcome of a voting game. Given this, the designer of a voting game wants to construct a game such that a player's Shapley value is equal to some desired value. This is called the inverse Shapley value problem. Solving this problem is necessary, for instance, to ensure fairness in the players' voting powers. However, from a computational perspective, finding a player's Shapley value for a given game is #P-complete. Consequently, the problem of verifying that a voting game does indeed yield the required powers to the agents is also #P-complete. Therefore, in order to overcome this problem we present a computationally efficient approximation algorithm for solving the inverse problem. This method is based on the technique of 'successive approximations'; it starts with some initial approximate solution and iteratively updates it such that after each iteration, the approximate gets closer to the required solution. This is an anytime algorithm and has time complexity polynomial in the number of players. We also analyze the performance of this method in terms of its approximation error and the rate of convergence of an initial solution to the required one. 
Specifically, we show that the former decreases after each iteration, and that the latter increases with the number of players and also with the initial approximation error.", "This paper examines the system of Qualified Majority Voting, used by the Council of the European Union, from the perspective of enlargement of the Union. It uses an approach based on power indices due to Penrose (1946), Banzhaf (1965) and Coleman (1971) to make two analyses: (1) the question of the voting power of member countries from the point of view of fairness, and (2) the question of how the threshold number of votes required for QMV should be determined. It studies two scenarios for change from 2005 onwards envisaged by the Nice Treaty: (1) no enlargement, the EU comprising 15 member countries, and (2) full enlargement to 27 members by the accession of all the present twelve candidates. The proposal is made that fair weights be determined algorithmically as a technical or routine matter as the membership changes. The analysis of how the threshold affects power shows the trade-offs that countries face between their blocking power and the power of the Council to act. The main findings are: (1) that the weights laid down in the Nice Treaty are close to being fair, the only significant discrepancies being the under-representation of Germany and Romania, and the over-representation of Spain and Poland; (2) the threshold required for a decision is set too high for the Council to be an effective decision making body.", "", "Lecture Notes prepared for Summer School, “EU Decision Making : Assessment and Design of Procedures”, San Sebastian, Spain, July 8-11, 2002." ] }
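For small games, the normalized Banzhaf index discussed in this record can be computed directly by enumerating all coalitions and counting swings (a voter is critical in a winning coalition if removing it makes the coalition lose). A minimal, illustrative sketch, exponential in the number of voters:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index of a weighted voting game by brute-force
    coalition enumeration. Only practical for small numbers of voters."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total < quota:
                continue  # losing coalition: no swings here
            for i in coalition:
                if total - weights[i] < quota:  # i is critical
                    swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings] if s else [0.0] * n

# Example: weights (3, 2, 1) with quota 4.
print(banzhaf([3, 2, 1], 4))  # -> [0.6, 0.2, 0.2]
```

This brute-force count is exactly what makes the exact computation #P-complete in general, as the cited abstract notes, and motivates the approximation algorithms discussed there.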
1211.6353
2057956432
Weighted voting games are frequently used in decision making. Each voter has a weight and a proposal is accepted if the weight sum of the supporting voters exceeds a quota. One line of research is the efficient computation of so-called power indices measuring the influence of a voter. We treat the inverse problem: Given an influence vector and a power index, determine a weighted voting game such that the distribution of influence among the voters is as close as possible to the given target value. We present exact algorithms and computational results for the Shapley-Shubik and the (normalized) Banzhaf power index.
Besides using the finiteness of the set of weighted voting games, the first general bounds showing that some power distributions cannot be approximated too closely by Banzhaf vectors are given by Alon and Edelman @cite_8 .
{ "cite_N": [ "@cite_8" ], "mid": [ "2043836864" ], "abstract": [ "Let F be a family of subsets of the ground set [n] = {1, 2, ..., n}. For each i in [n] we let p(F, i) be the number of pairs of subsets that differ in the element i and exactly one of them is in F. We interpret p(F, i) as the influence of that element. The normalized Banzhaf vector of F, denoted B(F), is the vector (B(F, 1), ..., B(F, n)), where B(F, i) = p(F, i) / p(F) and p(F) is the sum of all p(F, i). The Banzhaf vector has been studied in the context of measuring voting power in voting games as well as in Boolean circuit theory. In this paper we investigate which non-negative vectors of sum 1 can be closely approximated by Banzhaf vectors of simple voting games. In particular, we show that if a vector has most of its weight concentrated in k < n coordinates, then it must be essentially the Banzhaf vector of some simple voting game with n − k dummy voters." ] }
1211.6353
2057956432
Weighted voting games are frequently used in decision making. Each voter has a weight and a proposal is accepted if the weight sum of the supporting voters exceeds a quota. One line of research is the efficient computation of so-called power indices measuring the influence of a voter. We treat the inverse problem: Given an influence vector and a power index, determine a weighted voting game such that the distribution of influence among the voters is as close as possible to the given target value. We present exact algorithms and computational results for the Shapley-Shubik and the (normalized) Banzhaf power index.
Since for each number of voters @math there is only a finite set of weighted voting games, one may in principle solve the inverse power index problem by looping over the whole set. The enumeration of weighted voting games dates back to at least 1962 @cite_11 , where up to @math voters are treated. For @math voters we refer e.g. to @cite_24 @cite_3 @cite_2 . Bart de Keijzer presents a promising graded poset for weighted voting games in his master's thesis @cite_13 ; see also @cite_12 . We would like to remark that the counts for weighted voting games with @math voters are stated incorrectly in @cite_13 , but the methods should work. @math voters were successfully treated in @cite_14 @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_3", "@cite_24", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "", "2032670288", "2041273777", "1609370892", "2009037541", "", "2151865687", "2025494402" ], "abstract": [ "", "A proposal in a weighted voting game is accepted if the sum of the (non-negative) weights of the “yea” voters is at least as large as a given quota. Several authors have considered representations of weighted voting games with minimum sum, where the weights and the quota are restricted to be integers. In Freixas and Molinero (Ann. Oper. Res. 166:243–260, 2009) the authors have classified all weighted voting games without a unique minimum sum representation for up to 8 voters. Here we exhaustively classify all weighted voting games consisting of 9 voters which do not admit a unique minimum sum integer weight representation.", "The number of threshold functions of eight variables is counted by ILLIAC II, the computer of the University of Illinois. Sets of optimum weights of majority elements realizing these functions also are investigated. Actually, canonical positive self-dual threshold functions of nine variables are investigated instead of directly investigating threshold functions of eight variables because it is easier to deal with them. The number and optimum weights of threshold functions of eight variables are easily obtained from these functions of nine variables and their realization.", "", "In this paper we deal with several classes of simple games; the first class is the one of ordered simple games (i.e. they admit of a complete desirability relation). The second class consists of all zero-sum games in the first one.", "", "In many multiagent settings, situations arise in which agents must collectively make decisions while not every agent is supposed to have an equal amount of influence in the outcome of such a decision. Weighted voting games are often used to deal with these situations. 
The amount of influence that an agent has in a weighted voting game can be measured by means of various power indices. This paper studies the problem of finding a weighted voting game in which the distribution of the influence among the agents is as close as possible to a given target value. We propose a method to exactly solve this problem. This method relies on a new efficient procedure for enumerating weighted voting games of a fixed number of agents. The enumeration algorithm we propose works by exploiting the properties of a specific partial order over the class of weighted voting games. The algorithm enumerates weighted voting games of a fixed number of agents in time exponential in the number of agents, and polynomial in the number of games output. As a consequence we obtain an exact anytime algorithm for designing weighted voting games.", "This paper defines the canonical representative of each equivalence class in the classification of the majority decision functions by complementing and permuting variables and by complementing the output. Also, a method is proposed to obtain all the representatives with their optimum structures, and a table of the representatives of the majority decision functions of up to six variables is provided. The reader should be familiar with the content of a previous paper by the authors, included as reference [1]." ] }
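The loop-over-all-games strategy mentioned in this record can be sketched for toy instances by searching over small integer-weight representations instead of enumerating canonical games. This is a naive illustration, not the exact enumeration algorithms of the cited works; the weight bound and distance measure are assumptions:

```python
from itertools import combinations, product

def banzhaf(weights, quota):
    """Normalized Banzhaf index by brute-force coalition enumeration."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total < quota:
                continue
            for i in coalition:
                if total - weights[i] < quota:
                    swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings] if s else [0.0] * n

def inverse_banzhaf(target, max_weight):
    """Brute-force inverse problem: find the integer-weight game whose
    normalized Banzhaf vector is closest (squared Euclidean distance)
    to the target power distribution."""
    n = len(target)
    best = None
    for weights in product(range(max_weight + 1), repeat=n):
        total_weight = sum(weights)
        for quota in range(1, total_weight + 1):
            b = banzhaf(weights, quota)
            d = sum((x - y) ** 2 for x, y in zip(b, target))
            if best is None or d < best[0]:
                best = (d, weights, quota)
    return best

# Target: one dominant voter and two equal minor voters.
dist, w, q = inverse_banzhaf([0.6, 0.2, 0.2], max_weight=3)
print(dist, w, q)
```

The search space grows as (max_weight + 1)^n games times up to n * max_weight quotas, each requiring an exponential Banzhaf evaluation, which is why the cited works invest in smarter enumeration orders and partial-order structure.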
1211.4704
1585138553
This paper is focused on privacy issues related to the prefix part of IPv6 addresses. Long-lived prefixes may introduce additional tracking opportunities for communication partners and third parties. We outline a number of prefix alteration schemes that may be deployed to maintain the unlinkability of users' activities. While none of the schemes will solve all privacy problems on the Internet on their own, we argue that the development of practical prefix alteration techniques constitutes a worthwhile avenue to pursue: They would allow Internet Service Providers to increase the attainable privacy level well above the status quo in today's IPv4 networks.
@cite_8 propose a mechanism that allows two nodes to communicate privately by switching through a list of previously negotiated IPv6 interface identifiers. The objective of their technique is to provide relationship anonymity against eavesdroppers. To this end, the nodes establish a secret key (possibly out-of-band), which is used to independently derive a shared list of randomised IP addresses in a deterministic way. During communication, both nodes iterate over their address list in a synchronised manner and assign an ephemeral IP address to their respective network interfaces. This approach resembles the hopping schemes used in many wireless networks today.
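A minimal Python sketch of such a key-derived address sequence is given below. The HMAC-SHA256 construction, the counter encoding, and the example prefix are illustrative assumptions, not the exact derivation from @cite_8; the point is only that both peers can compute the same address list independently from a shared secret.

```python
import hmac
import hashlib
import ipaddress

def derive_address_sequence(shared_key: bytes, prefix: str, count: int):
    """Derive a deterministic sequence of IPv6 addresses from a shared key.

    Both peers run this independently with the same key and prefix, so they
    agree on the address list without further communication. The HMAC output
    supplies the 64-bit interface identifier for each step.
    """
    network = ipaddress.IPv6Network(prefix)
    addresses = []
    for i in range(count):
        digest = hmac.new(shared_key, i.to_bytes(8, "big"), hashlib.sha256).digest()
        iid = int.from_bytes(digest[:8], "big")  # low 64 bits: interface identifier
        addresses.append(ipaddress.IPv6Address(int(network.network_address) | iid))
    return addresses

# Both endpoints derive the same list and step through it in sync.
seq = derive_address_sequence(b"pre-shared secret", "2001:db8::/64", 3)
```

Because the derivation is keyed, an eavesdropper without the secret cannot predict the next address, while the two endpoints stay synchronised by construction.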
{ "cite_N": [ "@cite_8" ], "mid": [ "1570703809" ], "abstract": [ "Privacy is one of the most desirable properties in modern communication systems like the Internet. There are many techniques proposed to protect message contents, but it is difficult to protect message addresses because they should be clear to message router. In this paper we propose a mechanism of one-time receiver address in IPv6 for providing unlinkability against eavesdroppers. In our system, a pair of sender and receiver independently generate an identical sequence of addresses by using a secret key exchanged in advance. The sender changes the destination address every time when it initiates a transaction, and only the corresponding receiver can follow the change of the address. We have implemented the proposed mechanism on Linux systems. The prototype system hides relation between transactions with small overhead." ] }
1211.4704
1585138553
This paper is focused on privacy issues related to the prefix part of IPv6 addresses. Long-lived prefixes may introduce additional tracking opportunities for communication partners and third parties. We outline a number of prefix alteration schemes that may be deployed to maintain the unlinkability of users' activities. While none of the schemes will solve all privacy problems on the Internet on their own, we argue that the development of practical prefix alteration techniques constitutes a worthwhile avenue to pursue: They would allow Internet Service Providers to increase the attainable privacy level well above the status quo in today's IPv4 networks.
Lindqvist and Tapio @cite_6 propose to implement a technique in the operating system of networked clients. Instead of only changing IP addresses, their solution takes into account that identifiers on other layers of the network stack can also be used to track users (e.g., MAC addresses and port numbers). The design requires a locally installed translation daemon that replaces all identifiers on all layers at once for every new outbound flow of the client, which ensures that consecutive as well as concurrent activities remain unlinkable. A prototypical implementation of the system is shown to be compatible with several well-known Linux applications. However, as it operates solely on the client side, protocol stack virtualisation cannot influence the provider-controlled address prefixes.
{ "cite_N": [ "@cite_6" ], "mid": [ "2093480120" ], "abstract": [ "Previously proposed host-based privacy protection mechanisms use pseudorandom or disposable identifiers on some or all layers of the protocol stack. These approaches either require changes to all hosts participating in the communication or do not provide privacy for the whole protocol stack or the system. Building on previous work, we propose a relatively simple approach: protocol stack virtualization. The key idea is to provide isolation for traffic sent to the network. The granularity of the isolation can be, for example, flow or process based. With process based granularity, every application uses a distinct identifier space on all layers of the protocol stack. This approach does not need any infrastructure support from the network and requires only minor changes to the single host that implements the privacy protection mechanism. To show that no changes to typical applications are required, we implemented the protocol stack virtualization as a user space daemon and tested it with various legacy applications." ] }
1211.4704
1585138553
This paper is focused on privacy issues related to the prefix part of IPv6 addresses. Long-lived prefixes may introduce additional tracking opportunities for communication partners and third parties. We outline a number of prefix alteration schemes that may be deployed to maintain the unlinkability of users' activities. While none of the schemes will solve all privacy problems on the Internet on their own, we argue that the development of practical prefix alteration techniques constitutes a worthwhile avenue to pursue: They would allow Internet Service Providers to increase the attainable privacy level well above the status quo in today's IPv4 networks.
The most relevant related work for our scenario has been published by @cite_1 . The authors observe that all currently active customers of an ISP form an anonymity group, in which individual users can hide if the addresses assigned to the customers are changed frequently. ISPs are supposed to assign two addresses to each customer: a hopping address that is replaced with a randomly assigned address for every outbound flow from the customer's network, as well as a stable address that remains constant over time to provide for reachability. The ISP runs a network address translation gateway that rewrites source and destination addresses on the fly. Although the technique is only described in the context of existing IPv4 networks, it could be adapted to IPv6 prefixes. While the authors present a single alternative, which resembles our Scheme 2, in this paper we focus on surveying and discussing various schemes that are conceivable in IPv6 networks.
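The two-address idea can be sketched as a toy gateway model. The class name, the prefix pool layout, and the flow key used below are hypothetical choices made for illustration; the cited work does not prescribe them.

```python
import random

class HoppingGateway:
    """Toy model of the ISP-side scheme: each outbound flow is mapped to a
    freshly drawn 'hopping' prefix, while one stable prefix is kept for
    inbound reachability. Names and prefix layout are illustrative only."""

    def __init__(self, stable_prefix, pool_bits, rng=None):
        self.stable_prefix = stable_prefix   # constant: keeps the customer reachable
        self.pool_bits = pool_bits           # log2 of the hopping-prefix pool size
        self.rng = rng or random.Random()
        self.flow_table = {}                 # flow tuple -> assigned hopping prefix

    def outbound_prefix(self, flow):
        # A new flow draws an independent random prefix, so two flows of the
        # same customer cannot be linked through their network-layer addresses.
        if flow not in self.flow_table:
            self.flow_table[flow] = f"2001:db8:{self.rng.getrandbits(self.pool_bits):x}::/64"
        return self.flow_table[flow]
```

The translation state lives entirely at the ISP's gateway, which is what lets all active customers share one anonymity group.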
{ "cite_N": [ "@cite_1" ], "mid": [ "2132976569" ], "abstract": [ "Today's Internet architecture makes no deliberate attempt to provide identity privacy--IP addresses are, for example, often static and the consistent use of a single IP address can leak private information to a remote party. Existing approaches for rectifying this situation and improving identity privacy fall into one of two broad classes: (1) building a privacy-enhancing overlay layer (like Tor) that can run on top of the existing Internet or (2) research into principled but often fundamentally different new architectures. We suggest a middle-ground: enlisting ISPs to assist in improving the identity privacy of users in a manner compatible with the existing Internet architecture, ISP best practices, and potential legal requirements." ] }
1211.4473
2952136083
Microgrids represent an emerging paradigm of future electric power systems that can utilize both distributed and centralized generations. Two recent trends in microgrids are the integration of local renewable energy sources (such as wind farms) and the use of co-generation (i.e., to supply both electricity and heat). However, these trends also bring unprecedented challenges to the design of intelligent control strategies for microgrids. Traditional generation scheduling paradigms rely on perfect prediction of future electricity supply and demand. They are no longer applicable to microgrids with unpredictable renewable energy supply and with co-generation (that needs to consider both electricity and heat demand). In this paper, we study online algorithms for the microgrid generation scheduling problem with intermittent renewable energy sources and co-generation, with the goal of maximizing the cost-savings with local generation. Based on the insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms, called CHASE (Competitive Heuristic Algorithm for Scheduling Energy-generation), that track the offline optimal in an online fashion. Under typical settings, we show that CHASE achieves the best competitive ratio among all deterministic online algorithms, and the ratio is no larger than a small constant 3.
For large power systems, UC involves scheduling a large number of gigantic power plants of several hundred if not thousands of megawatts, with heterogeneous operating constraints and logistics behind each action @cite_35 . The problem is very challenging to solve and has been shown to be NP-complete in general @cite_13 . (We note that @math in (3a)-(3d) is an instance of UC; that UC is NP-hard in general does not imply that the instance @math is also NP-hard.) Sophisticated approaches proposed in the literature for solving UC include mixed integer programming @cite_29 , dynamic programming @cite_31 , and stochastic programming @cite_41 . There have also been investigations of UC with high renewable energy penetration @cite_20 , based on an over-provisioning approach. After UC determines the on/off status of generators, ED computes their output levels by solving a nonlinear optimization problem using various heuristics, without altering the on/off status of generators @cite_23 . There is also recent interest in involving CHP generators in ED to satisfy both electricity and heat demand simultaneously @cite_12 . See the comprehensive surveys on UC in @cite_35 and on ED in @cite_23 .
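To make the joint UC/ED structure concrete, a brute-force reference solver for a toy single-generator instance might look as follows. The linear cost model with a per-switch startup cost is a simplified assumption of ours; real UC instances involve many generators and far richer constraints.

```python
from itertools import product

def offline_optimal(demand, grid_price, gen_cost, gen_cap, startup_cost):
    """Brute-force offline UC + ED for one local generator vs. grid purchases.

    Enumerates all 2^T on/off schedules (the UC decision), then dispatches
    greedily (the ED decision): run the generator at min(demand, capacity)
    whenever it is on and cheaper than the grid. Exponential in T, so it is
    usable only as a tiny reference, not as a practical method.
    """
    T = len(demand)
    best = float("inf")
    for schedule in product((0, 1), repeat=T):
        cost, prev_on = 0.0, 0
        for t, on in enumerate(schedule):
            cost += startup_cost * max(on - prev_on, 0)  # pay on each off->on switch
            # ED step: all-or-nothing output is optimal here because costs are linear
            local = min(demand[t], gen_cap) if (on and gen_cost < grid_price[t]) else 0.0
            cost += gen_cost * local + grid_price[t] * (demand[t] - local)
            prev_on = on
        best = min(best, cost)
    return best
```

Even in this toy form, the startup cost couples decisions across time steps, which is exactly what makes the online version of the problem non-trivial.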
{ "cite_N": [ "@cite_35", "@cite_41", "@cite_29", "@cite_23", "@cite_31", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "", "2064073681", "", "2149156031", "2102380228", "1948613015", "2099118326", "" ], "abstract": [ "", "The authors develop a model and a solution technique for the problem of generating electric power when demands are not certain. They also provide techniques for improving the current methods used in solving the traditional unit commitment problem. The solution strategy can be run in parallel due to the separable nature of the relaxation used. Numerical results indicate significant savings in the cost of operating power generating systems when the stochastic model is used instead of the deterministic model.", "", "This paper proposes a particle swarm optimization (PSO) method for solving the economic dispatch (ED) problem in power systems. Many nonlinear characteristics of the generator, such as ramp rate limits, prohibited operating zone, and nonsmooth cost functions are considered using the proposed method in practical generator operation. The feasibility of the proposed method is demonstrated for three different systems, and it is compared with the GA method in terms of the solution quality and computation efficiency. The experimental results show that the proposed PSO method was indeed capable of obtaining higher quality solutions efficiently in ED problems.", "A field-proven dynamic programming formulation of the unit commitment problem is presented. This approach features the classification of generating units into related groups so as to minimize the number of unit combinations which must be tested without precluding the optimal path. Programming techniques are described which maximize efficiency. Considerations are discussed which determine when generating units must be evaluated and when they may be ignored. 
The heuristic procedures described in this paper are concerned with supplying all apriori information to the program thereby minimizing its execution time. Results are presented from field testing on a medium size utility. Composite generating unit formulation is described for the economic allocation of constrained fuel to a group of units.", "Lagrangian relaxation (LR) and general mixed integer programming (MIP) are two main approaches for solving unit commitment (UC) problems. This paper compares the LR and the state of art general MIP method for solving UC problems based on performance analysis and numerical testing. In this paper we have rigorously proved that UC is indeed an NP complete problem, and therefore it is impossible to develop an algorithm with polynomial computation time to solve it. In comparison with the general MIP methods, the LR methodology is more scaleable and efficient to obtain near optimal schedules for large scale and hard UC problems at the cost of a small percentage of deviation from the optimal solution. In particular, solving hydro generation subproblems within the LR framework can take advantages of both LR and general MIP methods and provide a synergetic combination of both approaches.", "This paper presents a new genetic approach for solving the economic dispatch problem in large-scale power systems. A new encoding technique is developed. The chromosome contains only an encoding of the normalized system incremental cost in this encoding technique. Therefore, the total number of bits of chromosome is entirely independent of the number of units. The salient feature makes the proposed genetic approach attractive in large and complex systems which other methodologies may fail to achieve. Moreover, the approach can take network losses, ramp rate limits, and prohibited zone avoidance into account because of genetic algorithm's flexibility. 
Numerical results on an actual utility system of up to 40 units show that the proposed approach is faster and more robust than the well-known lambda-iteration method in large-scale systems.", "" ] }
1211.4473
2952136083
Microgrids represent an emerging paradigm of future electric power systems that can utilize both distributed and centralized generations. Two recent trends in microgrids are the integration of local renewable energy sources (such as wind farms) and the use of co-generation (i.e., to supply both electricity and heat). However, these trends also bring unprecedented challenges to the design of intelligent control strategies for microgrids. Traditional generation scheduling paradigms rely on perfect prediction of future electricity supply and demand. They are no longer applicable to microgrids with unpredictable renewable energy supply and with co-generation (that needs to consider both electricity and heat demand). In this paper, we study online algorithms for the microgrid generation scheduling problem with intermittent renewable energy sources and co-generation, with the goal of maximizing the cost-savings with local generation. Based on the insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms, called CHASE (Competitive Heuristic Algorithm for Scheduling Energy-generation), that track the offline optimal in an online fashion. Under typical settings, we show that CHASE achieves the best competitive ratio among all deterministic online algorithms, and the ratio is no larger than a small constant 3.
However, these studies assume that the demand and energy supply (or their distributions) over the entire time horizon are known in advance. As such, the schemes are not readily applicable to microgrid scenarios, where accurate prediction of small-scale demand and wind power generation is difficult to obtain due to limited management resources and their unpredictable nature @cite_32 .
{ "cite_N": [ "@cite_32" ], "mid": [ "2145334893" ], "abstract": [ "This paper presents a security-constrained unit commitment (SCUC) algorithm which takes into account the intermittency and volatility of wind power generation. The UC problem is solved in the master problem with the forecasted intermittent wind power generation. Next, possible scenarios are simulated for representing the wind power volatility. The initial dispatch is checked in the subproblem and generation redispatch is considered for satisfying the hourly volatility of wind power in simulated scenarios. If the redispatch fails to mitigate violations, Benders cuts are created and added to the master problem to revise the commitment solution. The iterative process between the commitment problem and the feasibility check subproblem will continue until simulated wind power scenarios can be accommodated by redispatch. Numerical simulations indicate the effectiveness of the proposed SCUC algorithm for managing the security of power system operation by taking into account the intermittency and volatility of wind power generation." ] }
1211.4473
2952136083
Microgrids represent an emerging paradigm of future electric power systems that can utilize both distributed and centralized generations. Two recent trends in microgrids are the integration of local renewable energy sources (such as wind farms) and the use of co-generation (i.e., to supply both electricity and heat). However, these trends also bring unprecedented challenges to the design of intelligent control strategies for microgrids. Traditional generation scheduling paradigms rely on perfect prediction of future electricity supply and demand. They are no longer applicable to microgrids with unpredictable renewable energy supply and with co-generation (that needs to consider both electricity and heat demand). In this paper, we study online algorithms for the microgrid generation scheduling problem with intermittent renewable energy sources and co-generation, with the goal of maximizing the cost-savings with local generation. Based on the insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms, called CHASE (Competitive Heuristic Algorithm for Scheduling Energy-generation), that track the offline optimal in an online fashion. Under typical settings, we show that CHASE achieves the best competitive ratio among all deterministic online algorithms, and the ratio is no larger than a small constant 3.
Several recent works have started to study energy generation strategies for microgrids. For example, the authors in @cite_33 develop a linear programming based cost minimization approach for UC in microgrids. @cite_24 considers fuel consumption rate minimization in microgrids and advocates building ICT infrastructure in microgrids. @cite_18 @cite_27 discuss energy scheduling problems in data centers, whose models are similar to ours. The difference between these works and ours is that they assume the demand and energy supply are given beforehand, whereas ours does not rely on input prediction.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_18", "@cite_33" ], "mid": [ "2145963793", "2107128713", "", "2121010493" ], "abstract": [ "A cost optimization scheme for a microgrid is presented. Prior to the optimization of the microgrid itself, several schemes for sharing power between two generators are compared. The minimization of fuel use in a microgrid with a variety of power sources is then discussed. The optimization of a small power system has important differences from the case of a large system and its traditional economic dispatch problem. Among the most important differences is the presence of a local heat demand which adds another dimension to the optimization problem. The microgrid considered in this paper consists of two reciprocating gas engines, a combined heat and power plant, a photovoltaic array and a wind generator. The optimization is aimed at reducing the fuel consumption rate of the system while constraining it to fulfil the local energy demand (both electrical and thermal) and provide a certain minimum reserve power. A penalty is applied for any heat produced in excess of demand. The solution of the optimization problem strongly supports the idea of having a communication infrastructure operating between the power sources.", "Recently, the demand for data center computing has surged, increasing the total energy footprint of data centers worldwide. Data centers typically comprise three subsystems: IT equipment provides services to customers; power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes heat generated by these subsystems. This work presents a novel approach to model the energy flows in a data center and optimize its operation. Traditionally, supply-side constraints such as energy or cooling availability were treated independently from IT workload management. 
This work reduces electricity cost and environmental impact using a holistic approach that integrates renewable supply, dynamic pricing, and cooling supply including chiller and outside air cooling, with IT workload planning to improve the overall sustainability of data center operations. Specifically, we first predict renewable energy as well as IT demand. Then we use these predictions to generate an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce both the recurring power costs and the use of non-renewable energy by as much as 60 compared to existing techniques, while still meeting the Service Level Agreements.", "", "This article develops a linear programming cost minimisation model for the high level system design and corresponding unit commitment of generators and storage within a microgrid; a set of energy resources working co-operatively to create a cost effective, reliable and environmentally friendly energy provision system. Previous work in this area is used as a basis for formulation of a new approach to this problem, with particular emphasis on why a microgrid is different to centralised generation or other grid-connected decentralised energy resources. Specifically, the model explicitly defines the amount of time that the microgrid would be expected to operate autonomously, and restricts flow of heat between microgrid participants to defined cases. The model developed is applied to a set of United Kingdom commercial load profiles, under best current estimates of energy prices and technology capital costs, to determine investment attractiveness of the microgrid. Sensitivity analysis of results to variations in energy prices is performed. 
The results broadly indicate that a microgrid can offer an economic proposition, although it is necessarily slightly more expensive than regular grid-connected decentralised generation. The analysis results have raised important questions regarding a fair method for settlement between microgrid participants, and game theory has been identified as a suitable tool to analyse aspects of this situation." ] }
1211.4473
2952136083
Microgrids represent an emerging paradigm of future electric power systems that can utilize both distributed and centralized generations. Two recent trends in microgrids are the integration of local renewable energy sources (such as wind farms) and the use of co-generation (i.e., to supply both electricity and heat). However, these trends also bring unprecedented challenges to the design of intelligent control strategies for microgrids. Traditional generation scheduling paradigms rely on perfect prediction of future electricity supply and demand. They are no longer applicable to microgrids with unpredictable renewable energy supply and with co-generation (that needs to consider both electricity and heat demand). In this paper, we study online algorithms for the microgrid generation scheduling problem with intermittent renewable energy sources and co-generation, with the goal of maximizing the cost-savings with local generation. Based on the insights from the structure of the offline optimal solution, we propose a class of competitive online algorithms, called CHASE (Competitive Heuristic Algorithm for Scheduling Energy-generation), that track the offline optimal in an online fashion. Under typical settings, we show that CHASE achieves the best competitive ratio among all deterministic online algorithms, and the ratio is no larger than a small constant 3.
Online optimization and algorithm design is an established approach to optimizing the performance of various computer systems with minimal knowledge of the inputs @cite_34 @cite_25 . Recently, it has found new applications in data centers @cite_39 @cite_22 @cite_1 @cite_7 @cite_11 @cite_5 . To the best of our knowledge, our work is the first to study competitive online algorithms for energy generation in microgrids with intermittent energy sources and co-generation. The authors in @cite_26 apply the online convex optimization framework @cite_17 to design ED algorithms for microgrids. The authors in @cite_0 adopt the Lyapunov optimization framework @cite_25 to design electricity scheduling for microgrids, with consideration of energy storage. However, neither of these works considers the startup cost of local generation. In contrast, our work jointly considers UC and ED in microgrids with co-generation. Furthermore, these works adopt different frameworks and provide online algorithms with different types of performance guarantees.
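The flavor of such competitive guarantees can be illustrated with the classic ski-rental problem, where the purchase price plays a role analogous to a generator's startup cost. The sketch below is the textbook break-even rule, used here only as an analogy, not as the CHASE algorithm itself.

```python
def ski_rental_online(days, buy_cost):
    """Break-even rule: rent at cost 1 per day until the rental spend reaches
    the purchase price, then buy. The online player never pays more than 2x
    the offline optimum, the same style of constant competitive ratio that
    break-even rules give for startup-cost problems."""
    cost = 0
    for day in range(1, days + 1):
        if day <= buy_cost:
            cost += 1             # keep renting below the break-even point
        else:
            cost += buy_cost      # commit once renting would overtake buying
            break
    return cost

def ski_rental_offline(days, buy_cost):
    # A clairvoyant player either rents every day or buys on day one.
    return min(days, buy_cost)
```

The same intuition, tracking the offline optimal until a break-even point is reached, underlies online algorithms that must decide when to pay a one-time switching cost.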
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_1", "@cite_17", "@cite_39", "@cite_0", "@cite_5", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "2022815703", "1502628442", "2120881346", "2005318779", "", "", "2950710752", "", "1552828154", "", "" ], "abstract": [ "Growing environmental awareness and new government directives have set the stage for an increase in the fraction of energy supplied using renewable resources. The fast variation in renewable power, coupled with uncertainty in availability, emphasizes the need for algorithms for intelligent online generation scheduling. These algorithms should allow us to compensate for the renewable resource when it is not available and should also account for physical generator constraints. We apply and extend recent work in the field of online optimization to the scheduling of generators in smart (micro) grids and derive bounds on the performance of asymptotically good algorithms in terms of the generator parameters. We also design online algorithms that intelligently leverage available information about the future, such as predictions of wind intensity, and show that they can be used to guarantee near optimal performance under mild assumptions. This allows us to quantify the benefits of resources spent on prediction technologies and different generation sources in the smart grid. Finally, we empirically show how both classes of online algorithms, (with or without the predictions of future availability) significantly outperform certain ‘natural’ algorithms.", "Energy costs are becoming the fastest-growing element in datacenter operation costs. One basic approach to reduce these costs is to exploit the spatiotemporal variation in electricity prices by moving computation to datacenters in which energy is available at a cheaper price. 
However, injudicious job migration between datacenters might increase the overall operation cost due to the bandwidth costs of transferring application state and data over the wide-area network. To address this challenge, we propose novel online algorithms for migrating batch jobs between datacenters, which handle the fundamental tradeoff between energy and bandwidth costs. A distinctive feature of our algorithms is that they consider not only the current availability and cost of (possibly multiple) energy sources, but also the future variability and uncertainty thereof. Using the framework of competitive-analysis, we establish worst-case performance bounds for our basic online algorithm. We then propose a practical, easy-to-implement version of the basic algorithm, and evaluate it through simulations on real electricity pricing and job workload data. The simulation results indicate that our algorithm outperforms plausible greedy algorithms that ignore future outcomes. Notably, the actual performance of our approach is significantly better than the theoretical guarantees, within 6 of the optimal offline solution.", "Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of low load. This paper investigates how much can be saved by dynamically \"right-sizing\" the data center by turning off servers during such periods and how to achieve that saving via an online algorithm. We propose a very general model and prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new \"lazy\" online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data-center workloads and show that significant cost savings are possible. 
Additionally, we contrast this new algorithm with the more traditional approach of receding horizon control.", "Since the electricity bill of a data center constitutes a significant portion of its overall operational costs, reducing this has become important. We investigate cost reduction opportunities that arise by the use of uninterrupted power supply (UPS) units as energy storage devices. This represents a deviation from the usual use of these devices as mere transitional fail-over mechanisms between utility and captive sources such as diesel generators. We consider the problem of opportunistically using these devices to reduce the time average electric utility bill in a data center. Using the technique of Lyapunov optimization, we develop an online control algorithm that can optimally exploit these devices to minimize the time average cost. This algorithm operates without any knowledge of the statistics of the workload or electricity cost processes, making it attractive in the presence of workload and pricing uncertainties. An interesting feature of our algorithm is that its deviation from optimality reduces as the storage capacity is increased. Our work opens up a new area in data center power management.", "", "", "Microgrid (MG) is a promising component for future smart grid (SG) deployment. The balance of supply and demand of electric energy is one of the most important requirements of MG management. In this paper, we present a novel framework for smart energy management based on the concept of quality-of-service in electricity (QoSE). Specifically, the resident electricity demand is classified into basic usage and quality usage. The basic usage is always guaranteed by the MG, while the quality usage is controlled based on the MG state. 
The microgrid control center (MGCC) aims to minimize the MG operation cost and maintain the outage probability of quality usage, i.e., QoSE, below a target value, by scheduling electricity among renewable energy resources, energy storage systems, and macrogrid. The problem is formulated as a constrained stochastic programming problem. The Lyapunov optimization technique is then applied to derive an adaptive electricity scheduling algorithm by introducing the QoSE virtual queues and energy storage virtual queues. The proposed algorithm is an online algorithm since it does not require any statistics and future knowledge of the electricity supply, demand and price processes. We derive several \"hard\" performance bounds for the proposed algorithm, and evaluate its performance with trace-driven simulations. The simulation results demonstrate the efficacy of the proposed electricity scheduling algorithm.", "", "Preface 1. Introduction to competitive analysis: the list accessing problem 2. Introduction to randomized algorithms: the list accessing problem 3. Paging: deterministic algorithms 4. Paging: randomized algorithms 5. Alternative models for paging: beyond pure competitive analysis 6. Game theoretic foundations 7. Request - answer games 8. Competitive analysis and zero-sum games 9. Metrical task systems 10. The k-server problem 11. Randomized k-server algorithms 12. Load-balancing 13. Call admission and circuit-routing 14. Search, trading and portfolio selection 15. Competitive analysis and decision making under uncertainty Appendices Bibliography Index.", "", "" ] }
1211.5084
2394626294
In this paper, we present algorithms and data structures for the top-k nearest neighbor searching where the input points are exact and the query point is uncertain under the L1 distance metric in the plane. The uncertain query point is represented by a discrete probability density function, and the goal is to return the top-k expected nearest neighbors, which have the smallest expected distances to the query point. Given a set of n exact points in the plane, we build an O(n log n log log n)-size data structure in O(n log n log log n) time, such that for any uncertain query point with m possible locations and any integer k with 1 ≤ k ≤ n, the top-k expected nearest neighbors can be found in O(m log m + (k+m) log^2 n) time. Even for the special case where k = 1, our result is better than the previously best method (in PODS 2012), which requires O(n log^2 n) preprocessing time, O(n log^2 n) space, and O(m^2 log^3 n) query time. In addition, for the one-dimensional version of this problem, our approach can build an O(n)-size data structure in O(n log n) time that can support O(min{mk, m log m + k + log n}) time queries, and the query time can be reduced to O(k+m+log n) time if the locations of Q are given sorted. In fact, the problem is equivalent to the aggregate or group nearest neighbor searching with the weighted SUM as the aggregate distance function operator.
In the formulation of (PNN), one considers the probability of each input point being the nearest neighbor of the query point. The main drawback of PNN is that it is computationally expensive: the probability of each input point being the nearest neighbor depends not only on the query point but also on all the other input points. The formulation has been widely studied @cite_21 @cite_8 @cite_1 @cite_9 @cite_27 @cite_10 @cite_26 @cite_14 . All of these methods were R-tree based heuristics and did not provide any guarantee on the query time in the worst case. For instance, Cheng et al. @cite_8 studied the PNN query that returns those uncertain points whose probabilities of being the nearest neighbor are higher than some threshold, allowing some given errors in the answers. Recently, Agarwal @cite_15 presented non-trivial results on nearest neighbor searching in a probabilistic framework.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_27", "@cite_15", "@cite_10" ], "mid": [ "19971480", "2103895255", "2013333366", "2034518740", "2133246278", "2022501110", "1491547607", "2167140130", "2099700811" ], "abstract": [ "Data uncertainty is inherent in many applications, including sensor networks, scientific data management, data integration, location-based applications, etc. One of the common queries for uncertain data is the probabilistic nearest neighbor (PNN) query that returns all uncertain objects with non-zero probabilities to be NN. In this paper we study the PNN query with a probability threshold (PNNT), which returns all objects with the NN probability greater than the threshold. Our PNNT query removes the assumption in all previous papers that the probability of an uncertain object always adds up to 1, i.e., we consider missing probabilities. We propose an augmented R-tree index with additional probabilistic information to facilitate pruning as well as global data structures for maintaining the current pruning status. We present our algorithm for efficiently answering PNNT queries and perform experiments to show that our algorithm significantly reduces the number of objects that need to be further evaluated as NN candidates.", "This paper proposes a new problem, called superseding nearest neighbor search, on uncertain spatial databases, where each object is described by a multidimensional probability density function. Given a query point q, an object is a nearest neighbor (NN) candidate if it has a nonzero probability to be the NN of q. Given two NN-candidates o1 and o2, o1 supersedes o2 if o1 is more likely to be closer to q. An object is a superseding nearest neighbor (SNN) of q, if it supersedes all the other NN-candidates. Sometimes no object is able to supersede every other NN-candidate.
In this case, we return the SNN-core-the minimum set of NN-candidates each of which supersedes all the NN-candidates outside the SNN-core. Intuitively, the SNN-core contains the best objects, because any object outside the SNN-core is worse than all the objects in the SNN-core. We show that the SNN-core can be efficiently computed by utilizing a conventional multidimensional index, as confirmed by extensive experiments.", "In applications like location-based services, sensor monitoring and biological databases, the values of the database items are inherently uncertain in nature. An important query for uncertain objects is the probabilistic nearest-neighbor query (PNN), which computes the probability of each object for being the nearest neighbor of a query point. Evaluating this query is computationally expensive, since it needs to consider the relationship among uncertain objects, and requires the use of numerical integration or Monte-Carlo methods. Sometimes, a query user may not be concerned about the exact probability values. For example, he may only need answers that have sufficiently high confidence. We thus propose the constrained nearest-neighbor query (C-PNN), which returns the IDs of objects whose probabilities are higher than some threshold, with a given error bound in the answers. The C-PNN can be answered efficiently with probabilistic verifiers. These are methods that derive the lower and upper bounds of answer probabilities, so that an object can be quickly decided on whether it should be included in the answer. We have developed three probabilistic verifiers, which can be used on uncertain data with arbitrary probability density functions. Extensive experiments were performed to examine the effectiveness of these approaches.", "The Voronoi diagram is an important technique for answering nearest-neighbor queries for spatial databases. 
In this paper, we study how the Voronoi diagram can be used on uncertain data, which are inherent in scientific and business applications. In particular, we propose the Uncertain-Voronoi Diagram (or UV-diagram in short). Conceptually, the data space is divided into distinct “UV-partitions”, where each UV-partition P is associated with a set S of objects; any point q located in P has the set S as its nearest neighbor with non-zero probabilities. The UV-diagram facilitates queries that inquire objects for having non-zero chances of being the nearest neighbor of a given query point. It also allows analysis of nearest neighbor information, e.g., finding out how many objects are the nearest neighbors in a given area. However, a UV-diagram requires exponential construction and storage costs. To tackle these problems, we devise an alternative representation for UV-partitions, and develop an adaptive index for the UV-diagram. This index can be constructed in polynomial time. We examine how it can be extended to support other related queries. We also perform extensive experiments to validate the effectiveness of our approach.", "Uncertainty pervades many domains in our lives. Current real-life applications, e.g., location tracking using GPS devices or cell phones, multimedia feature extraction, and sensor data management, deal with different kinds of uncertainty. Finding the nearest neighbor objects to a given query point is an important query type in these applications. In this paper, we study the problem of finding objects with the highest marginal probability of being the nearest neighbors to a query object. We adopt a general uncertainty model allowing for data and query uncertainty. Under this model, we define new query semantics, and provide several efficient evaluation algorithms. We analyze the cost factors involved in query evaluation, and present novel techniques to address the trade-offs among these factors. 
We give multiple extensions to our techniques including handling dependencies among data objects, and answering threshold queries. We conduct an extensive experimental study to evaluate our techniques on both real and synthetic data.", "In emerging applications such as location-based services, sensor monitoring and biological management systems, the values of the database items are naturally imprecise. For these uncertain databases, an important query is the Probabilistic k-Nearest-Neighbor Query (k-PNN), which computes the probabilities of sets of k objects for being the closest to a given query point. The evaluation of this query can be both computationally- and I/O-expensive, since there is an exponentially large number of k object-sets, and numerical integration is required. Often a user may not be concerned about the exact probability values. For example, he may only need answers that have sufficiently high confidence. We thus propose the Probabilistic Threshold k-Nearest-Neighbor Query (T-k-PNN), which returns sets of k objects that satisfy the query with probabilities higher than some threshold T. Three steps are proposed to handle this query efficiently. In the first stage, objects that cannot constitute an answer are filtered with the aid of a spatial index. The second step, called probabilistic candidate selection, significantly prunes a number of candidate sets to be examined. The remaining sets are sent for verification, which derives the lower and upper bounds of answer probabilities, so that a candidate set can be quickly decided on whether it should be included in the answer. We also examine spatially-efficient data structures that support these methods. Our solution can be applied to uncertain data with arbitrary probability density functions. We have also performed extensive experiments to examine the effectiveness of our methods.", "Nearest-neighbor queries are an important query type for commonly used feature databases.
In many different application areas, e.g. sensor databases, location-based services or face recognition systems, distances between objects have to be computed based on vague and uncertain data. A successful approach is to express the distance between two uncertain objects by probability density functions which assign a probability value to each possible distance value. By integrating the complete probabilistic distance function as a whole directly into the query algorithm, the full information provided by these functions is exploited. The result of such a probabilistic query algorithm consists of tuples containing the result object and a probability value indicating the likelihood that the object satisfies the query predicate. In this paper we introduce an efficient strategy for processing probabilistic nearest-neighbor queries, as the computation of these probability values is very expensive. In a detailed experimental evaluation, we demonstrate the benefits of our probabilistic query approach. The experiments show that we can achieve high quality query results with rather low computational cost.", "Nearest-neighbor (NN) search, which returns the nearest neighbor of a query point in a set of points, is an important and widely studied problem in many fields, and it has a wide range of applications. In many of them, such as sensor databases, location-based services, face recognition, and mobile data, the location of data is imprecise. We therefore study nearest neighbor queries in a probabilistic framework in which the location of each input point is specified as a probability distribution function.
We present efficient algorithms for (i) computing all points that are nearest neighbors of a query point with nonzero probability; (ii) estimating, within a specified additive error, the probability of a point being the nearest neighbor of a query point; (iii) using it to return the point that maximizes the probability being the nearest neighbor, or all the points with probabilities greater than some threshold to be the NN. We also present some experimental results to demonstrate the effectiveness of our approach.", "The ability to store and query uncertain information is of great benefit to databases that infer values from a set of observations, including databases of moving objects, sensor readings, historical business transactions, and biomedical images. These observations are often inexact to begin with, and even if they are exact, a set of observations of an attribute of an object is better represented by a probability distribution than by a single number, such as a mean. In this paper, we present adaptive, piecewise-linear approximations (APLAs), which represent arbitrary probability distributions compactly with guaranteed quality. We also present the APLA-tree, an index structure for APLAs. Because APLA is more precise than existing approximation techniques, the APLA-tree can answer probabilistic range queries twice as fast. APLA generalizes to multiple dimensions, and the APLA-tree can index multivariate distributions using either one-dimensional or multidimensional APLAs. Finally, we propose a new definition of k-NN queries on uncertain data. The new definition allows APLA and the APLA-tree to answer k-NN queries quickly, even on arbitrary probability distributions. No efficient k-NN search was previously possible on such distributions." ] }
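As the PNN discussion above notes, the probability that a given point is the nearest neighbor depends on every other input point, which is what makes the query expensive. For uncertain objects given as discrete pdfs and an exact query point, that probability can be computed exactly by brute force. This is only an illustrative sketch (ties in distance are ignored, and all names are made up), not the indexed heuristics of the cited papers:

```python
import math

def pnn_probabilities(objects, q):
    """Probability that each uncertain object is the nearest neighbor of
    the exact query point q. Each object is a discrete pdf given as a
    list of ((x, y), probability) pairs; objects are independent."""
    def dist(p):  # Euclidean distance from a location to q
        return math.hypot(p[0] - q[0], p[1] - q[1])

    probs = []
    for i, obj in enumerate(objects):
        total = 0.0
        for loc, w in obj:
            d = dist(loc)
            # probability that every OTHER object lies strictly farther than d
            farther = 1.0
            for j, other in enumerate(objects):
                if j == i:
                    continue
                farther *= sum(w2 for loc2, w2 in other if dist(loc2) > d)
            total += w * farther
        probs.append(total)
    return probs
```

The nested loop over all other objects is exactly the dependence that the related-work paragraph points out: evaluating one object's probability touches every other object's distribution.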
1211.5084
2394626294
In this paper, we present algorithms and data structures for the top-k nearest neighbor searching where the input points are exact and the query point is uncertain under the L1 distance metric in the plane. The uncertain query point is represented by a discrete probability density function, and the goal is to return the top-k expected nearest neighbors, which have the smallest expected distances to the query point. Given a set of n exact points in the plane, we build an O(n log n log log n)-size data structure in O(n log n log log n) time, such that for any uncertain query point with m possible locations and any integer k with 1 k n, the top-k expected nearest neighbors can be found in O(mlogm + (k+m)log^2 n) time. Even for the special case where k = 1, our result is better than the previously best method (in PODS 2012), which requires O(n log^2 n) preprocessing time, O(n log^2 n) space, and O(m^2 log^3 n) query time. In addition, for the one-dimensional version of this problem, our approach can build an O(n)-size data structure in O(n log n) time that can support O(min mk,mlog m + k + log n) time queries and the query time can be reduced to O(k+m+log n) time if the locations of Q are given sorted. In fact, the problem is equivalent to the aggregate or group nearest neighbor searching with the weighted SUM as the aggregate distance function operator.
In the formulation of (SNN) @cite_14 , one considers the superseding relationship of each pair of input points: one point supersedes the other if and only if it has probability more than 0.5 of being the nearest neighbor of the query point, where the probability computation is restricted to this pair of points. One returns the point that supersedes all the others, if such a point exists. Otherwise, one returns the minimal set @math of data points such that any data point in @math supersedes any data point not in @math .
{ "cite_N": [ "@cite_14" ], "mid": [ "2103895255" ], "abstract": [ "This paper proposes a new problem, called superseding nearest neighbor search, on uncertain spatial databases, where each object is described by a multidimensional probability density function. Given a query point q, an object is a nearest neighbor (NN) candidate if it has a nonzero probability to be the NN of q. Given two NN-candidates o1 and o2, o1 supersedes o2 if o1 is more likely to be closer to q. An object is a superseding nearest neighbor (SNN) of q, if it supersedes all the other NN-candidates. Sometimes no object is able to supersede every other NN-candidate. In this case, we return the SNN-core-the minimum set of NN-candidates each of which supersedes all the NN-candidates outside the SNN-core. Intuitively, the SNN-core contains the best objects, because any object outside the SNN-core is worse than all the objects in the SNN-core. We show that the SNN-core can be efficiently computed by utilizing a conventional multidimensional index, as confirmed by extensive experiments." ] }
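The pairwise superseding test of the SNN formulation can be sketched for discrete pdfs as follows. Helper names are hypothetical; for brevity the minimal SNN-core computation is omitted and only the single globally superseding point (if any) is returned:

```python
import math
from itertools import product

def supersedes(o1, o2, q):
    """o1 supersedes o2 if, restricting attention to this pair, o1 is
    closer to the exact query q with probability > 0.5. Objects are
    discrete pdfs: lists of ((x, y), probability) pairs."""
    def dist(p):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    win = sum(w1 * w2
              for (l1, w1), (l2, w2) in product(o1, o2)
              if dist(l1) < dist(l2))
    return win > 0.5

def snn(objects, q):
    """Index of the object superseding all others, or None when no single
    object does (the cited paper then reports the SNN-core instead)."""
    for i, oi in enumerate(objects):
        if all(supersedes(oi, oj, q) for j, oj in enumerate(objects) if j != i):
            return i
    return None
```

Because superseding is only a pairwise majority relation, it need not be transitive; that is why no single winner may exist and the SNN-core becomes necessary.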
1211.5084
2394626294
In this paper, we present algorithms and data structures for the top-k nearest neighbor searching where the input points are exact and the query point is uncertain under the L1 distance metric in the plane. The uncertain query point is represented by a discrete probability density function, and the goal is to return the top-k expected nearest neighbors, which have the smallest expected distances to the query point. Given a set of n exact points in the plane, we build an O(n log n log log n)-size data structure in O(n log n log log n) time, such that for any uncertain query point with m possible locations and any integer k with 1 ≤ k ≤ n, the top-k expected nearest neighbors can be found in O(m log m + (k+m) log^2 n) time. Even for the special case where k = 1, our result is better than the previously best method (in PODS 2012), which requires O(n log^2 n) preprocessing time, O(n log^2 n) space, and O(m^2 log^3 n) query time. In addition, for the one-dimensional version of this problem, our approach can build an O(n)-size data structure in O(n log n) time that can support O(min{mk, m log m + k + log n}) time queries, and the query time can be reduced to O(k+m+log n) time if the locations of Q are given sorted. In fact, the problem is equivalent to the aggregate or group nearest neighbor searching with the weighted SUM as the aggregate distance function operator.
In the formulation of expected nearest neighbor (ENN), one considers the expected distance from each data point to the query point. Since the expected distance of any input point depends only on the query point, efficient data structures are available. Recently, Agarwal @cite_17 gave the first nontrivial methods for answering exact or approximate expected nearest neighbor queries under @math , @math , and the squared Euclidean distance, with provable performance guarantees. Efficient data structures are also provided in @cite_17 for the case where the input data is uncertain and the query point is exact.
{ "cite_N": [ "@cite_17" ], "mid": [ "1782245179" ], "abstract": [ "We study the aggregate group nearest neighbor searching for the MAX operator in the plane. For a set @math of @math points and a query set @math of @math points, the query asks for a point of @math whose maximum distance to the points in @math is minimized. We present data structures for answering such queries for both @math and @math distance measures. Previously, only heuristic and approximation algorithms were given for both versions. For the @math version, we build a data structure of O(n) size in @math time, such that each query can be answered in @math time. For the @math version, we build a data structure in @math time and @math space, such that each query can be answered in @math time, and alternatively, we build a data structure in @math time and space for any @math , such that each query can be answered in @math time. Further, we extend our result for the @math version to the top- @math queries where each query asks for the @math points of @math whose maximum distances to @math are the smallest for any @math with @math : We build a data structure of O(n) size in @math time, such that each top- @math query can be answered in @math time." ] }
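For a discrete uncertain query point, the expected L1 distance to an exact input point is just a probability-weighted sum over the query's possible locations, which is exactly the weighted SUM aggregate distance the abstract mentions. A minimal sketch (names are illustrative):

```python
def expected_l1_distance(p, query_pdf):
    """Expected L1 distance from an exact point p to an uncertain query
    point given as a discrete pdf [((x, y), probability), ...]. This
    equals the weighted SUM aggregate distance to the query locations."""
    return sum(w * (abs(p[0] - x) + abs(p[1] - y))
               for (x, y), w in query_pdf)
```

This one-line identity is why the ENN problem with an uncertain query reduces to weighted aggregate nearest neighbor searching: the expectation over the query pdf and the weighted SUM over query locations are the same quantity.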
1211.5084
2394626294
In this paper, we present algorithms and data structures for the top-k nearest neighbor searching where the input points are exact and the query point is uncertain under the L1 distance metric in the plane. The uncertain query point is represented by a discrete probability density function, and the goal is to return the top-k expected nearest neighbors, which have the smallest expected distances to the query point. Given a set of n exact points in the plane, we build an O(n log n log log n)-size data structure in O(n log n log log n) time, such that for any uncertain query point with m possible locations and any integer k with 1 ≤ k ≤ n, the top-k expected nearest neighbors can be found in O(m log m + (k+m) log^2 n) time. Even for the special case where k = 1, our result is better than the previously best method (in PODS 2012), which requires O(n log^2 n) preprocessing time, O(n log^2 n) space, and O(m^2 log^3 n) query time. In addition, for the one-dimensional version of this problem, our approach can build an O(n)-size data structure in O(n log n) time that can support O(min{mk, m log m + k + log n}) time queries, and the query time can be reduced to O(k+m+log n) time if the locations of Q are given sorted. In fact, the problem is equivalent to the aggregate or group nearest neighbor searching with the weighted SUM as the aggregate distance function operator.
When the input points are exact and the query point is uncertain, the ENN is the same as the weighted version of the (ANN), which is a generalization of the ANN. Only heuristics are known for answering ANN queries @cite_22 @cite_28 @cite_2 @cite_11 @cite_6 @cite_23 @cite_7 . The best known heuristic method for exact (weighted) ANN queries is based on R-trees @cite_6 , and Li et al. @cite_28 gave a data structure with 3-approximation query performance for the ANN. Agarwal @cite_17 gave a data structure with a polynomial-time approximation scheme for the ENN queries under the Euclidean distance metric, which also works for the ANN queries.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_28", "@cite_17", "@cite_6", "@cite_23", "@cite_2", "@cite_11" ], "mid": [ "2145814045", "2143709779", "1981634604", "1782245179", "2136046112", "1989519960", "", "2175280589" ], "abstract": [ "Group nearest neighbor (GNN) queries are a relatively new type of operations in spatial database applications. Different from a traditional kNN query which specifies a single query point only, a GNN query has multiple query points. Because of the number of query points and their arbitrary distribution in the data space, a GNN query is much more complex than a kNN query. In this paper, we propose two pruning strategies for GNN queries which take into account the distribution of query points. Our methods employ an ellipse to approximate the extent of multiple query points, and then derive a distance or minimum bounding rectangle (MBR) using that ellipse to prune intermediate nodes in a depth-first search via an R @math -tree. These methods are also applicable to the best-first traversal paradigm. We conduct extensive performance studies. The results show that the proposed pruning strategies are more efficient than the existing methods.", "Aggregate nearest neighbor queries return the object that minimizes an aggregate distance function with respect to a set of query points. Consider, for example, several users at specific locations (query points) that want to find the restaurant (data point), which leads to the minimum sum of distances that they have to travel in order to meet. We study the processing of such queries for the case where the position and accessibility of spatial objects are constrained by spatial (e.g., road) networks. We consider alternative aggregate functions and techniques that utilize Euclidean distance bounds, spatial access methods, and or network distance materialization structures. Our algorithms are experimentally evaluated with synthetic and real data. 
The results show that their relative performance depends on the problem characteristics.", "Aggregate similarity search, a.k.a. aggregate nearest neighbor (Ann) query, finds many useful applications in spatial and multimedia databases. Given a group Q of M query objects, it retrieves the most (or top-k) similar object to Q from a database P, where the similarity is an aggregation (e.g., sum, max) of the distances between the retrieved object p and all the objects in Q. In this paper, we propose an added flexibility to the query definition, where the similarity is an aggregation over the distances between p and any subset of AEM objects in Q for some support 0", "We study the aggregate group nearest neighbor searching for the MAX operator in the plane. For a set @math of @math points and a query set @math of @math points, the query asks for a point of @math whose maximum distance to the points in @math is minimized. We present data structures for answering such queries for both @math and @math distance measures. Previously, only heuristic and approximation algorithms were given for both versions. For the @math version, we build a data structure of O(n) size in @math time, such that each query can be answered in @math time. For the @math version, we build a data structure in @math time and @math space, such that each query can be answered in @math time, and alternatively, we build a data structure in @math time and space for any @math , such that each query can be answered in @math time. 
Further, we extend our result for the @math version to the top- @math queries where each query asks for the @math points of @math whose maximum distances to @math are the smallest for any @math with @math : We build a data structure of O(n) size in @math time, such that each top- @math query can be answered in @math time.", "Given two spatial datasets P (e.g., facilities) and Q (queries), an aggregate nearest neighbor (ANN) query retrieves the point(s) of P with the smallest aggregate distance(s) to points in Q. Assuming, for example, n users at locations q 1 ,…q n , an ANN query outputs the facility p ∈ P that minimizes the sum of distances vpq i v for 1 ≤ i ≤ n that the users have to travel in order to meet there. Similarly, another ANN query may report the point p ∈ P that minimizes the maximum distance that any user has to travel, or the minimum distance from some user to his her closest facility. If Q fits in memory and P is indexed by an R-tree, we develop algorithms for aggregate nearest neighbors that capture several versions of the problem, including weighted queries and incremental reporting of results. Then, we analyze their performance and propose cost models for query optimization. Finally, we extend our techniques for disk-resident queries and approximate ANN retrieval. The efficiency of the algorithms and the accuracy of the cost models are evaluated through extensive experiments with real and synthetic datasets.", "A very important class of spatial queries consists of nearest-neighbor (NN) query and its variations. Many studies in the past decade utilize R-trees as their underlying index structures to address NN queries efficiently. The general approach is to use R-tree in two phases. First, R-tree's hierarchical structure is used to quickly arrive to the neighborhood of the result set. Second, the R-tree nodes intersecting with the local neighborhood (Search Region) of an initial answer are investigated to find all the members of the result set. 
While R-trees are very efficient for the first phase, they usually result in the unnecessary investigation of many nodes that none or only a small subset of their including points belongs to the actual result set. On the other hand, several recent studies showed that the Voronoi diagrams are extremely efficient in exploring an NN search region, while due to lack of an efficient access method, their arrival to this region is slow. In this paper, we propose a new index structure, termed VoR-Tree that incorporates Voronoi diagrams into R-tree, benefiting from the best of both worlds. The coarse granule rectangle nodes of R-tree enable us to get to the search region in logarithmic time while the fine granule polygons of Voronoi diagram allow us to efficiently tile or cover the region and find the result. Utilizing VoR-Tree, we propose efficient algorithms for various Nearest Neighbor queries, and show that our algorithms have better I/O complexity than their best competitors.", "", "Let P be a set of n points in the plane. The k-nearest-neighbor (abbreviated as k-NN) query problem is to preprocess P into a data structure that quickly reports k closest points in P for a query point q. This paper addresses a generalization of the k-NN query problem to a query set Q of points, namely, the group k-nearest-neighbor query problem, in the L1 plane. More precisely, a query is assigned with a set Q of at most m points and a positive integer k with k ≤ n, and the distance between a point p of P and a query set Q is defined as the sum of L1 distances from p to all q ∈ Q. The maximum number m of query points Q is assumed to be known in advance and to be at most n. In this paper, we propose two algorithms, one based on the range tree and the other based on a data structure for segment dragging queries, and obtain the following complexity bounds: (1) a group k-NN query can be handled in O(T_min log n + (k + m^2)(log log n + log m)) time after preprocessing P using O(m^2 n log^2 n) space, where T_min = min{k + m, m^2}; or (2) a group k-NN query can be handled in O((k + m) log^2 n + m^2 (log^ε n + log m)) time after preprocessing P using O(m^2 n) space, where ε > 0 is an arbitrarily small constant." ] }
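A brute-force baseline for the top-k ENN query (equivalently, the weighted-SUM aggregate query) takes O(mn) distance evaluations per query; the data structures discussed above exist precisely to beat this. A self-contained sketch, assuming points and query locations are plain coordinate pairs:

```python
import heapq

def topk_enn(points, query_pdf, k):
    """Brute-force top-k expected nearest neighbors under L1.
    points: list of exact (x, y) pairs; query_pdf: discrete pdf of the
    uncertain query, [((x, y), probability), ...]."""
    def exp_dist(p):
        # expected L1 distance = weighted SUM aggregate distance
        return sum(w * (abs(p[0] - x) + abs(p[1] - y))
                   for (x, y), w in query_pdf)
    return heapq.nsmallest(k, points, key=exp_dist)
```

`heapq.nsmallest` keeps the selection at O(n log k) beyond the distance evaluations, but the m-fold cost per point remains; compare this with the O(m log m + (k+m) log^2 n) query bound quoted in the abstract.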
1211.4552
1516242919
This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing most of the games' state (not only players' orders). We explain one of the possible usages of this dataset by clustering armies on their compositions. This reduction of army compositions to mixtures of Gaussians allows for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of the armies' compositions.
Case-based reasoning (CBR) approaches often use extensions of build trees as state lattices (and sets of tactics for each state), as in @cite_1 @cite_4 in Wargus. OntanonCBR ( OntanonCBR ) base their real-time case-based planning (CBP) system on a plan dependency graph which is learned from human demonstration in Wargus. In @cite_7 , they use situation assessment for "plan retrieval" from annotated replays, which recognizes distance to behaviors (a goal and a plan), and select only the low-level features with the highest information gain. HsiehS08 ( HsiehS08 ) based their work on @cite_1 and used StarCraft replays to construct states and building sequences. Strategies are choices of building construction order in their model.
{ "cite_N": [ "@cite_1", "@cite_7", "@cite_4" ], "mid": [ "1565097357", "1553692989", "2167086559" ], "abstract": [ "While several researchers have applied case-based reasoning techniques to games, only Ponsen and Spronck (2004) have addressed the challenging problem of learning to win real-time games. Focusing on Wargus, they report good results for a genetic algorithm that searches in plan space, and for a weighting algorithm (dynamic scripting) that biases subplan retrieval. However, both approaches assume a static opponent, and were not designed to transfer their learned knowledge to opponents with substantially different strategies. We introduce a plan retrieval algorithm that, by using three key sources of domain knowledge, removes the assumption of a static opponent. Our experiments show that its implementation in the Case-based Tactician (CaT) significantly outperforms the best among a set of genetically evolved plans when tested against random Wargus opponents. CaT communicates with Wargus through TIELT, a testbed for integrating and evaluating decision systems with simulators. This is the first application of TIELT. We describe this application, our lessons learned, and our motivations for future work.", "Case-Based Planning (CBP) is an effective technique for solving planning problems that has the potential to reduce the computational complexity of the generative planning approaches [8,3]. However, the success of plan execution using CBP depends highly on the selection of a correct plan; especially when the case-base of plans is extensive. In this paper we introduce the concept of a situationand explain a situation assessmentalgorithm which improves plan retrieval for CBP. We have applied situation assessment to our previous CBP system, Darmok [11], in the domain of real-time strategy games. During Darmok's execution using situation assessment, the high-level representation of the game state i.e. 
situation is predicted using a decision tree based Situation-Classification model. Situation predicted is further used for the selection of relevant knowledge intensive features, which are derived from the basic representation of the game state, to compute the similarity of cases with the current problem. The feature selection performed here is knowledge based and improves the performance of similarity measurements during plan retrieval. The instantiation of the situation assessment algorithm to Darmok gave us promising results for plan retrieval within the real-time constraints.", "Spatial reasoning is a major challenge for strategy-game artificial intelligence systems. Qualitative spatial reasoning techniques can help overcome this challenge by providing more expressive spatial representations, better communication of intent, better path-finding and reusable strategy libraries." ] }
1211.4552
1516242919
This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, played by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing most of the games' state (not only players' orders). We illustrate one possible usage of this dataset by clustering armies based on their compositions. Reducing army compositions to mixtures of Gaussians allows for strategic reasoning at the level of the mixture components. We evaluated this clustering method by predicting the outcomes of battles from the components of the armies' composition mixtures.
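The Gaussian-mixture clustering of army compositions can be sketched with a minimal spherical-EM fit. This is a simplified stand-in, not the paper's actual model: the two-dimensional toy "compositions", the unit-type interpretation, and the spherical covariance assumption are all invented for illustration.

```python
import numpy as np

def fit_gmm(X, k, iters=100):
    """Minimal EM for a spherical Gaussian mixture model.
    Returns (means, weights, responsibilities)."""
    n, d = X.shape
    # Deterministic farthest-point initialisation of the component means.
    means = [X[0]]
    while len(means) < k:
        d2 = ((X[:, None, :] - np.array(means)[None]) ** 2).sum(-1).min(axis=1)
        means.append(X[d2.argmax()])
    means = np.array(means)
    weights = np.full(k, 1.0 / k)
    var = np.full(k, X.var() + 1e-6)
    for _ in range(iters):
        # E-step: soft assignment of every army to every component.
        sq = ((X[:, None, :] - means[None]) ** 2).sum(-1)
        log_p = -0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var) + np.log(weights)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances and mixing weights.
        nk = resp.sum(axis=0) + 1e-12
        means = (resp.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - means[None]) ** 2).sum(-1)
        var = (resp * sq).sum(axis=0) / (nk * d) + 1e-6
        weights = nk / n
    return means, weights, resp

# Toy "army compositions": fractions of two unit types per army; the two
# clusters stand in for, e.g., ground-heavy vs. air-heavy armies.
rng = np.random.default_rng(1)
armies = np.vstack([
    rng.normal([0.8, 0.2], 0.05, size=(50, 2)),
    rng.normal([0.2, 0.8], 0.05, size=(50, 2)),
])
means, weights, resp = fit_gmm(armies, k=2)
labels = resp.argmax(axis=1)
```

Strategic reasoning then operates on the component labels (or the soft responsibilities) rather than on raw unit counts, e.g. as features for battle-outcome prediction.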
schadd2007opponent ( schadd2007opponent ) describe opponent modeling through hierarchically structured models of the opponent's behavior, applied to the Spring RTS game (an open source Total Annihilation clone). UCT ( UCT ) applied upper confidence bounds on trees (UCT, a Monte-Carlo planning algorithm) to tactical assault planning in Wargus; their tactical abstraction combines units' hit points and locations. In @cite_0 , the build trees of the opponent are predicted a few buildings before they are built. Another approach is to use the gamers' vocabulary of strategies (and openings) to abstract further what strategies represent (a set of states, of sequences, and of intentions), as in @cite_9 @cite_2 . HMMstrat_RTS_AIIDE11 ( HMMstrat_RTS_AIIDE11 ) used a hidden Markov model (HMM) whose states are extracted by (unsupervised) maximum likelihood from a StarCraft dataset. The HMM parameters are learned from unit counts (both buildings and military units) every 30 seconds, and "strategies" are the most frequent sequences of HMM states given the observations.
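Recovering the most likely state sequence of such an HMM from observed unit counts is a Viterbi decode. The sketch below uses a toy two-state model (the "economy"/"military" states, the coarse observation buckets, and all probabilities are invented; the cited work learns states and parameters from data rather than hand-specifying them).

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for a discrete-emission HMM."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            # Best predecessor for state s, then extend with emission of o.
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans_p[p][s]))
            col[s] = V[-1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Hypothetical 2-state model observed through coarse army-size buckets.
states = ["economy", "military"]
start = {"economy": 0.9, "military": 0.1}
trans = {"economy": {"economy": 0.7, "military": 0.3},
         "military": {"economy": 0.2, "military": 0.8}}
emit = {"economy": {"few_army": 0.9, "big_army": 0.1},
        "military": {"few_army": 0.2, "big_army": 0.8}}
obs = ["few_army", "few_army", "big_army", "big_army"]
print(viterbi(obs, states, start, trans, emit))
# ['economy', 'economy', 'military', 'military']
```

Frequent decoded sequences across many replays would then be read off as "strategies" in the sense of the cited work.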
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_2" ], "mid": [ "2106268731", "2144445939", "2083705347" ], "abstract": [ "The task of keyhole (unobtrusive) plan recognition is central to adaptive game AI. \"Tech trees\" or \"build trees\" are the core of real-time strategy (RTS) game strategic (long term) planning. This paper presents a generic and simple Bayesian model for RTS build tree prediction from noisy observations, which parameters are learned from replays (game logs). This unsupervised machine learning approach involves minimal work for the game developers as it leverage players' data (common in RTS). We applied it to StarCraft1 and showed that it yields high quality and robust predictions, that can feed an adaptive AI.", "We present a data mining approach to opponent modeling in strategy games. Expert gameplay is learned by applying machine learning techniques to large collections of game logs. This approach enables domain independent algorithms to acquire domain knowledge and perform opponent modeling. Machine learning algorithms are applied to the task of detecting an opponent's strategy before it is executed and predicting when an opponent will perform strategic actions. Our approach involves encoding game logs as a feature vector representation, where each feature describes when a unit or building type is first produced. We compare our representation to a state lattice representation in perfect and imperfect information environments and the results show that our representation has higher predictive capabilities and is more tolerant of noise. We also discuss how to incorporate our data mining approach into a full game playing agent.", "This paper presents a Bayesian model to predict the opening (first strategy) of opponents in real-time strategy (RTS) games. Our model is general enough to be applied to any RTS game with the canonical gameplay of gathering resources to extend a technology tree and produce military units and we applied it to StarCraft1. 
This model can also predict the possible technology trees of the opponent, but we will focus on openings here. The parameters of this model are learned from replays (game logs), labeled with openings. We present a semi-supervised method of labeling replays with the expectation-maximization algorithm and key features, then we use these labels to learn our parameters and benchmark our method with cross-validation. Uses of such a model range from a commentary assistant (for competitive games) to a core component of a dynamic RTS bot AI, as it will be part of our StarCraft AI competition entry bot." ] }
1211.4929
2395334784
We present a novel summarization framework for reviews of products and services by selecting informative and concise text segments from the reviews. Our method consists of two major steps. First, we identify five frequently occurring variable-length syntactic patterns and use them to extract candidate segments. Then we use the output of a joint generative sentiment topic model to filter out the non-informative segments. We verify the proposed method with quantitative and qualitative experiments. In a quantitative study, our approach outperforms previous methods in producing informative segments and summaries that capture aspects of products and services as expressed in the user-generated pros and cons lists. Our user study with ninety users resonates with this result: individual segments extracted and filtered by our method are rated as more useful by users compared to those produced by previous approaches.
We first look at how text excerpts are extracted from reviews in the existing literature. Previous studies mainly generated aspect-based summaries for products and services by aggregating subjective text excerpts related to each aspect. The excerpts take different forms: sentences @cite_13 , concise phrases composed of a modifier and a head term @cite_19 , adjective-noun pairs extracted based on POS tagging and the term frequency of the pair @cite_9 , and phrases generated by rules @cite_5 . Two limitations of this previous work are that i) it only handled simplistic adjective-noun pairs or specific forms of reviews such as short comments, and ii) experiments were carried out with reviews of services only. Our approach of extracting text segments by matching variable-length linguistic patterns overcomes these shortcomings and generalizes well to free-text reviews of both products and services.
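Matching variable-length POS patterns over a tagged sentence can be sketched as a sliding-window comparison. The patterns, the pre-tagged example sentence, and the Penn Treebank-style tags below are invented for illustration; a real pipeline would run an actual POS tagger and use the five patterns mined by the paper.

```python
def extract_segments(tagged_tokens, patterns=(("JJ", "NN"), ("RB", "JJ", "NN"))):
    """Slide variable-length POS patterns over a pre-tagged sentence and
    return the matching word sequences as candidate opinion segments."""
    words = [w for w, _ in tagged_tokens]
    tags = [t for _, t in tagged_tokens]
    segments = []
    for pat in patterns:
        for i in range(len(tags) - len(pat) + 1):
            if tuple(tags[i:i + len(pat)]) == pat:
                segments.append(" ".join(words[i:i + len(pat)]))
    return segments

# Hypothetical tagged review sentence (Penn Treebank-style tags assumed).
sent = [("the", "DT"), ("very", "RB"), ("friendly", "JJ"),
        ("staff", "NN"), ("served", "VBD"), ("cold", "JJ"), ("food", "NN")]
print(extract_segments(sent))
# ['friendly staff', 'cold food', 'very friendly staff']
```

Longer patterns naturally subsume shorter ones, which is why a subsequent filtering step (here, the sentiment-topic model) is needed to keep only informative candidates.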
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_13", "@cite_5" ], "mid": [ "2145071407", "2153207410", "2160660844", "2154416898" ], "abstract": [ "Web 2.0 technologies have enabled more and more people to freely comment on different kinds of entities (e.g. sellers, products, services). The large scale of information poses the need and challenge of automatic summarization. In many cases, each of the user-generated short comments comes with an overall rating. In this paper, we study the problem of generating a 'rated aspect summary' of short comments, which is a decomposed view of the overall ratings for the major aspects so that a user could gain different perspectives towards the target entity. We formally define the problem and decompose the solution into three steps. We demonstrate the effectiveness of our methods by using eBay sellers' feedback comments. We also quantitatively evaluate each step of our methods and study how well humans agree on such a summarization task. The proposed methods are quite general and can be used to generate rated aspect summary automatically given any collection of short comments each associated with an overall rating.", "Many people read online reviews written by other users to learn more about a product or venue. However, the overwhelming amount of user-generated reviews and variance in length, detail and quality across the reviews make it difficult to glean useful information. In this paper, we present the iterative design of our system, called Review Spotlight. It provides a brief overview of reviews using adjective-noun word pairs, and allows the user to quickly explore the reviews in greater detail. 
Through a laboratory user study which required participants to perform decision making tasks, we showed that participants could form detailed impressions about restaurants and decide between two options significantly faster with Review Spotlight than with traditional review webpages.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. 
Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "This paper presents a parse-and-paraphrase paradigm to assess the degrees of sentiment for product reviews. Sentiment identification has been well studied; however, most previous work provides binary polarities only (positive and negative), and the polarity of sentiment is simply reversed when a negation is detected. The extraction of lexical features such as unigram bigram also complicates the sentiment classification task, as linguistic structure such as implicit long-distance dependency is often disregarded. In this paper, we propose an approach to extracting adverb-adjective-noun phrases based on clause structure obtained by parsing sentences into a hierarchical representation. We also propose a robust general solution for modeling the contribution of adverbials and negation to the score for degree of sentiment. In an application involving extracting aspect-based pros and cons from restaurant reviews, we obtained a 45 relative improvement in recall through the use of parsing methods, while also improving precision." ] }
1211.4929
2395334784
We present a novel summarization framework for reviews of products and services by selecting informative and concise text segments from the reviews. Our method consists of two major steps. First, we identify five frequently occurring variable-length syntactic patterns and use them to extract candidate segments. Then we use the output of a joint generative sentiment topic model to filter out the non-informative segments. We verify the proposed method with quantitative and qualitative experiments. In a quantitative study, our approach outperforms previous methods in producing informative segments and summaries that capture aspects of products and services as expressed in the user-generated pros and cons lists. Our user study with ninety users resonates with this result: individual segments extracted and filtered by our method are rated as more useful by users compared to those produced by previous approaches.
Various methods for selecting informative text fragments have been applied in previous research, such as matching against pre-defined or frequently occurring aspects @cite_14 @cite_13 , frequency-based ranking @cite_9 , and topic models @cite_0 @cite_20 @cite_2 . We are interested in joint sentiment-topic models because they can infer sentiment words that are closely associated with an aspect. The aspect-dependence of sentiment word polarity is an important property, as pointed out in @cite_4 @cite_1 @cite_10 @cite_7 , and recently several joint topic models have been proposed to unify the treatment of sentiment and topic (aspect) @cite_12 @cite_1 @cite_0 @cite_6 . Applications of these models have been limited to sentiment classification of reviews, but we hypothesize that they can also be helpful for summarization. We focus our next discussion on previous joint models in comparison to our proposed model.
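In its simplest reading, filtering candidate segments with a joint model's output amounts to thresholding each segment on its words' aspect/sentiment association scores. The per-word scores below are invented stand-ins for a trained model's posteriors, and the threshold and helper are hypothetical, not the paper's actual procedure.

```python
def is_informative(segment, word_scores, threshold=0.5):
    """Keep a segment only if the average aspect/sentiment association
    score of its words (from some trained joint model) is high enough."""
    words = segment.split()
    score = sum(word_scores.get(w, 0.0) for w in words) / len(words)
    return score >= threshold

# Hypothetical per-word scores, standing in for a joint sentiment-topic
# model's probability that a word is aspect- or sentiment-bearing.
scores = {"friendly": 0.9, "staff": 0.8, "cold": 0.85, "food": 0.8,
          "the": 0.05, "was": 0.05, "there": 0.05}
candidates = ["friendly staff", "cold food", "the was there"]
kept = [s for s in candidates if is_informative(s, scores)]
print(kept)  # ['friendly staff', 'cold food']
```

A real joint model would also tie each kept segment to an aspect and a sentiment label, enabling aspect-grouped summaries rather than a flat list.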
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_10", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "2152571774", "2587281424", "2166706824", "", "2153207410", "2108420397", "2096110600", "2129294185", "2092167320", "2160660844", "2044429219", "2154970197" ], "abstract": [ "Online user reviews are increasingly becoming the de-facto standard for measuring the quality of electronics, restau- rants, merchants, etc. The sheer volume of online reviews makes it difficult for a human to process and extract all meaningful information in order to make an educated pur- chase. As a result, there has been a trend toward systems that can automatically summarize opinions from a set of re- views and display them in an easy to process manner (1, 9). In this paper, we present a system that summarizes the sen- timent of reviews for a local service such as a restaurant or hotel. In particular we focus on aspect-based summarization models (8), where a summary is built by extracting relevant aspects of a service, such as service or value, aggregating the sentiment per aspect, and selecting aspect-relevant text. We describe the details of both the aspect extraction and sentiment detection modules of our system. A novel aspect of these models is that they exploit user provided labels and domain specific characteristics of service reviews to increase quality.", "An intraocular lens for implantation in an eye comprising an optic configured so that the optic can be deformed to permit the intraocular lens to be passed through an incision into the eye. A peripheral zone circumscribes the optical zone of the optic and one or more fixation members coupled to the peripheral zone and extending outwardly from the peripheral zone to retain the optic in the eye are provided. In one embodiment the fixation member or members are located so that the optical zone is free of such member or members. 
The peripheral zone preferably has a maximum axial thickness which is larger than the maximum axial thickness of the periphery of the optical zone.", "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.", "", "Many people read online reviews written by other users to learn more about a product or venue. However, the overwhelming amount of user-generated reviews and variance in length, detail and quality across the reviews make it difficult to glean useful information. In this paper, we present the iterative design of our system, called Review Spotlight. It provides a brief overview of reviews using adjective-noun word pairs, and allows the user to quickly explore the reviews in greater detail. Through a laboratory user study which required participants to perform decision making tasks, we showed that participants could form detailed impressions about restaurants and decide between two options significantly faster with Review Spotlight than with traditional review webpages.", "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment topic model (JST), which detects sentiment and topic simultaneously from text. 
Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.", "In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews [18, 19, 7, 12, 27, 36, 21]. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.", "In this paper, we define the problem of topic-sentiment analysis on Weblogs and propose a novel probabilistic model to capture the mixture of topics and sentiments simultaneously. The proposed Topic-Sentiment Mixture (TSM) model can reveal the latent topical facets in a Weblog collection, the subtopics in the results of an ad hoc query, and their associated sentiments. 
It could also provide general sentiment models that are applicable to any ad hoc topics. With a specifically designed HMM structure, the sentiment models and topic models estimated with TSM can be utilized to extract topic life cycles and sentiment dynamics. Empirical experiments on different Weblog datasets show that this approach is effective for modeling the topic facets and sentiments and extracting their dynamics from Weblog collections. The TSM model is quite general; it can be applied to any text collections with a mixture of topics and sentiments, thus has many potential applications, such as search result summarization, opinion tracking, and user behavior prediction.", "In this paper, we study the aspect-based extractive summarization based on the observations that a good summary should present representative opinions on user concerned sub-aspects within limited words. According to these observations, we argue that, two requirements, i.e. representativeness and diversity, should be considered for generating a good summary in addition to the traditional requirements of aspect-relevance and sentiment intensity. We focus on the intrinsic relationship between sentences and the dependency between extracted sentences for summarization, and thus propose a novel aspect-based summarization method for online reviews, which employs an Aspect-sensitive Markov Random Walk Model to meet the representativeness requirement, as well as a greedy redundancy removal method to meet the diversity requirement. The conducted experiments verify the effectiveness of the proposed method by comparing it with the baselines which ignores representativeness and or diversity. The experimental results also show that, the two requirements we present are both indispensable for a good summary.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. 
As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "User-generated reviews on the Web contain sentiments about detailed aspects of products and services. However, most of the reviews are plain text and thus require much effort to obtain information about relevant details. In this paper, we tackle the problem of automatically discovering what aspects are evaluated in reviews and how sentiments for different aspects are expressed. 
We first propose Sentence-LDA (SLDA), a probabilistic generative model that assumes all words in a single sentence are generated from one aspect. We then extend SLDA to Aspect and Sentiment Unification Model (ASUM), which incorporates aspect and sentiment together to model sentiments toward different aspects. ASUM discovers pairs of aspect, sentiment which we call senti-aspects. We applied SLDA and ASUM to reviews of electronic devices and restaurants. The results show that the aspects discovered by SLDA match evaluative details of the reviews, and the senti-aspects found by ASUM capture important aspects that are closely coupled with a sentiment. The results of sentiment classification show that ASUM outperforms other generative models and comes close to supervised classification methods. One important advantage of ASUM is that it does not require any sentiment labels of the reviews, which are often expensive to obtain.", "Online reviews are often accompanied with numerical ratings provided by users for a set of service or product aspects. We propose a statistical model which is able to discover corresponding topics in text and extract textual evidence from reviews supporting each of these aspect ratings ‐ a fundamental problem in aspect-based sentiment summarization (Hu and Liu, 2004a). Our model achieves high accuracy, without any explicitly labeled data except the user provided opinion ratings. The proposed approach is general and can be used for segmentation in other applications where sequential data is accompanied with correlated signals." ] }
1211.3776
2951692356
Adaptive radio resource allocation is essential for guaranteeing high bandwidth and power utilization, as well as for satisfying heterogeneous Quality-of-Service (QoS) requests, in next-generation broadband multicarrier wireless access networks such as LTE and Mobile WiMAX. We consider a downlink OFDMA single-cell scenario where heterogeneous Constant-Bit-Rate and Best-Effort QoS profiles coexist and the power is uniformly spread over the system bandwidth (a Uniform Power Loading, UPL, scenario). We express this QoS provision scenario in mathematical terms as a variation of the well-known generalized assignment problem from the combinatorial optimization field. Based on this formulation, we propose two heuristic search algorithms, executed at polynomially bounded cost, for dynamically allocating subchannels to the competing QoS classes and users. We also propose an Integer Linear Programming model for optimally solving the same problem, obtaining a performance upper bound at reasonable, though high, execution times. Extensive simulation results show that the proposed algorithms exhibit close-to-optimal performance, making them attractive candidates for implementation in modern OFDMA-based systems.
In @cite_21 the above concepts were thoroughly presented, first for single- and then for multi-antenna wireless mobile systems. In this scheme, each receiver monitors its experienced SNR levels and feeds them back to the base station (BS), while the BS schedules transmissions and adapts users' bit rates depending on the reported channel quality. Similar arguments were raised in @cite_2 , where it was demonstrated that for 2G/3G systems the cellular spectral efficiency may significantly improve, even double under certain conditions, when the BS utilizes per-user channel rate information. In @cite_12 these ideas were extended to multi-channel OFDM wireless systems like the one examined in this paper. The widely used term "opportunistic" bears a strong relation to our work, since we tend to allocate subchannels to users whose channel conditions on them are near their peak (frequency-domain opportunism), while @cite_21 employs a similar policy in the time-domain.
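Frequency-domain opportunism, in its barest form, assigns each subchannel to the user currently reporting the best SNR on it. The sketch below shows only this baseline greedy rule with invented SNR reports; the paper's actual heuristics additionally respect per-class QoS constraints, which this one-liner ignores.

```python
def allocate_subchannels(snr):
    """Opportunistic allocation: give each subchannel to the user
    reporting the best SNR on it. snr[u][c] is user u's SNR on subchannel c."""
    n_users, n_chans = len(snr), len(snr[0])
    return [max(range(n_users), key=lambda u: snr[u][c]) for c in range(n_chans)]

# Hypothetical SNR reports (dB) for 3 users over 4 subchannels.
snr = [
    [12.0,  3.0,  7.5,  1.0],
    [ 5.0, 14.0,  2.0,  9.0],
    [ 8.0,  6.0, 11.0, 10.5],
]
print(allocate_subchannels(snr))  # [0, 1, 2, 2]
```

A QoS-aware heuristic would start from this greedy assignment and then reassign subchannels to users whose Constant-Bit-Rate targets are unmet.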
{ "cite_N": [ "@cite_21", "@cite_12", "@cite_2" ], "mid": [ "2161872841", "2096528583", "2108086149" ], "abstract": [ "Multiuser diversity is a form of diversity inherent in a wireless network, provided by independent time-varying channels across the different users. The diversity benefit is exploited by tracking the channel fluctuations of the users and scheduling transmissions to users when their instantaneous channel quality is near the peak. The diversity gain increases with the dynamic range of the fluctuations and is thus limited in environments with little scattering and or slow fading. In such environments, we propose the use of multiple transmit antennas to induce large and fast channel fluctuations so that multiuser diversity can still be exploited. The scheme can be interpreted as opportunistic beamforming and we show that true beamforming gains can be achieved when there are sufficient users, even though very limited channel feedback is needed. Furthermore, in a cellular system, the scheme plays an additional role of opportunistic nulling of the interference created on users of adjacent cells. We discuss the design implications of implementing. this scheme in a complete wireless system.", "Recently, a lot of research effort has been spent on cross-layer system design. It has been shown that cross-layer mechanisms (i.e., policies) potentially provide significant performance gains for various systems. In this article we review several aspects of cross-layer system optimization regarding wireless OFDM systems. We discuss basic optimization models and present selected heuristic approaches realizing cross-layer policies by means of dynamic resource allocation. Two specific areas are treated separately: models and dynamic approaches for single transmitter receiver pairs (i.e., a point-to-point communication scenario) as well as models and approaches for point-to-multipoint communication scenarios (e.g., the downlink of a wireless cell). 
This article provides basic knowledge in order to investigate future OFDM cross-layer-optimization issues", "Today's cellular systems are designed to achieve 90-95 percent coverage for voice users (i.e., the ratio of signal to interference plus noise must be above a design target over 90 to 95 percent of the cell area). This ensures that the desired data rate which achieves good voice quality can be provided \"everywhere\". As a result, SINRs that are much larger than the target are achieved over a large portion of the cellular coverage area. For a packet data service, the larger SINR can be used to provide higher data rates by reducing coding or spreading and or increasing the constellation density. It is straight-forward to see that cellular spectral efficiency (in terms of b s Hz sector) can be increased by a factor of two or more if users with better links are served at higher data rates. Procedures that exploit this are already in place for all the major cellular standards in the world. In this article, we describe data rate adaptation procedures for CDMA (IS-95), wideband CDMA (cdma2000 and UMTS WCDMA), TDMA (IS-136), and GSM (GPRS and EDGE)." ] }
1211.4041
2067630960
Carrier aggregation (CA) and small cells are two distinct features of next-generation cellular networks. Cellular networks with different types of small cells are often referred to as HetNets. In this paper, we introduce a load-aware model for CA-enabled multi-band HetNets. Under this model, the impact of biasing can be more appropriately characterized; for example, it is observed that with large enough biasing, the spectral efficiency of small cells may increase, while its counterpart in a fully-loaded model always decreases. Further, our analysis reveals that the peak data rate does not depend on the base station density and transmit powers; this strongly motivates other approaches, e.g., CA, to increase the peak data rate. Last but not least, different band deployment configurations are studied and compared. We find that with a large enough small cell density, spatial reuse with small cells outperforms adding more spectrum for increasing user rate. More generally, universal cochannel deployment typically yields the largest rate; thus a capacity loss exists in orthogonal deployment. This performance gap can be reduced by appropriately tuning the HetNet coverage distribution (e.g., by optimizing biasing factors).
Cellular networks are undergoing a major evolution, as current networks cannot keep pace with user demand simply by deploying more macro base stations (BSs) @cite_9 . As a result, attention is shifting to deploying small, inexpensive, low-power nodes within the current macro cells; these low-power nodes may include pico @cite_26 and femto @cite_9 BSs, as well as distributed antennas @cite_21 . Cellular networks containing them take on a very heterogeneous character and are often referred to as HetNets @cite_16 @cite_38 @cite_0 @cite_34 . Due to the heterogeneous and ad hoc deployments common with low-power nodes, the validity of classical models such as the Wyner model @cite_25 or the hexagonal grid @cite_10 for HetNet study becomes questionable @cite_5 .
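The stochastic-geometry alternative to the Wyner and hexagonal-grid models drops base stations as a homogeneous Poisson point process (PPP) and evaluates SINR at random user locations. The sketch below is a bare Monte-Carlo version with invented parameters (density, area, path-loss exponent) and no fading, a single tier, and unit transmit power, so it is far simpler than the analytical multi-tier models in the cited work.

```python
import numpy as np

def simulate_sinr(lam, area, n_users, alpha=4.0, noise=1e-9, seed=0):
    """Drop BSs as a homogeneous PPP of density lam on an area x area square
    and compute each user's SINR when served by the strongest (nearest) BS."""
    rng = np.random.default_rng(seed)
    n_bs = max(int(rng.poisson(lam * area * area)), 1)
    bs = rng.uniform(0.0, area, size=(n_bs, 2))
    users = rng.uniform(0.0, area, size=(n_users, 2))
    d = np.linalg.norm(users[:, None, :] - bs[None, :, :], axis=-1)
    d = np.maximum(d, 1e-3)            # avoid a singular path loss at d=0
    power = d ** (-alpha)              # received power from every BS
    signal = power.max(axis=1)         # strongest BS serves the user
    interference = power.sum(axis=1) - signal
    return signal / (interference + noise)

sinr = simulate_sinr(lam=0.01, area=100.0, n_users=50)
coverage = (sinr > 1.0).mean()         # fraction of users above 0 dB SINR
```

Biasing and multiple tiers would enter by scaling each tier's received powers before the serving-BS selection, which is exactly where the load-aware model of the paper departs from this baseline.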
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_9", "@cite_21", "@cite_34", "@cite_0", "@cite_5", "@cite_16", "@cite_10", "@cite_25" ], "mid": [ "", "2117527647", "2109159967", "2120874535", "", "", "2150166076", "2062157062", "", "2156315971" ], "abstract": [ "", "There is a need to increase the capacity of cellular networks to support the successful growth of mobile broadband usage. Network densification with HetNet (Heterogeneous Networks) is identified as a key method to meet future demands. In this paper one promising HetNet component is studied, the low power pico node. The 3GPP LTE (Long Term Evolution) pico features are described and the deployment aspects are investigated by means of simulations. It is found that with good deployment the capacity increase from low power nodes can be significantly improved. Capturing hot spot traffic has larger impact on performance than radio access features like range extension. Thus, knowing the hot spot cluster positions and ability to deploy the pico node near these positions is important. Also the hotspot positions relative the overlaying macro cells have impact.", "Femtocells, despite their name, pose a potentially large disruption to the carefully planned cellular networks that now connect a majority of the planet's citizens to the Internet and with each other. Femtocells - which by the end of 2010 already outnumbered traditional base stations and at the time of publication are being deployed at a rate of about five million a year - both enhance and interfere with this network in ways that are not yet well understood. Will femtocells be crucial for offloading data and video from the creaking traditional network? Or will femtocells prove more trouble than they are worth, undermining decades of careful base station deployment with unpredictable interference while delivering only limited gains? 
Or possibly neither: are femtocells just a \"flash in the pan\"; an exciting but short-lived stage of network evolution that will be rendered obsolete by improved WiFi offloading, new backhaul regulations and/or pricing, or other unforeseen technological developments? This tutorial article overviews the history of femtocells, demystifies their key aspects, and provides a preview of the next few years, which the authors believe will see a rapid acceleration towards small cell technology. In the course of the article, we also position and introduce the articles that headline this special issue.", "This paper studies the potential benefits of antenna array processing-used in conjunction with adaptive data-rate control-in broadband wireless networks. We focus on distributed antenna arrays, i.e., combining signals from a group of microcells, rather than the more conventional centralized (macrocellular) antenna array processing. We show that distributed arrays promise significant power and capacity gains over centralized arrays. Moreover, we show that even selection combining (though less effective than coherent combining) can be very successful in this architecture, offering a promising tradeoff between performance and complexity.", "", "", "Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage/outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry.
Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.", "This paper provides a high-level overview of some technology components currently considered for the evolution of LTE including complete fulfillment of the IMT-advanced requirements. These technology components include extended spectrum flexibility, multi-antenna solutions, coordinated multipoint transmission/reception, and the use of advanced repeaters/relaying. A simple performance assessment is also included, indicating potential for significantly increased performance.", "", "We obtain Shannon-theoretic limits for a very simple cellular multiple-access system. In our model the received signal at a given cell site is the sum of the signals transmitted from within that cell plus a factor α (0 ≤ α ≤ 1) times the sum of the signals transmitted from the adjacent cells plus ambient Gaussian noise. Although this simple model is scarcely realistic, it nevertheless has enough meat so that the results yield considerable insight into the workings of real systems. We consider both a one dimensional linear cellular array and the familiar two-dimensional hexagonal cellular pattern. The discrete-time channel is memoryless.
We assume that N contiguous cells have active transmitters in the one-dimensional case, and that N² contiguous cells have active transmitters in the two-dimensional case. There are K transmitters per cell. Most of our results are obtained for the limiting case as N → ∞. The results include the following. (1) We define C_N, Ĉ_N as the largest achievable rate per transmitter in the usual Shannon-theoretic sense in the one- and two-dimensional cases, respectively (assuming that all signals are jointly decoded). We find expressions for lim_{N→∞} C_N and lim_{N→∞} Ĉ_N. (2) As the interference parameter α increases from 0, C_N and Ĉ_N increase or decrease according to whether the signal-to-noise ratio is less than or greater than unity. (3) Optimal performance is attainable using TDMA within the cell, but using TDMA for adjacent cells is distinctly suboptimal. (4) We suggest a scheme which does not require joint decoding of all the users, and is, in many cases, close to optimal." ] }
1211.4041
2067630960
Carrier aggregation (CA) and small cells are two distinct features of next-generation cellular networks. Cellular networks with different types of small cells are often referred to as HetNets. In this paper, we introduce a load-aware model for CA-enabled multi-band HetNets. Under this model, the impact of biasing can be more appropriately characterized; for example, it is observed that with large enough biasing, the spectral efficiency of small cells may increase while its counterpart in a fully-loaded model always decreases. Further, our analysis reveals that the peak data rate does not depend on the base station density and transmit powers; this strongly motivates other approaches e.g. CA to increase the peak data rate. Last but not least, different band deployment configurations are studied and compared. We find that with large enough small cell density, spatial reuse with small cells outperforms adding more spectrum for increasing user rate. More generally, universal cochannel deployment typically yields the largest rate; and thus a capacity loss exists in orthogonal deployment. This performance gap can be reduced by appropriately tuning the HetNet coverage distribution (e.g. by optimizing biasing factors).
In the PPP framework, the locations of each type of BS in a HetNet are often modeled by an independent PPP @cite_33 . Due to the tractability of the PPP model, many analytical results such as coverage probability and rate can be obtained @cite_33 @cite_22 @cite_6 ; more interestingly, these analytical results agree fairly well with industry findings obtained by extensive simulations and experiments @cite_27 . As a result, similar models have been further used to optimize HetNet design, including spectrum allocation @cite_18 , load balancing @cite_30 @cite_8 , and spectrum sensing @cite_31 . This encouraging progress motivates us to adopt the PPP model for the CA study in this paper.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_22", "@cite_33", "@cite_8", "@cite_6", "@cite_27", "@cite_31" ], "mid": [ "2005108639", "2109830484", "", "2149170915", "", "", "2101121986", "2040714707" ], "abstract": [ "Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate.", "The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. 
By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities.", "", "Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. 
For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given target SINR, adding more tiers and/or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.", "", "", "The proliferation of internet-connected mobile devices will continue to drive growth in data traffic in an exponential fashion, forcing network operators to dramatically increase the capacity of their networks. To do this cost-effectively, a paradigm shift in cellular network infrastructure deployment is occurring away from traditional (expensive) high-power tower-mounted base stations and towards heterogeneous elements. Examples of heterogeneous elements include microcells, picocells, femtocells, and distributed antenna systems (remote radio heads), which are distinguished by their transmit powers, coverage areas, physical size, backhaul, and propagation characteristics. This shift presents many opportunities for capacity improvement, and many new challenges to co-existence and network management. This article discusses new theoretical models for understanding the heterogeneous cellular networks of tomorrow, and the practical constraints and challenges that operators must tackle in order for these networks to reach their potential.", "In a two-tier heterogeneous network (HetNet) where femto access points (FAPs) with lower transmission power coexist with macro base stations (BSs) with higher transmission power, the FAPs may suffer significant performance degradation due to inter-tier interference.
Introducing cognition into the FAPs through the spectrum sensing (or carrier sensing) capability helps them avoiding severe interference from the macro BSs and enhance their performance. In this paper, we use stochastic geometry to model and analyze performance of HetNets composed of macro BSs and cognitive FAPs in a multichannel environment. The proposed model explicitly accounts for the spatial distribution of the macro BSs, FAPs, and users in a Rayleigh fading environment. We quantify the performance gain in outage probability obtained by introducing cognition into the femto-tier, provide design guidelines, and show the existence of an optimal spectrum sensing threshold for the cognitive FAPs, which depends on the HetNet parameters. We also show that looking into the overall performance of the HetNets is quite misleading in the scenarios where the majority of users are served by the macro BSs. Therefore, the performance of femto-tier needs to be explicitly accounted for and optimized." ] }
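As a hedged illustration of the PPP-based analysis discussed above (nearest-BS association, Rayleigh fading, interference-limited operation), the following sketch estimates the coverage probability of a single-tier network by Monte Carlo. The function name and all parameter values are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_probability(lam=1e-5, alpha=4.0, sinr_db=0.0,
                         region=2000.0, trials=2000):
    """Empirical SINR coverage for a single-tier PPP of BSs.

    The typical user sits at the origin, associates with the nearest
    BS, and every other BS interferes (interference-limited, unit
    transmit power, Rayleigh fading). Parameters are illustrative.
    """
    thresh = 10.0 ** (sinr_db / 10.0)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * region * region)        # PPP point count
        if n < 2:
            continue
        xy = rng.uniform(-region / 2, region / 2, size=(n, 2))
        d = np.hypot(xy[:, 0], xy[:, 1])              # distances to origin
        rx = rng.exponential(size=n) * d ** (-alpha)  # faded received powers
        k = int(np.argmin(d))                         # serving BS (nearest)
        if rx[k] / (rx.sum() - rx[k]) > thresh:
            covered += 1
    return covered / trials

p = coverage_probability()
print(f"empirical coverage at 0 dB: {p:.2f}")
```

For a 0 dB threshold and path loss exponent 4, such an estimate lands near the well-known closed-form value of roughly 0.56 for this setting, which is one reason the PPP model is regarded as accurate.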
1211.4041
2067630960
Carrier aggregation (CA) and small cells are two distinct features of next-generation cellular networks. Cellular networks with different types of small cells are often referred to as HetNets. In this paper, we introduce a load-aware model for CA-enabled multi-band HetNets. Under this model, the impact of biasing can be more appropriately characterized; for example, it is observed that with large enough biasing, the spectral efficiency of small cells may increase while its counterpart in a fully-loaded model always decreases. Further, our analysis reveals that the peak data rate does not depend on the base station density and transmit powers; this strongly motivates other approaches e.g. CA to increase the peak data rate. Last but not least, different band deployment configurations are studied and compared. We find that with large enough small cell density, spatial reuse with small cells outperforms adding more spectrum for increasing user rate. More generally, universal cochannel deployment typically yields the largest rate; and thus a capacity loss exists in orthogonal deployment. This performance gap can be reduced by appropriately tuning the HetNet coverage distribution (e.g. by optimizing biasing factors).
One major concern about deploying small cells is their limited coverage due to low transmit powers. As a result, small cells are often lightly loaded and would not accomplish much without load balancing, while macro cells remain heavily loaded. To alleviate this issue, a simple load balancing approach called biasing has been proposed @cite_16 ; biasing adds a positive offset to the association metric of low power nodes, so that UEs associate with them as if they transmitted at higher powers. As a result, biasing expands the coverage areas of small cells and enables more user equipment (UE) to be served by them @cite_27 @cite_39 @cite_3 . Biasing is thus expected to balance the network load and correspondingly lead to higher throughput. However, a theoretical study of the impact of biasing is challenging. While @cite_20 modeled biasing in a single-band HetNet, it assumes a fully-loaded HetNet, i.e., all the BSs are simultaneously active all the time. A similar fully-loaded model is also used in @cite_30 for offloading study.
{ "cite_N": [ "@cite_30", "@cite_3", "@cite_39", "@cite_27", "@cite_16", "@cite_20" ], "mid": [ "2005108639", "", "", "2101121986", "2062157062", "2034420299" ], "abstract": [ "Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate.", "", "", "The proliferation of internet-connected mobile devices will continue to drive growth in data traffic in an exponential fashion, forcing network operators to dramatically increase the capacity of their networks. To do this cost-effectively, a paradigm shift in cellular network infrastructure deployment is occurring away from traditional (expensive) high-power tower-mounted base stations and towards heterogeneous elements. Examples of heterogeneous elements include microcells, picocells, femtocells, and distributed antenna systems (remote radio heads), which are distinguished by their transmit powers, coverage areas, physical size, backhaul, and propagation characteristics. 
This shift presents many opportunities for capacity improvement, and many new challenges to co-existence and network management. This article discusses new theoretical models for understanding the heterogeneous cellular networks of tomorrow, and the practical constraints and challenges that operators must tackle in order for these networks to reach their potential.", "This paper provides a high-level overview of some technology components currently considered for the evolution of LTE including complete fulfillment of the IMT-advanced requirements. These technology components include extended spectrum flexibility, multi-antenna solutions, coordinated multipoint transmission/reception, and the use of advanced repeaters/relaying. A simple performance assessment is also included, indicating potential for significantly increased performance.", "In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association policies. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a particular transmit power, path loss exponent, spatial density, and bias towards admitting mobile users. For example, as compared to macrocells, picocells would usually have lower transmit power, higher path loss exponent (lower antennas), higher spatial density (many picocells per macrocell), and a positive bias so that macrocell users are actively encouraged to use the more lightly loaded picocells. In the present paper we implicitly assume all base stations have full queues; future work should relax this. For this model, we derive the outage probability of a typical user in the whole network or a certain tier, which is equivalently the downlink SINR cumulative distribution function. The results are accurate for all SINRs, and their expressions admit quite simple closed-forms in some plausible special cases. 
We also derive the average ergodic rate of the typical user, and the minimum average user throughput - the smallest value among the average user throughputs supported by one cell in each tier. We observe that neither the number of BSs or tiers changes the outage probability or average ergodic rate in an interference-limited full-loaded HCN with unbiased cell association (no biasing), and observe how biasing alters the various metrics." ] }
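The biasing (cell range expansion) idea above can be made concrete with a small sketch: during association, a UE weights the average received power of each tier by a bias factor, so a large enough small-cell bias offloads UEs that the unbiased rule would send to the macro tier. The scenario geometry, transmit powers, and bias values below are hypothetical.

```python
import numpy as np

def serving_tier(user, bs, powers, biases, alpha=4.0):
    """Biased cell association: the UE joins the tier whose best BS
    maximizes the bias-weighted average received power
    B_k * P_k * r^-alpha. All numbers here are illustrative."""
    best, best_metric = None, -np.inf
    for tier, pts in bs.items():
        r = np.linalg.norm(pts - user, axis=1)          # UE-BS distances
        m = (biases[tier] * powers[tier] * r ** -alpha).max()
        if m > best_metric:
            best, best_metric = tier, m
    return best

# One macro BS at the origin, one pico BS 500 m away; the UE at 300 m
# is closer to the pico but macro-associated without bias.
bs = {"macro": np.array([[0.0, 0.0]]), "pico": np.array([[500.0, 0.0]])}
powers = {"macro": 40.0, "pico": 1.0}                   # watts (illustrative)
user = np.array([300.0, 0.0])

unbiased = serving_tier(user, bs, powers, {"macro": 1.0, "pico": 1.0})
biased = serving_tier(user, bs, powers, {"macro": 1.0, "pico": 100.0})
print(unbiased, "->", biased)  # the 20 dB pico bias offloads the UE
```

With no bias the macro's 40x power advantage wins despite the longer link; a 20 dB pico bias flips the association, which is exactly the range-expansion effect described above.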
1211.4041
2067630960
Carrier aggregation (CA) and small cells are two distinct features of next-generation cellular networks. Cellular networks with different types of small cells are often referred to as HetNets. In this paper, we introduce a load-aware model for CA-enabled multi-band HetNets. Under this model, the impact of biasing can be more appropriately characterized; for example, it is observed that with large enough biasing, the spectral efficiency of small cells may increase while its counterpart in a fully-loaded model always decreases. Further, our analysis reveals that the peak data rate does not depend on the base station density and transmit powers; this strongly motivates other approaches e.g. CA to increase the peak data rate. Last but not least, different band deployment configurations are studied and compared. We find that with large enough small cell density, spatial reuse with small cells outperforms adding more spectrum for increasing user rate. More generally, universal cochannel deployment typically yields the largest rate; and thus a capacity loss exists in orthogonal deployment. This performance gap can be reduced by appropriately tuning the HetNet coverage distribution (e.g. by optimizing biasing factors).
Sum rate is an important metric in wireless networks, particularly in CA-enabled HetNets. Unfortunately, from the sum-rate perspective, biasing does not help in a fully-loaded network. Thus, in order to properly examine the effect of biasing on the sum rate of CA-enabled HetNets, an appropriate notion of BS load is needed. While @cite_4 proposed a load-aware model in which low power nodes can be less active than macro BSs over time, it focuses on a single-band HetNet and is thus not sufficient for studying CA, which essentially involves multi-band modeling and analysis. Therefore, the main goal of this paper is to propose a multi-band HetNet model with an appropriate notion of load. With the proposed model, we theoretically examine the impact of biasing and study how to deploy the available bands in a HetNet to best exploit CA.
{ "cite_N": [ "@cite_4" ], "mid": [ "2057540419" ], "abstract": [ "Random spatial models are attractive for modeling heterogeneous cellular networks (HCNs) due to their realism, tractability, and scalability. A major limitation of such models to date in the context of HCNs is the neglect of network traffic and load: all base stations (BSs) have typically been assumed to always be transmitting. Small cells in particular will have a lighter load than macrocells, and so their contribution to the network interference may be significantly overstated in a fully loaded model. This paper incorporates a flexible notion of BS load by introducing a new idea of conditionally thinning the interference field. For a K-tier HCN where BSs across tiers differ in terms of transmit power, supported data rate, deployment density, and now load, we derive the coverage probability for a typical mobile, which connects to the strongest BS signal. Conditioned on this connection, the interfering BSs of the i^ th tier are assumed to transmit independently with probability p_i, which models the load. Assuming — reasonably — that smaller cells are more lightly loaded than macrocells, the analysis shows that adding such access points to the network always increases the coverage probability. We also observe that fully loaded models are quite pessimistic in terms of coverage." ] }
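A minimal sketch of the conditional-thinning idea behind the load-aware model cited above: interfering small cells are independently active with some probability, so a lightly loaded tier contributes proportionally less interference than a fully-loaded model predicts. The density, activity factor, and guard radius below are illustrative assumptions; the same BS draws are reused for both cases so thinning only removes interferers.

```python
import numpy as np

rng = np.random.default_rng(1)

def interference(p_active=0.2, lam=2e-5, alpha=4.0, region=2000.0,
                 trials=200, r_min=10.0):
    """Mean interference at the origin from a PPP of small cells:
    fully loaded vs. each cell independently active with probability
    p_active (conditional thinning). Parameters are illustrative."""
    full = light = 0.0
    for _ in range(trials):
        n = rng.poisson(lam * region * region)          # PPP point count
        xy = rng.uniform(-region / 2, region / 2, size=(n, 2))
        d = np.maximum(np.hypot(xy[:, 0], xy[:, 1]), r_min)  # clamp near field
        pw = d ** -alpha                                # unit-power path loss
        full += pw.sum()                                # every BS transmits
        light += pw[rng.random(n) < p_active].sum()     # thinned interferers
    return full / trials, light / trials

full, light = interference()
print(f"fully loaded: {full:.3e}, 20% active: {light:.3e}")
```

Because the thinned interferer set is a subset of the full one in every realization, the lightly loaded average is strictly smaller, matching the observation that fully-loaded models overstate small-cell interference.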
1211.3006
1778067221
Wireless sensor networks are normally characterized by resource-challenged nodes. Since communication is the dominant energy cost in these networks, minimizing this overhead is important. We consider minimum-length node scheduling in regular multi-hop wireless sensor networks. We present collision-free decentralized scheduling algorithms based on TDMA with spatial reuse that do not use message passing, thus saving communication overhead. We develop the algorithms using a graph-based k-hop interference model and show that the schedule complexity in regular networks is independent of the number of nodes and varies quadratically with k, which is typically a very small number. We then characterize feasibility regions in the SINR parameter space where the constant complexity continues to hold while simultaneously satisfying the SINR criteria. Using simulation, we evaluate the efficiency of our solution on random network deployments.
In one of the earliest works on regular wireless networks, Silvester and Kleinrock investigated the capacity of multi-hop regular-topology ALOHA networks @cite_23 . Recently, Mergen and Tong extended this work and analyzed the capacity of regular networks @cite_16 . In @cite_15 , Mangharam and Rajkumar present a MAC protocol called MAX for square-grid networks with regular node placement. All of these works assume that the transmission and interference ranges are the same, an assumption that we relax in this paper. In related previous work, the author presented a distributed algorithm for convergecast in hexagonal networks @cite_8 and investigated the feasibility of hexagonal backbone formation in sensor network deployments @cite_9 .
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_23", "@cite_15", "@cite_16" ], "mid": [ "2105749622", "2114860728", "2112735737", "1981391230", "2101064555" ], "abstract": [ "Since wireless ad-hoc networks use shared communication medium, accesses to the medium must be coordinated to avoid packet collisions. Transmission scheduling algorithms allocate time slots to the nodes of a network such that if the nodes transmit only during the allocated time slots, no collision occurs. For real-time applications, by ensuring deterministic channel access, transmission scheduling algorithms have the added significance of making guarantees on transmission latency possible. In this paper we present a distributed transmission scheduling algorithm for hexagonal wireless ad-hoc networks with a particular focus on Wireless Sensor Networks. Afforded by the techniques of ad-hoc networks topology control, hexagonal meshes enable trivial addressing and routing protocols. Our transmission scheduling algorithm constructs network-wide conflict-free packet transmission schedule for hexagonal networks, where the overhead of schedule construction in terms of message exchanges is zero above and beyond that for topology control and other network control related functions. Furthermore, the schedule is optimal in the sense that the bottleneck node does not idle. We also present an implicit clock synchronization algorithm to facilitate scheduling. We derive the real time capacity of our scheduling algorithm. We present evaluations of our scheduling algorithm in the presence of topological irregularities using simulation.", "Hexagonal wireless sensor network refers to a network topology where a subset of nodes have six peer neighbors. These nodes form a backbone for multi-hop communications. In a previous work, we proposed the use of hexagonal topology in wireless sensor networks and discussed its properties in relation to real-time (bounded latency) multi-hop communications in large-scale deployments. 
In that work, we did not consider the problem of hexagonal topology formation in practice — which is the subject of this research. In this paper, we present a decentralized algorithm that forms the hexagonal topology backbone in an arbitrary but sufficiently dense network deployment. We implemented a prototype of our algorithm in NesC for TinyOS-based platforms. We present data from field tests of our implementation, collected using a deployment of fifty wireless sensor nodes.", "In this paper we investigate the capacity of networks with a regular structure operating under the slotted ALOHA access protocol. We first consider circular (loop) and linear (bus) networks and then proceed to two-dimensional networks. For one-dimensional networks we find that the capacity is basically independent of the network average degree and is almost constant with respect to network size. For two-dimensional networks we find that the capacity grows in proportion to the square root of the number of nodes in the network provided that the average degree is kept small. Furthermore, we find that reducing the average degree (with certain connectivity restrictions) allows a higher throughput to be achieved. We also investigate some of the peculiarities of routing in these networks.", "Multi-hop wireless networks facilitate applications in metropolitan area broadband, home multimedia, surveillance and industrial control networks. Many of these applications require high end-to-end throughput and/or bounded delay. Random access link-layer protocols such as carrier sense multiple access (CSMA) which are widely used in single-hop networks perform poorly in the multi-hop regime and provide no end-to-end QoS guarantees. The primary causes for their poor performance are uncoordinated interference and unfairness in exclusive access of the shared wireless medium. Furthermore, random access schemes do not leverage spatial reuse effectively and require routes to be link-aware. 
In this paper, we propose and study MAX, a time-division-multiplexed resource allocation framework for multi-hop networks with regular topologies. MAX tiling delivers optimal end-to-end throughput across arbitrarily large regularly structured networks while providing bounded delay. It outperforms CSMA-based random access protocols by a factor of 5 to 8. The MAX approach also supports network services including flexible uplink and downlink bandwidth management, deterministic route admission control, and optimal gateway placement. MAX has been implemented on IEEE 802.15.3 embedded nodes and a test-bed of 50 nodes has been deployed both indoors and outdoors.", "We study the stability and capacity problems in regular wireless networks. In the first part of the paper, we provide a general approach to characterizing the capacity region of arbitrary networks, find an outer bound to the capacity region in terms of the transport capacity, and discuss connections between the capacity formulation and the stability of node buffers. In the second part of the paper, we obtain closed-form expressions for the capacity of Manhattan (two-dimensional grid) and ring networks (circular array of nodes). We also find the optimal (i.e., capacity-achieving) medium access and routing policies. Our objective in analyzing regular networks is to provide insights and design guidelines for general networks. The knowledge of the exact capacity enables us to quantify the loss incurred by suboptimal protocols such as slotted ALOHA medium access and random-walk-based routing. Optimal connectivity and the effects of link fading on network capacity are also investigated." ] }
1211.3006
1778067221
Wireless sensor networks are normally characterized by resource-challenged nodes. Since communication costs the most in terms of energy in these networks, minimizing this overhead is important. We consider minimum length node scheduling in regular multi-hop wireless sensor networks. We present collision-free decentralized scheduling algorithms based on TDMA with spatial reuse that do not use message passing, thus saving communication overhead. We develop the algorithms using a graph-based k-hop interference model and show that the schedule complexity in regular networks is independent of the number of nodes and varies quadratically with k, which is typically a very small number. We follow it by characterizing feasibility regions in the SINR parameter space where the constant complexity continues to hold while simultaneously satisfying the SINR criteria. Using simulation, we evaluate the efficiency of our solution on random network deployments.
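The abstract's claim that the schedule length depends only quadratically on k, and not on network size, can be illustrated with a small sketch for a square grid under a graph-based k-hop interference model. This is an illustrative toy, not the paper's decentralized, message-free algorithm; the grid layout and the modular slot formula are assumptions made here:

```python
def grid_tdma_slots(width, height, k):
    """Assign TDMA slots on a width x height grid so that no two nodes
    within k hops (Manhattan distance <= k) share a slot.

    Nodes sharing a slot agree on both coordinates mod (k+1), so their
    Manhattan distance is at least k+1.  The schedule length is
    (k+1)**2 slots, independent of the grid size.
    """
    period = k + 1
    return {(x, y): (x % period) * period + (y % period)
            for x in range(width) for y in range(height)}
```

For example, with k = 2 a 10x10 grid needs only (2+1)^2 = 9 slots, and doubling the grid size leaves the slot count unchanged, matching the constant-complexity claim.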
McDiarmid and Reed showed that the chromatic number of general hexagonal graphs is bounded by 4/3 times the clique number @cite_13 . This result suggests that our hexagonal scheduling algorithm, the complexity of which is also bounded by 4/3 times the clique number, might indeed be very close to optimal. A 4/3-approximate distributed channel assignment algorithm that uses several rounds of message passing is presented in @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_13" ], "mid": [ "2074982499", "2082873662" ], "abstract": [ "A cellular network is generally modeled as a subgraph of the triangular lattice. In the static frequency assignment problem, each vertex of the graph is a base station in the network, and has associated with it an integer weight that represents the number of calls that must be served at the vertex by assigning distinct frequencies per call. The edges of the graph model interference constraints for frequencies assigned to neighboring stations. The static frequency assignment problem can be abstracted as a graph multicoloring problem. We describe an efficient algorithm to multicolor optimally any weighted even or odd length cycle representing a cellular network. This result is further extended to any outerplanar graph. For the problem of multicoloring an arbitrary connected subgraph of the triangular lattice, we demonstrate an approximation algorithm which guarantees that no more than 4/3 times the minimum number of required colors are used. Further, we show that this algorithm can be implemented in a distributed manner, where each station needs to have knowledge only of the weights at a small neighborhood.", "In cellular telephone networks, sets of radio channels (colors) must be assigned to transmitters (vertices) while avoiding interference. Often, the transmitters are laid out like vertices of a triangular lattice in the plane. We investigated the corresponding weighted coloring problem of assigning sets of colors to vertices of the triangular lattice so that the sets of colors assigned to adjacent vertices are disjoint. We present a hardness result and an efficient algorithm yielding an approximate solution." ] }
1211.3006
1778067221
Wireless sensor networks are normally characterized by resource-challenged nodes. Since communication costs the most in terms of energy in these networks, minimizing this overhead is important. We consider minimum length node scheduling in regular multi-hop wireless sensor networks. We present collision-free decentralized scheduling algorithms based on TDMA with spatial reuse that do not use message passing, thus saving communication overhead. We develop the algorithms using a graph-based k-hop interference model and show that the schedule complexity in regular networks is independent of the number of nodes and varies quadratically with k, which is typically a very small number. We follow it by characterizing feasibility regions in the SINR parameter space where the constant complexity continues to hold while simultaneously satisfying the SINR criteria. Using simulation, we evaluate the efficiency of our solution on random network deployments.
The intractability of the minimum length scheduling problem has been established for graph-based models in @cite_1 @cite_6 and for the SINR model in @cite_7 . The scheduling algorithm in @cite_7 , as well as those in several other papers @cite_20 @cite_22 , is centralized. In @cite_21 @cite_17 , SINR-based distributed scheduling algorithms are presented. These algorithms, however, are not collision-free and their performance guarantees are probabilistic. In @cite_24 , a heuristic is presented, but no formal analysis is offered. Gronkvist presents slot stealing strategies in @cite_4 . Power control strategies for cellular networks are presented in @cite_14 @cite_0 .
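The physical (SINR) model discussed above admits a direct feasibility check: a set of simultaneously active links fits in one slot only if every receiver's signal-to-interference-plus-noise ratio clears a threshold. A minimal sketch, where uniform transmit power, the path-loss exponent alpha, and the noise and threshold values are all assumptions chosen for illustration:

```python
import math

def sinr_feasible(links, beta=10.0, alpha=4.0, power=1.0, noise=1e-9):
    """Check whether a set of links can be active in the same slot under
    the physical (SINR) interference model.

    Each link is a ((tx_x, tx_y), (rx_x, rx_y)) pair.  A link set is
    feasible if, at every receiver, the intended signal divided by noise
    plus the summed interference from all other transmitters is >= beta.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for i, (tx_i, rx_i) in enumerate(links):
        signal = power * dist(tx_i, rx_i) ** -alpha
        interference = sum(power * dist(tx_j, rx_i) ** -alpha
                           for j, (tx_j, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True
```

Two short links placed far apart pass this check, while the same links placed next to each other fail it, which is exactly the spatial-reuse trade-off the scheduling papers above optimize over.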
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_7", "@cite_21", "@cite_1", "@cite_6", "@cite_24", "@cite_0", "@cite_20", "@cite_17" ], "mid": [ "2020346433", "1524480268", "2116021438", "2155730370", "2152723361", "2068155033", "", "1976236778", "2168258456", "2111209182", "" ], "abstract": [ "", "Spatial reuse TDMA (STDMA) is a collision-free access scheme for ad hoc networks. The idea is to let spatially separated radio terminals reuse the same time slot when the resulting interferences are not too severe. In this paper we first describe the properties a distributed STDMA algorithm must have in order to be efficient. No existing algorithm fulfills all of these properties. Second we focus on how to efficiently use distributed information and describe an algorithm that can handle different amount of information. Furthermore, we evaluate this algorithm for different information and show that it can give the same capacity as a centralized reference algorithm.", "In this paper, we revisit the problem of determining the minimum-length schedule that satisfies certain traffic demands in a wireless network. Traditional approaches for the determination of minimum-length schedules are based on a collision channel model, in which neighboring transmissions cause destructive interference if and only if they are within the \"interference region\" of the receiving nodes. By contrast, we adopt here a more realistic model for the physical layer by requiring that a threshold be exceeded by the signal-to-interference-plus-noise ratio (SINR) for a transmission to be successful. We present a novel formulation of the problem that incorporates various power and rate adaptation schemes while seamlessly integrating the generation of \"matchings\" (i.e., sets of links that can be activated simultaneously) by taking into consideration the SINR constraints at the receivers. 
For the formulated problem, we propose a column-generation-based solution method and show that it theoretically converges to a globally optimal solution, with a potential advantage of not having to enumerate all the feasible matchings a priori. We also discuss the influence of power control, spatial reuse, and variable transmission rates on network performance. Furthermore, we include aspects of the routing problem and provide computational results for our proposed column-generation-based solution procedure.", "In this paper we study the problem of scheduling wireless links in the geometric SINR model, which explicitly uses the fact that nodes are distributed in the Euclidean plane. We present the first NP-completeness proofs in such a model. In particular, we prove two problems to be NP-complete: Scheduling and One-Shot Scheduling. The first problem consists in finding a minimum-length schedule for a given set of links. The second problem receives a weighted set of links as input and consists in finding a maximum-weight subset of links to be scheduled simultaneously in one shot. In addition to the complexity proofs, we devise an approximation algorithm for each problem.", "We present and analyze simple distributed contention resolution protocols for wireless networks. In our setting, one is given n pairs of senders and receivers located in a metric space. Each sender wants to transmit a signal to its receiver at a prespecified power level, e. g., all senders use the same, uniform power level as it is typically implemented in practice. Our analysis is based on the physical model in which the success of a transmission depends on the Signal-to-Interference-plus-Noise-Ratio (SINR). The objective is to minimize the number of time slots until all signals are successfully transmitted. 
Our main technical contribution is the introduction of a measure called maximum average affectance enabling us to analyze random contention-resolution algorithms in which each packet is transmitted in each step with a fixed probability depending on the maximum average affectance. We prove that the schedule generated this way is only an O(log^2 n) factor longer than the optimal one, provided that the prespecified power levels satisfy natural monotonicity properties. By modifying the algorithm, senders need not know the maximum average affectance in advance but only static information about the network. In addition, we extend our approach to multi-hop communication achieving the same approximation factor.", "", "", "The problem of developing high-performance distributed scheduling algorithms for multi-hop wireless networks has seen enormous interest in recent years. The problem is especially challenging when studied under a physical interference model, which requires the SINR at the receiver to be above a certain threshold for decoding success. Under such an SINR model, transmission failure may be caused by interference due to simultaneous transmissions from far away nodes, which exacerbates the difficulty in developing a distributed algorithm. In this paper, we propose a scheduling algorithm that exploits carrier sensing and show that the algorithm is not only amenable to distributed implementation, but also results in throughput optimality. Our algorithm has a feature called the \"dual-state\" approach, which separates the transmission schedules from the system state and can be shown to improve delay performance.", "", "In this paper, we consider the classical problem of link scheduling in wireless networks under an accurate interference model, in which correct packet reception at a receiver node depends on the signal-to-interference-plus-noise ratio (SINR). 
While most previous work on wireless networks has addressed the scheduling problem using simplistic graph-based or distance-based interference models, a few recent papers have investigated scheduling with SINR-based interference models. However, these papers have either used approximations to the SINR model or have ignored important aspects of the problem. We study the problem of wireless link scheduling under the exact SINR model and present the first known true approximation algorithms for transmission scheduling under the exact model. We also introduce an algorithm with a proven approximation bound with respect to the length of the optimal schedule under primary interference. As an aside, our study identifies a class of \"difficult to schedule\" links, which hinder the derivation of tighter approximation bounds. Furthermore, we characterize conditions under which scheduling under SINR-based interference is within a constant factor from optimal under primary interference, which implies that secondary interference only degrades performance by a constant factor in these situations.", "" ] }
1211.2620
2155021036
In Requirements Engineering, requirements elicitation aims at the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders' implicit assumptions about the system-to-be and its environment. The purpose of doing so is to identify omissions in, and conflicts between, requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list for types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.
In their seminal paper on the four dark corners of RE, Zave and Jackson established a core ontology for RE, which describes the important concepts to be accounted for in RE @cite_16 . In doing so, they suggest that information about domain assumptions, requirements and specifications of the system-to-be must be collected, documented and analysed in order for RE to be successful. The authors of @cite_14 broaden this ontology by suggesting that any communicated information is relevant to consider as part of the requirements problem. In other words, any information that is explicitly addressed to the engineer is relevant to consider. The present paper argues that both explicit and implicit information are relevant to consider.
{ "cite_N": [ "@cite_14", "@cite_16" ], "mid": [ "2148347120", "2113435553" ], "abstract": [ "In their seminal paper ACM T. Softw. Eng. Methodol., 6(1), 1997, 1--30, Zave and Jackson established a core ontology for Requirements Engineering (RE) and used it to formulate the “requirements problem”, thereby defining what it means to successfully complete RE. Starting from the premise that the stakeholders of the system-to-be communicate to the software engineer the information needed to perform RE, Zave and Jackson's ontology is shown to be incomplete, in that it does not cover all classes of basic concerns -- namely, the beliefs, desires, intentions, and evaluations -- that the stakeholders communicate. In response, we provide a new core ontology for requirements that covers these classes of basic stakeholder concerns. The proposed new core ontology leads to a new formulation of the requirements problem. We thereby establish a new framework for the information that needs to be elicited over the course of RE and new criteria for determining whether an RE problem has been successfully addressed.", "Research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. This article shines some light in the “four dark corners,” exposing problems and proposing solutions. We show that all descriptions involved in requirements engineering should be descriptions of the environment. We show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. Together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. They establish minimum standards for what information should be represented in a requirements language. 
They also make it possible to determine exactly what it means for requirements engineering to be successfully completed." ] }
1211.2620
2155021036
In Requirements Engineering, requirements elicitation aims at the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders' implicit assumptions about the system-to-be and its environment. The purpose of doing so is to identify omissions in, and conflicts between, requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list for types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.
One common way to identify requirements is the goal-oriented approach, in which engineers should understand the why of a system before defining the what @cite_4 . Engineers should therefore try to capture the intentions of stakeholders for the system-to-be. Various methods exist to capture such information @cite_12 , some of which focus on the decision-making process of stakeholders @cite_30 . In this paper, we complement such contributions using the more formal theory of default logic.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_12" ], "mid": [ "2050721897", "2151451947", "" ], "abstract": [ "The requirements engineering (RE) process is a decision-rich complex problem solving activity. This paper examines the elements of organization-oriented macro decisions as well as process-oriented micro decisions in the RE process and illustrates how to integrate classical decision-making models with RE process models. This integration helps in formulating a common vocabulary and model to improve the manageability of the RE process, and contributes towards the learning process by validating and verifying the consistency of decision-making in RE activities.", "Goals capture, at different levels of abstraction, the various objectives the system under consideration should achieve. Goal-oriented requirements engineering is concerned with the use of goals for eliciting, elaborating, structuring, specifying, analyzing, negotiating, documenting, and modifying requirements. This area has received increasing attention. The paper reviews various research efforts undertaken along this line of research. The arguments in favor of goal orientation are first briefly discussed. The paper then compares the main approaches to goal modeling, goal specification and goal-based reasoning in the many activities of the requirements engineering process. To make the discussion more concrete, a real case study is used to suggest what a goal-oriented requirements engineering method may look like. Experience, with such approaches and tool support are briefly discussed as well.", "" ] }
1211.2620
2155021036
In Requirements Engineering, requirements elicitation aims at the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders' implicit assumptions about the system-to-be and its environment. The purpose of doing so is to identify omissions in, and conflicts between, requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list for types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.
There has been limited attention regarding the question of context. Yet, context as a source of information is not new. Many papers propose high-level discussions about context in RE: Potts and Hsi @cite_7 emphasize the existence of contextualism -- as opposed to abstractionism -- as a possible alternative design philosophy for information systems. Viller and Sommerville @cite_21 propose discussions about how ethnography is value-added to RE, thereby broadening the scope of RE context to culture questions. Beyer and Holtzblatt propose the Contextual Design model @cite_31 , which increases the scope of relevant information to any data about the customer. These works illustrate the trend to include ever more data in the scope of RE-relevant information. Cohene and Easterbrook @cite_9 discuss a topic closer to what we address in this paper. They suggest that elicitation techniques that are used in an interview should be adapted to fit the kind of information engineers are trying to find, i.e. adapt the elicitation technique to the situation -- or context.
{ "cite_N": [ "@cite_9", "@cite_31", "@cite_21", "@cite_7" ], "mid": [ "2147295172", "", "1941671609", "2115203384" ], "abstract": [ "Interviews with stakeholders can be a useful method for identifying user needs and establishing requirements. However, interviews are also problematic. They are time consuming and may result in insufficient, irrelevant or invalid data. Our goal is to re-examine the methodology of interview design, to determine how various contextual factors affect the success of interviews in requirements engineering. We present a case study of a Web conferencing system used by a support group for spousal caregivers of people with dementia. Two sets of interviews were conducted to identify requirements for a new version of the system. Both sets of interviews had the same information elicitation goals, but each used different interview tactics. A comparison of the participants' responses to each format offers insights into the relationship between the interview context and the relative success of each interview technique for eliciting the desired information. As a result of what we learned, we propose a framework to help analysts design interviews and chose tactics based on the context of the elicitation process. We call this the contextual risk analysis framework.", "", "Over a number of years, we have been involved in investigations into using workplace observation to inform requirements for complex systems. This paper discusses how our work has evolved from ethnography with prototyping through presentation of ethnographic fieldwork, to developing a method for social analysis that has been derived from our experience of applying ethnographic techniques. We discuss the strengths and weaknesses of each of these approaches with a particular focus on our most recent work in developing the Coherence method. 
This method is based on a fusion of viewpoint-oriented and ethnographic approaches to requirements engineering and uses an industry-standard notation (UML) to represent knowledge of work. We use a common example of an air traffic control system to illustrate each approach.", "The field of requirements engineering emerges out of tradition of research and engineering practice that stresses the importance of generalizations and abstractions. Although abstraction is essential to design it also has its dark side. By abstracting away from the context of an investigation, the designer too easily lapses into modeling only those things that are easy to model. The subtleties, special cases, interpretations and concrete features of the context of use are smoothed over in the rush to capture the essence of the requirements. Often, however, what is left out is essential to understanding stakeholders' needs. In contrast, approaches that stress context at the expense of abstraction may lead to floundering or to short-term customer satisfaction at the expense of long-term fragility of the system. What is needed is a synthesis of these two approaches: a synthesis that recognizes the complementary values of abstraction and context in requirements engineering and that does not relegate either one to a background role. Such a synthesis requires us not only to adopt new methods in practice but also to rethink our underlying assumptions about what requirements models are models of and what it means to validate them." ] }
1211.2620
2155021036
In Requirements Engineering, requirements elicitation aims at the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders' implicit assumptions about the system-to-be and its environment. The purpose of doing so is to identify omissions in, and conflicts between, requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list for types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.
While previous works highlight how valuable information about context is to RE, we find only few papers proposing a structured definition of context that is adapted to RE. One of them is the paper of @cite_0 , which proposes a method for requirements analysis that aims to account for individual, personal goals and the effect of time and context on requirements. They suggest a list of aspects to deal with but, to the best of our knowledge, perform no empirical validation. The RE community seems to agree on the importance of further research on the link between context and RE. Cheng and Atlee @cite_27 stress the importance of context and of empirical validation of RE models as a direction for future research to accelerate the transfer of research results into RE practice.
{ "cite_N": [ "@cite_0", "@cite_27" ], "mid": [ "2104817109", "2109105084" ], "abstract": [ "A method for requirements analysis is proposed that accounts for individual and personal goals, and the effect of time and context on personal requirements. First a framework to analyse the issues inherent in requirements that change over time and location is proposed. The implications of the framework on system architecture are considered as three implementation pathways: functional specifications, development of customisable features and automatic adaptation by the system. These pathways imply the need to analyse system architecture requirements. A scenario-based analysis method is described for specifying requirements goals and their potential change. The method addresses goal setting for measurement and monitoring, and conflict resolution when requirements at different layers (group, individual) and from different sources (personal, advice from an external authority) conflict. The method links requirements analysis to design by modelling alternative solution pathways. Different implementation pathways have cost–benefit implications for stakeholders, so cost–benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system. The first case study illustrates personal requirements to help cognitively disabled users communicate via e-mail, while the second addresses personal and mobile requirements to help disabled users make journeys on their own, assisted by a mobile PDA guide. In both case studies the experience from requirements analysis to implementation, requirements monitoring, and requirements evolution is reported.", "In this paper, we review current requirements engineering (RE) research and identify future research directions suggested by emerging software needs. 
First, we overview the state of the art in RE research. The research is considered with respect to technologies developed to address specific requirements tasks, such as elicitation, modeling, and analysis. Such a review enables us to identify mature areas of research, as well as areas that warrant further investigation. Next, we review several strategies for performing and extending RE research results, to help delineate the scope of future research directions. Finally, we highlight what we consider to be the \"hot\" current and future research topics, which aim to address RE needs for emerging systems of the future." ] }
1211.2858
1515943088
Maintenance is a dominant component of software cost, and localizing reported defects is a significant component of maintenance. We propose a scalable approach that leverages the natural language present in both defect reports and source code to identify files that are potentially related to the defect in question. Our technique is language-independent and does not require test cases. The approach represents reports and code as separate structured documents and ranks source files based on a document similarity metric that leverages inter-document relationships. We evaluate the fault-localization accuracy of our method against both lightweight baseline techniques and also reported results from state-of-the-art tools. In an empirical evaluation of 5345 historical defects from programs totaling 6.5 million lines of code, our approach reduced the number of files inspected per defect by over 91%. Additionally, we qualitatively and quantitatively examine the utility of the textual and surface features used by our approach.
Ashok et al. propose a similar natural language search technique in which users can match an incoming report to previous reports, programmers and source code @cite_14 . By comparison, our technique is more lightweight and focuses only on searching the code and the defect report.
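Lightweight natural-language matching of the kind described above is commonly built on TF-IDF weighting and cosine similarity between a defect report and each source file. The sketch below is a generic illustration of that idea, not the authors' exact metric (which also leverages inter-document relationships); tokenization is assumed to have happened upstream:

```python
import math
from collections import Counter

def rank_files(report, files):
    """Rank source files by TF-IDF cosine similarity to a defect report.

    `report` is a list of tokens; `files` maps filename -> token list.
    Returns filenames sorted from most to least similar to the report.
    """
    n = len(files)
    # Document frequency: in how many files does each token appear?
    df = Counter()
    for toks in files.values():
        df.update(set(toks))

    def idf(t):
        # Smoothed inverse document frequency.
        return math.log((n + 1) / (df[t] + 1)) + 1

    def vec(toks):
        tf = Counter(toks)
        return {t: c * idf(t) for t, c in tf.items()}

    def cos(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(report)
    return sorted(files, key=lambda f: cos(q, vec(files[f])), reverse=True)
```

A report containing tokens that also occur in a file's identifiers and comments pushes that file to the top of the ranking, which is the core of search-based defect localization.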
{ "cite_N": [ "@cite_14" ], "mid": [ "2101645128" ], "abstract": [ "In large software development projects, when a programmer is assigned a bug to fix, she typically spends a lot of time searching (in an ad-hoc manner) for instances from the past where similar bugs have been debugged, analyzed and resolved. Systematic search tools that allow the programmer to express the context of the current bug, and search through diverse data repositories associated with large projects, can greatly improve the productivity of debugging. This paper presents the design, implementation and experience from such a search tool called DebugAdvisor. The context of a bug includes all the information a programmer has about the bug, including natural language text, textual rendering of core dumps, debugger output etc. Our key insight is to allow the programmer to collate this entire context as a query to search for related information. Thus, DebugAdvisor allows the programmer to search using a fat query, which could be kilobytes of structured and unstructured data describing the contextual information for the current bug. Information retrieval in the presence of fat queries and variegated data repositories, all of which contain a mix of structured and unstructured data, is a challenging problem. We present novel ideas to solve this problem. We have deployed DebugAdvisor to over 100 users inside Microsoft. In addition to standard metrics such as precision and recall, we present extensive qualitative and quantitative feedback from our users." ] }
1211.2858
1515943088
Maintenance is a dominant component of software cost, and localizing reported defects is a significant component of maintenance. We propose a scalable approach that leverages the natural language present in both defect reports and source code to identify files that are potentially related to the defect in question. Our technique is language-independent and does not require test cases. The approach represents reports and code as separate structured documents and ranks source files based on a document similarity metric that leverages inter-document relationships. We evaluate the fault-localization accuracy of our method against both lightweight baseline techniques and also reported results from state-of-the-art tools. In an empirical evaluation of 5345 historical defects from programs totaling 6.5 million lines of code, our approach reduced the number of files inspected per defect by over 91%. Additionally, we qualitatively and quantitatively examine the utility of the textual and surface features used by our approach.
Jones et al. developed Tarantula, a technique that performs fault localization based on the insight that statements executed often during failed test cases likely account for potential fault locations @cite_36 . Similarly, Renieris and Reiss use a "nearest neighbor" technique in their Whither tool to identify faults based on exposing differences between faulty and non-faulty runs that take very similar execution paths @cite_29 . These approaches are quite effective when a rich, indicative test suite is available and can be run as part of the fault localization process. They thus require the fault-inducing input but not any natural language defect report. By contrast, our approach is lightweight and does not require an indicative test suite or fault-inducing input, but does require a natural language defect report. Both approaches yield comparable performance, and could even be used in tandem.
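Tarantula's suspiciousness score is conventionally computed from how often each statement is covered by failing versus passing tests; a statement covered mostly by failing runs scores near 1. A minimal sketch of that metric (the coverage and outcome encodings are assumptions of this example):

```python
def tarantula(coverage, outcomes):
    """Compute Tarantula suspiciousness scores.

    `coverage` maps statement -> set of test ids that execute it;
    `outcomes` maps test id -> True if the test passed, False if it failed.
    Suspiciousness = (fail%) / (pass% + fail%), where pass%/fail% are the
    fractions of passing/failing tests that cover the statement.
    """
    total_pass = sum(1 for t in outcomes if outcomes[t])
    total_fail = len(outcomes) - total_pass
    scores = {}
    for stmt, covering in coverage.items():
        p = sum(1 for t in covering if outcomes[t]) / max(total_pass, 1)
        f = sum(1 for t in covering if not outcomes[t]) / max(total_fail, 1)
        scores[stmt] = f / (p + f) if p + f else 0.0
    return scores
```

Statements are then inspected in decreasing order of suspiciousness, which is how the ranking-based evaluations in the studies above are set up.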
{ "cite_N": [ "@cite_36", "@cite_29" ], "mid": [ "2101819268", "2166007208" ], "abstract": [ "The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.", "We present a method for performing fault localization using similar program spectra. Our method assumes the existence of a faulty run and a larger number of correct runs. It then selects according to a distance criterion the correct run that most resembles the faulty run, compares the spectra corresponding to these two runs, and produces a report of \"suspicious\" parts of the program. Our method is widely applicable because it does not require any knowledge of the program input and no more information from the user than a classification of the runs as either \"correct\" or \"faulty\". 
To experimentally validate the viability of the method, we implemented it in a tool, WHITHER, using basic block profiling spectra. We experimented with two different similarity measures and the Siemens suite of 132 programs with injected bugs. To measure the success of the tool, we developed a generic method for establishing the quality of a report. The method is based on the way an \"ideal user\" would navigate the program using the report to save effort during debugging. The best results we obtained were, on average, above 50%, meaning that our ideal user would avoid looking at half of the program." ] }
1211.2858
1515943088
Maintenance is a dominant component of software cost, and localizing reported defects is a significant component of maintenance. We propose a scalable approach that leverages the natural language present in both defect reports and source code to identify files that are potentially related to the defect in question. Our technique is language-independent and does not require test cases. The approach represents reports and code as separate structured documents and ranks source files based on a document similarity metric that leverages inter-document relationships. We evaluate the fault-localization accuracy of our method against both lightweight baseline techniques and also reported results from state-of-the-art tools. In an empirical evaluation of 5345 historical defects from programs totaling 6.5 million lines of code, our approach reduced the number of files inspected per defect by over 91%. Additionally, we qualitatively and quantitatively examine the utility of the textual and surface features used by our approach.
Cleve and Zeller localize faults by finding differences between correct and failing program execution states, limiting the scope of their search to only the variables and values of interest to the fault in question @cite_27 . Notably, they focus on the variables and values that are relevant to the failure, and on the program execution points where transitions occur and those variables become causes of failure. Their approach is in a strong sense finer-grained than ours: while nothing prevents our technique from being applied at the level of methods instead of files, their technique can give very precise information such as "the transition to failure happened when @math became 2." Our approach is lighter-weight and does not require that the program be run, but it does require defect reports.
{ "cite_N": [ "@cite_27" ], "mid": [ "2036196659" ], "abstract": [ "Which is the defect that causes a software failure? By comparing the program states of a failing and a passing run, we can identify the state differences that cause the failure. However, these state differences can occur all over the program run. Therefore, we focus in space on those variables and values that are relevant for the failure, and in time on those moments where cause transitions occur - moments where new relevant variables begin being failure causes: \"Initially, variable argc was 3; therefore, at shell-sort(), variable a[2] was 0, and therefore, the program failed.\" In our evaluation, cause transitions locate the failure-inducing defect twice as well as the best methods known so far." ] }
1211.2858
1515943088
Maintenance is a dominant component of software cost, and localizing reported defects is a significant component of maintenance. We propose a scalable approach that leverages the natural language present in both defect reports and source code to identify files that are potentially related to the defect in question. Our technique is language-independent and does not require test cases. The approach represents reports and code as separate structured documents and ranks source files based on a document similarity metric that leverages inter-document relationships. We evaluate the fault-localization accuracy of our method against both lightweight baseline techniques and also reported results from state-of-the-art tools. In an empirical evaluation of 5345 historical defects from programs totaling 6.5 million lines of code, our approach reduced the number of files inspected per defect by over 91%. Additionally, we qualitatively and quantitatively examine the utility of the textual and surface features used by our approach.
Liblit et al. use Cooperative Bug Isolation, a statistical approach that isolates multiple defects within a program given a deployed user base. By analyzing large amounts of execution data collected from real users, they can successfully differentiate between different causes of faults in failing software @cite_39 . Their technique produces a ranked list of very specific fault localizations (e.g., "the fault occurs when @math on line 57"). In general, their technique can produce more precise results than ours, but it requires a set of deployed users and works best on defects experienced by many users. By contrast, we do not require that the program be runnable, much less deployed, and use only natural language defect report text.
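The core statistic behind such statistical isolation can be sketched in a few lines. This is a hedged illustration of the "Increase" score used in statistical debugging; the predicates and run counts are invented, and the real system additionally uses sensitivity terms and statistical confidence thresholds:

```python
# Hedged sketch of the "Increase" statistic from statistical debugging:
# how much more often do runs fail when predicate P is observed true,
# compared to runs that merely reach P? All run counts are invented.

def increase(fail_true, succ_true, fail_obs, succ_obs):
    """Increase(P) = Failure(P) - Context(P)."""
    failure = fail_true / (fail_true + succ_true)   # P observed true
    context = fail_obs / (fail_obs + succ_obs)      # P merely observed
    return failure - context

# predicate -> (fails with P true, successes with P true,
#               fails with P observed, successes with P observed)
predicates = {
    "x > 0 at line 57": (40, 2, 45, 400),   # strongly predicts failure
    "ptr == NULL":      (5, 50, 45, 400),   # no better than baseline
}
ranked = sorted(predicates, key=lambda p: increase(*predicates[p]),
                reverse=True)
print(ranked[0])  # the predicate most predictive of failure
```

Subtracting the context probability is what separates predicates that genuinely predict failure from predicates that merely appear on every failing path.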
{ "cite_N": [ "@cite_39" ], "mid": [ "2162376048" ], "abstract": [ "We present a statistical debugging algorithm that isolates bugs in programs containing multiple undiagnosed bugs. Earlier statistical algorithms that focus solely on identifying predictors that correlate with program failure perform poorly when there are multiple bugs. Our new technique separates the effects of different bugs and identifies predictors that are associated with individual bugs. These predictors reveal both the circumstances under which bugs occur as well as the frequencies of failure modes, making it easier to prioritize debugging efforts. Our algorithm is validated using several case studies, including examples in which the algorithm identified previously unknown, significant crashing bugs in widely used systems." ] }
1211.2858
1515943088
Maintenance is a dominant component of software cost, and localizing reported defects is a significant component of maintenance. We propose a scalable approach that leverages the natural language present in both defect reports and source code to identify files that are potentially related to the defect in question. Our technique is language-independent and does not require test cases. The approach represents reports and code as separate structured documents and ranks source files based on a document similarity metric that leverages inter-document relationships. We evaluate the fault-localization accuracy of our method against both lightweight baseline techniques and also reported results from state-of-the-art tools. In an empirical evaluation of 5345 historical defects from programs totaling 6.5 million lines of code, our approach reduced the number of files inspected per defect by over 91%. Additionally, we qualitatively and quantitatively examine the utility of the textual and surface features used by our approach.
Jalbert and Weimer @cite_13 and Runeson et al. @cite_17 have successfully detected duplicate defect reports by utilizing natural language processing techniques. We share with these techniques a common natural language architecture (e.g., frequency vectors, TF-IDF, etc.). We differ from these approaches by adapting the overall idea of document similarity to work across document formats (i.e., both structured defect reports and also program source code) and by tackling fault localization.
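That shared architecture can be sketched directly. The file names and toy corpus below are invented, and real systems add stemming, stop-word removal, and smoothing; the sketch only shows TF-IDF weighting plus cosine similarity between a defect report and candidate source files:

```python
# Minimal sketch of the shared TF-IDF architecture (toy corpus invented):
# score each source file by cosine similarity between its term-frequency
# vector and the defect report's, weighting terms by inverse document freq.
import math
from collections import Counter

def tfidf_vector(tokens, idf):
    tf = Counter(tokens)
    return {t: tf[t] * idf.get(t, 0.0) for t in tf}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

files = {
    "Parser.java": "parse token stream syntax error recover".split(),
    "Render.java": "draw widget layout paint screen".split(),
}
report = "syntax error while parsing token stream".split()

docs = list(files.values()) + [report]
idf = {t: math.log(len(docs) / sum(t in d for d in docs))
       for d in docs for t in d}

scores = {name: cosine(tfidf_vector(report, idf), tfidf_vector(toks, idf))
          for name, toks in files.items()}
best = max(scores, key=scores.get)
print(best)  # Parser.java ranks first
```

Note that without stemming, "parse" and "parsing" do not match; the overlap on "syntax", "error", "token", and "stream" is what ranks Parser.java first.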
{ "cite_N": [ "@cite_13", "@cite_17" ], "mid": [ "2125587588", "2165022036" ], "abstract": [ "Bug tracking systems are important tools that guide the maintenance activities of software developers. The utility of these systems is hampered by an excessive number of duplicate bug reports-in some projects as many as a quarter of all reports are duplicates. Developers must manually identify duplicate bug reports, but this identification process is time-consuming and exacerbates the already high cost of software maintenance. We propose a system that automatically classifies duplicate bug reports as they arrive to save developer time. This system uses surface features, textual semantics, and graph clustering to predict duplicate status. Using a dataset of 29,000 bug reports from the Mozilla project, we perform experiments that include a simulation of a real-time bug reporting environment. Our system is able to reduce development cost by filtering out 8% of duplicate bug reports while allowing at least one report for each real defect to reach developers.", "Defect reports are generated from various testing and development activities in software engineering. Sometimes two reports are submitted that describe the same problem, leading to duplicate reports. These reports are mostly written in structured natural language, and as such, it is hard to compare two reports for similarity with formal methods. In order to identify duplicates, we investigate using natural language processing (NLP) techniques to support the identification. A prototype tool is developed and evaluated in a case study analyzing defect reports at Sony Ericsson mobile communications. The evaluation shows that about 2/3 of the duplicates can possibly be found using the NLP techniques. Different variants of the techniques provide only minor result differences, indicating a robust technology. User testing shows that the overall attitude towards the technique is positive and that it has a growth potential." ] }
1211.1784
2008669237
The paper concerns lattice triangulations, that is, triangulations of the integer points in a polygon in @math whose vertices are also integer points. Lattice triangulations have been studied extensively both as geometric objects in their own right and by virtue of applications in algebraic geometry. Our focus is on random triangulations in which a triangulation @math has weight @math , where @math is a positive real parameter, and @math is the total length of the edges in @math . Empirically, this model exhibits a "phase transition" at @math (corresponding to the uniform distribution): for @math very large regions of aligned edges appear. We substantiate this picture as follows. For @math we show that the mixing time is exponential. These are apparently the first rigorous quantitative results on the structure and dynamics of random lattice triangulations.
The literature on structural properties and Glauber dynamics of lattice spin systems is too vast to summarize here. We refer the reader to the standard references @cite_19 for structural properties such as spatial mixing, and @cite_1 for mixing times of Glauber dynamics. As explained earlier, while triangulations may be viewed as a spin system, their geometry is very different from that of a traditional spin system on the lattice; our paper can be seen as a first step towards obtaining structural and mixing time results for triangulations analogous to those for classical spin systems.
{ "cite_N": [ "@cite_19", "@cite_1" ], "mid": [ "1556987501", "2279676320" ], "abstract": [ "A state-of-the-art survey of both classical and quantum lattice gas models, this two-volume work will cover the rigorous mathematical studies of such models as the Ising and Heisenberg, an area in which scientists have made enormous strides during the past twenty-five years. This first volume addresses, among many topics, the mathematical background on convexity and Choquet theory, and presents an exhaustive study of the pressure including the Onsager solution of the two-dimensional Ising model, a study of the general theory of states in classical and quantum spin systems, and a study of high and low temperature expansions. The second volume will deal with the Peierls construction, infrared bounds, Lee-Yang theorems, and correlation inequality.This comprehensive work will be a useful reference not only to scientists working in mathematical statistical mechanics but also to those in related disciplines such as probability theory, chemical physics, and quantum field theory. It can also serve as a textbook for advanced graduate students.Originally published in 1993.The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These paperback editions preserve the original texts of these important books while presenting them in durable paperback editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.", "These notes have been the subject of a course I gave in the summer 1997 for the school in probability theory in Saint-Flour. 
I review in a self-contained way the state of the art, sometimes providing new and simpler proofs of the most relevant results, of the theory of Glauber dynamics for classical lattice spin models of statistical mechanics. The material covers the dynamics in the one phase region, in the presence of boundary phase transitions, in the phase coexistence region for the two dimensional Ising model and in the so-called Griffiths phase for random Systems." ] }
1211.1784
2008669237
The paper concerns lattice triangulations, that is, triangulations of the integer points in a polygon in @math whose vertices are also integer points. Lattice triangulations have been studied extensively both as geometric objects in their own right and by virtue of applications in algebraic geometry. Our focus is on random triangulations in which a triangulation @math has weight @math , where @math is a positive real parameter, and @math is the total length of the edges in @math . Empirically, this model exhibits a "phase transition" at @math (corresponding to the uniform distribution): for @math very large regions of aligned edges appear. We substantiate this picture as follows. For @math we show that the mixing time is exponential. These are apparently the first rigorous quantitative results on the structure and dynamics of random lattice triangulations.
We mention finally that our mixing time result for the special case of 1-dimensional regions with @math is related to the work of Greenberg, Pascoe and Randall @cite_21 on lattice paths. Those authors use a similar path coupling argument, but for a different probability distribution on lattice paths: in their model, paths are biased according to the area under the path, while in ours the bias depends on the excursions of the path from a fixed line.
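The area-biased lattice-path model can be simulated with a very small Metropolis chain. The sketch below is a generic illustration under our own conventions, not the exact chain analyzed in the cited work: a corner flip on a monotonic path changes the area under the path by ±1 and is accepted with probability min(1, λ^Δ):

```python
# Generic Metropolis sketch for area-biased monotonic lattice paths,
# pi(path) proportional to lambda^(area under path); an illustration only,
# not the chain analyzed in the cited work.
import random

def area(path):
    """Cells under the path: each right-step contributes its current height."""
    cells, height = 0, 0
    for s in path:            # 0 = right step, 1 = up step
        if s == 1:
            height += 1
        else:
            cells += height
    return cells

def step(path, lam, rng):
    """Pick a random position; if it is a corner, propose flipping it."""
    i = rng.randrange(len(path) - 1)
    if path[i] == path[i + 1]:
        return                # not a corner: nothing to flip
    # flipping (right, up) -> (up, right) adds one cell under the path
    delta = 1 if (path[i], path[i + 1]) == (0, 1) else -1
    if rng.random() < min(1.0, lam ** delta):
        path[i], path[i + 1] = path[i + 1], path[i]

rng = random.Random(0)
path = [0] * 5 + [1] * 5          # minimal-area staircase corner, area 0
for _ in range(20000):
    step(path, lam=2.0, rng=rng)  # lambda > 1 biases toward large area
print(area(path))                 # typically close to the maximum of 25
```

With λ > 1 the chain drifts toward the maximal-area path, illustrating the bias; the excursion-based weighting in our model would replace the Δ-area computation in `step`.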
{ "cite_N": [ "@cite_21" ], "mid": [ "1544652657" ], "abstract": [ "Monotonic surfaces spanning finite regions of Zd arise in many contexts, including DNA-based self-assembly, card-shuffling and lozenge tilings. We explore how we can sample these surfaces when the distribution is biased to favor higher surfaces. We show that a natural local chain is rapidly mixing with any bias for regions in Z2, and for bias λ > d2 in Zd, when d > 2. Moreover, our bounds on the mixing time are optimal on d-dimensional hyper-cubic regions. The proof uses a geometric distance function and introduces a variant of path coupling in order to handle distances that are exponentially large." ] }
1211.1041
1670485642
We consider a fundamental problem in unsupervised learning called robust subspace recovery: given a collection of @math points in @math , if many but not necessarily all of these points are contained in a @math -dimensional subspace @math , can we find it? The points contained in @math are called inliers and the remaining points are outliers. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds @math when it contains more than a @math fraction of the points. Hence, for say @math this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers. We prove that it is Small Set Expansion hard to find @math when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here.
We note that much of the recent work from statistics and machine learning has focused on a setting where one posits a distributional model that generates both the inliers and outliers and the goal is to recover the subspace @math with high probability. For example, see the recent work of @cite_32 and @cite_37 and references therein. In principle, our work is not directly comparable to these models since our results are not contingent on any one distributional model. Yet in some of these probabilistic models (e.g. in @cite_37 ) the probability that a point is chosen from the subspace @math is larger than @math in which case Condition is satisfied with high probability and hence our algorithm succeeds in these cases too.
{ "cite_N": [ "@cite_37", "@cite_32" ], "mid": [ "2165952088", "2139054653" ], "abstract": [ "Section 1 of the paper contains a general discussion of robustness. In Section 2 the influence function of the Hampel-Rousseeuw least median of squares estimator is derived. Linearly invariant weak metrics are constructed in Section 3. It is shown in Section 4 that @math -estimators satisfy an exact Holder condition of order 1/2 at models with normal errors. In Section 5 the breakdown points of the Hampel-Krasker dispersion and regression functionals are shown to be 0. The exact breakdown point of the Krasker-Welsch dispersion functional is obtained as well as bounds for the corresponding regression functional. Section 6 contains the construction of a linearly equivariant, high breakdown and locally Lipschitz dispersion functional for any design distribution. In Section 7 it is shown that there is no inherent contradiction between efficiency and a high breakdown point. Section 8 contains a linearly equivariant, high breakdown regression functional which is Lipschitz continuous at models with normal errors.", "This paper considers the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower dimensional planes. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are nor do we have any information about their dimensions. We develop a novel geometric analysis of an algorithm named sparse subspace clustering (SSC) [11], which significantly broadens the range of problems where it is provably effective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also prove that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. 
Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. A numerical study complements our theoretical analysis and demonstrates the effectiveness of these methods." ] }
1211.1041
1670485642
We consider a fundamental problem in unsupervised learning called robust subspace recovery: given a collection of @math points in @math , if many but not necessarily all of these points are contained in a @math -dimensional subspace @math , can we find it? The points contained in @math are called inliers and the remaining points are outliers. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds @math when it contains more than a @math fraction of the points. Hence, for say @math this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers. We prove that it is Small Set Expansion hard to find @math when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here.
The above discussion has focused on notions of robustness that allow an adversary to corrupt a constant fraction of the entries in the matrix @math . However, this is only one possible definition of what it means for an estimator to be robust to noise. For example, principal component analysis can be seen as finding a @math -dimensional subspace that minimizes the sum of squared distances to the data points. A number of works have proposed modifications to this objective function (along with approximation algorithms) in the hope that the resulting objective is more robust. As an example, @cite_27 gave a @math approximation algorithm for the problem of finding a subspace that minimizes the sum of @math distances to the data points (for @math ). Another example is the recent work of @cite_6 , which gives a constant factor approximation for finding a @math -dimensional subspace that maximizes the sum of Euclidean lengths of the projections of the data points (instead of the sum of squared lengths). Lastly, we mention that @cite_33 gave a geometric definition of an outlier (one that does not depend on a hidden subspace @math ) and gave an optimal algorithm for removing outliers according to this definition.
{ "cite_N": [ "@cite_27", "@cite_33", "@cite_6" ], "mid": [ "", "1976986203", "2050991582" ], "abstract": [ "", "We study the problem of finding an outlier-free subset of a set of points (or a probability distribution) in n-dimensional Euclidean space. As in [BFKV 99], a point x is defined to be a β-outlier if there exists some direction w in which its squared distance from the mean along w is greater than β times the average squared distance from the mean along w. Our main theorem is that for any e > 0, there exists a (1 - e) fraction of the original distribution that has no O(n e(b + logn e))-outliers, improving on the previous bound of O(n7b e). This is asymptotically the best possible, as shown by a matching lower bound. The theorem is constructive, and results in a 1 1-e approximation to the following optimization problem: given a distribution µ (i.e. the ability to sample from it), and a parameter e > 0, find the minimum β for which there exists a subset of probability at least (1 - e) with no β-outliers.", "The classical Grothendieck inequality has applications to the design of approximation algorithms for NP-hard optimization problems. We show that an algorithmic interpretation may also be given for a noncommutative generalization of the Grothendieck inequality due to Pisier and Haagerup. Our main result, an efficient rounding procedure for this inequality, leads to a constant-factor polynomial time approximation algorithm for an optimization problem which generalizes the Cut Norm problem of Frieze and Kannan, and is shown here to have additional applications to robust principle component analysis and the orthogonal Procrustes problem." ] }
1211.1080
1870111070
A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions.
Copy-protection. In software copy-protection @cite_20 , a program can be evaluated (a possibly unlimited number of times), but it should be impossible for the program to be "split" or "copied" into parts allowing separate executions. As with OTPs, copy-protection cannot be achieved by software means only. OTPs provide a hardware solution by enforcing that the program be run only once. However, the more interesting question is whether quantum information alone (with computational assumptions) can provide a solution. Aaronson @cite_20 has proposed such schemes based on plausible, but non-standard, cryptographic assumptions. It is an open problem whether quantum copy-protection can be based on standard assumptions. In contrast, the security of quantum OTPs is based on simple OTMs; it could be beneficial to study quantum copy-protection in light of our result.
{ "cite_N": [ "@cite_20" ], "mid": [ "1968293918" ], "abstract": [ "We show how to construct quantum gate arrays that can be programmed to perform different unitary operations on a data register, depending on the input to some program register. It is shown that a universal quantum gate array a gate array which can be programmed to perform any unitary operation exists only if one allows the gate array to operate in a probabilistic fashion. Thus it is not possible to build a fixed, general purpose quantum computer which can be programmed to perform an arbitrary quantum computation." ] }
1211.1080
1870111070
A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions.
Program obfuscation. A related but different task is program obfuscation, in which the receiver should not be able to efficiently "learn" anything from the description of the program that he could not also efficiently learn from the input-output behaviour of the program. In the case of classical information, it is known that secure program obfuscation is not possible in the plain model @cite_7 . As with copy-protection, OTPs provide a hardware solution by enforcing that the obfuscated program can be run only a limited number of times. Again, the more interesting question is whether quantum information alone (with computational assumptions) can provide a solution; the impossibility proof for obfuscation breaks down in the quantum case due to the no-cloning theorem. It is an open problem whether there is a way to substitute the assumption of secure hardware in our main possibility result with a computational assumption in order to realize quantum program obfuscation.
{ "cite_N": [ "@cite_7" ], "mid": [ "2084641398" ], "abstract": [ "Informally, an obfuscator O is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice's theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that O(P) is a “virtual black box,” in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of efficient programs P that are unobfuscatable in the sense that (a) given any efficient program P' that computes the same function as a program P ∈ p, the “source code” P can be efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P ∈ p, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random) except with negligible probability. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation (TC0). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families." ] }
1211.0947
1620174887
How to recognise whether an observed person walks or runs? We consider a dynamic environment where observations (e.g. the posture of a person) are caused by different dynamic processes (walking or running) which are active one at a time and which may transition from one to another at any time. For this setup, switching dynamic models have been suggested previously, mostly, for linear and nonlinear dynamics in discrete time. Motivated by basic principles of computations in the brain (dynamic, internal models) we suggest a model for switching nonlinear differential equations. The switching process in the model is implemented by a Hopfield network and we use parametric dynamic movement primitives to represent arbitrary rhythmic motions. The model generates observed dynamics by linearly interpolating the primitives weighted by the switching variables and it is constructed such that standard filtering algorithms can be applied. In two experiments with synthetic planar motion and a human motion capture data set we show that inference with the unscented Kalman filter can successfully discriminate several dynamic processes online.
Switching dynamic models are well-established in statistics @cite_15 @cite_9 , signal processing @cite_3 and machine learning @cite_4 @cite_2 @cite_21 . In contrast to these models, we define both the dynamic models and the switching variables using nonlinear dynamical systems with continuous states running in continuous time. This allows us to link our model more easily to computations implemented in an analogue biological substrate such as the brain. Additionally, a formulation in continuous time allows us to easily perform time-rescaling of dynamical systems @cite_10 .
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_21", "@cite_3", "@cite_2", "@cite_15", "@cite_10" ], "mid": [ "2102716594", "2022023686", "2140890425", "", "2110575115", "605692607", "2123967136" ], "abstract": [ "We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series models— hidden Markov models and linear dynamical systems—and is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.", "In treating dynamic systems, sequential Monte Carlo methods use discrete samples to represent a complicated probability distribution and use rejection sampling, importance sampling and weighted resampling to complete the on-line ‘filtering’ task. We propose a special sequential Monte Carlo method, the mixture Kalman filter, which uses a random mixture of the Gaussian distributions to approximate a target distribution. 
It is designed for on-line estimation and prediction of conditional and partial conditional dynamic linear models, which are themselves a class of widely used non-linear systems and also serve to approximate many others. Compared with a few available filtering methods including Monte Carlo methods, the gain in efficiency that is provided by the mixture Kalman filter can be very substantial. Another contribution of the paper is the formulation of many non-linear systems into conditional or partial conditional linear form, to which the mixture Kalman filter can be applied. Examples in target tracking and digital communications are given to demonstrate the procedures proposed.", "Condition monitoring often involves the analysis of systems with hidden factors that switch between different modes of operation in some way. Given a sequence of observations, the task is to infer the filtering distribution of the switch setting at each time step. In this paper, we present factorial switching linear dynamical systems as a general framework for handling such problems. We show how domain knowledge and learning can be successfully combined in this framework, and introduce a new factor (the ldquoX-factorrdquo) for dealing with unmodeled variation. We demonstrate the flexibility of this type of model by applying it to the problem of monitoring the condition of a premature baby receiving intensive care. The state of health of a baby cannot be observed directly, but different underlying factors are associated with particular patterns of physiological measurements and artifacts. We have explicit knowledge of common factors and use the X-factor to model novel patterns which are clinically significant but have unknown cause. Experimental results are given which show the developed methods to be effective on typical intensive care unit monitoring data.", "", "", "WINNER OF THE 2007 DEGROOT PRIZE! @PARASPLIT The prominence of finite mixture modelling is greater than ever. 
Many important statistical topics like clustering data, outlier treatment, or dealing with unobserved heterogeneity involve finite mixture models in some way or other. The area of potential applications goes beyond simple data analysis and extends to regression analysis and to non-linear time series analysis using Markov switching models. @PARASPLIT For more than the hundred years since Karl Pearson showed in 1894 how to estimate the five parameters of a mixture of two normal distributions using the method of moments, statistical inference for finite mixture models has been a challenge to everybody who deals with them. In the past ten years, very powerful computational tools emerged for dealing with these models which combine a Bayesian approach with recent Monte simulation techniques based on Markov chains. This book reviews these techniques and covers the most recent advances in the field, among them bridge sampling techniques and reversible jump Markov chain Monte Carlo methods. @PARASPLIT It is the first time that the Bayesian perspective of finite mixture modelling is systematically presented in book form. It is argued that the Bayesian approach provides much insight in this context and is easily implemented in practice. Although the main focus is on Bayesian inference, the author reviews several frequentist techniques, especially selecting the number of components of a finite mixture model, and discusses some of their shortcomings compared to the Bayesian approach. @PARASPLIT The aim of this book is to impart the finite mixture and Markov switching approach to statistical modelling to a wide-ranging community. This includes not only statisticians, but also biologists, economists, engineers, financial agents, market researcher, medical researchers or any other frequent user of statistical models. 
This book should help newcomers to the field to understand how finite mixture and Markov switching models are formulated, what structures they imply on the data, what they could be used for, and how they are estimated. Researchers familiar with the subject also will profit from reading this book. The presentation is rather informal without abandoning mathematical correctness. Previous notions of Bayesian inference and Monte Carlo simulation are useful but not needed.", "Many control problems take place in continuous state-action spaces, e.g., as in manipulator robotics, where the control objective is often defined as finding a desired trajectory that reaches a particular goal state. While reinforcement learning offers a theoretical framework to learn such control policies from scratch, its applicability to higher dimensional continuous state-action spaces remains rather limited to date. Instead of learning from scratch, in this paper we suggest to learn a desired complex control policy by transforming an existing simple canonical control policy. For this purpose, we represent canonical policies in terms of differential equations with well-defined attractor properties. By nonlinearly transforming the canonical attractor dynamics using techniques from nonparametric regression, almost arbitrary new nonlinear policies can be generated without losing the stability properties of the canonical system. We demonstrate our techniques in the context of learning a set of movement skills for a humanoid robot from demonstrations of a human teacher. Policies are acquired rapidly, and, due to the properties of well formulated differential equations, can be re-used and modified on-line under dynamic changes of the environment. The linear parameterization of nonparametric regression moreover lends itself to recognize and classify previously learned movement skills. 
Evaluations in simulations and on an actual 30 degree-of-freedom humanoid robot exemplify the feasibility and robustness of our approach." ] }
1211.0947
1620174887
How to recognise whether an observed person walks or runs? We consider a dynamic environment where observations (e.g. the posture of a person) are caused by different dynamic processes (walking or running) which are active one at a time and which may transition from one to another at any time. For this setup, switching dynamic models have been suggested previously, mostly, for linear and nonlinear dynamics in discrete time. Motivated by basic principles of computations in the brain (dynamic, internal models) we suggest a model for switching nonlinear differential equations. The switching process in the model is implemented by a Hopfield network and we use parametric dynamic movement primitives to represent arbitrary rhythmic motions. The model generates observed dynamics by linearly interpolating the primitives weighted by the switching variables and it is constructed such that standard filtering algorithms can be applied. In two experiments with synthetic planar motion and a human motion capture data set we show that inference with the unscented Kalman filter can successfully discriminate several dynamic processes online.
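The abstract above describes nonlinear movement primitives that are linearly interpolated by continuous switching variables. A minimal numerical sketch of that idea follows; the two primitives here are just Hopf-like oscillators at different frequencies standing in for "walking" and "running" (they are our own illustrative choice, not the paper's learnt dynamic movement primitives), and the integration is plain Euler.

```python
import numpy as np

def f_walk(x):
    # Hypothetical "walking" primitive: a Hopf-like limit-cycle oscillator
    # (an illustrative stand-in for a learnt dynamic movement primitive).
    r2 = x[0] ** 2 + x[1] ** 2
    return np.array([x[0] * (1 - r2) - x[1], x[1] * (1 - r2) + x[0]])

def f_run(x):
    # Hypothetical "running" primitive: the same oscillator, time-rescaled
    # to a higher frequency.
    r2 = x[0] ** 2 + x[1] ** 2
    return np.array([x[0] * (1 - r2) - 3 * x[1], x[1] * (1 - r2) + 3 * x[0]])

def simulate(switch, x0, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = sum_k s_k(t) f_k(x), where the continuous
    switching weights s(t) select (or blend) the active primitive."""
    x = np.array(x0, dtype=float)
    traj = []
    for n in range(steps):
        s = switch(n * dt)                      # e.g. (1, 0) -> pure walking
        x = x + dt * (s[0] * f_walk(x) + s[1] * f_run(x))
        traj.append(x.copy())
    return np.array(traj)
```

A hard switch such as `switch = lambda t: (1.0, 0.0) if t < 5 else (0.0, 1.0)` produces a trajectory that stays on the shared limit cycle while its frequency changes at the switch time.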
More recently, other continuous-time switched dynamic models have been proposed, for example, nonparametric models @cite_11 @cite_16 which extend Bayesian online change point detection @cite_14 @cite_18 using Gaussian processes. Although online inference methods for these models have been described, their aim is not to identify a known dynamic process, but rather to make accurate predictions of observations across change points at which the underlying dynamic process changes. Similarly, switched latent force models @cite_12 are nonparametric models in which the positions of change points and the underlying dynamic processes are modelled using Gaussian processes and DMPs. The proposed inference method is offline, i.e., it uses all observed data, and again the aim is not to discriminate between different, previously learnt models. Rather, this approach could be used to learn parametric models based on the obtained change-point posterior.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "1991485860", "1483365869", "1963375715", "2162942763", "2136816045" ], "abstract": [ "We propose an on-line algorithm for exact filtering of multiple changepoint problems. This algorithm enables simulation from the true joint posterior distribution of the number and position of the changepoints for a class of changepoint models. The computational cost of this exact algorithm is quadratic in the number of observations. We further show how resampling ideas from particle filters can be used to reduce the computational cost to linear in the number of observations, at the expense of introducing small errors; and propose two new, optimum resampling algorithms for this problem. One, a version of rejection control, allows the particle filter to automatically choose the number of particles required at each time-step. The new resampling algorithms substantially out-perform standard resampling algorithms on examples we consider; and we demonstrate how the resulting particle filter is practicable for segmentation of human GC content.", "Changepoints are abrupt variations in the generative parameters of a data sequence. Online detection of changepoints is useful in modelling and prediction of time series in application areas such as finance, biometrics, and robotics. While frequentist methods have yielded online filtering and prediction techniques, most Bayesian papers have focused on the retrospective segmentation problem. Here we examine the case where the model parameters before and after the changepoint are independent and we derive an online algorithm for exact inference of the most recent changepoint. We compute the probability distribution of the length of the current run,'' or time since the last changepoint, using a simple message-passing algorithm. Our implementation is highly modular so that the algorithm may be applied to a variety of types of data. 
We illustrate this modularity by demonstrating the algorithm on three different real-world data sets.", "We combine Bayesian online change point detection with Gaussian processes to create a nonparametric time series model which can handle change points. The model can be used to locate change points in an online manner; and, unlike other Bayesian online change point detection algorithms, is applicable when temporal correlations in a regime are expected. We show three variations on how to apply Gaussian processes in the change point context, each with their own advantages. We present methods to reduce the computational burden of these models and demonstrate it on several real world data sets.", "Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force we introduce an extension of the basic latent force model, that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems including models for human motion capture data and systems biology.", "We introduce a new sequential algorithm for making robust predictions in the presence of changepoints. 
Unlike previous approaches, which focus on the problem of detecting and locating changepoints, our algorithm focuses on the problem of making predictions even when such changes might be present. We introduce nonstationary covariance functions to be used in Gaussian process prediction that model such changes, and then proceed to demonstrate how to effectively manage the hyperparameters associated with those covariance functions. We further introduce covariance functions to be used in situations where our observation model undergoes changes, as is the case for sensor faults. By using Bayesian quadrature, we can integrate out the hyperparameters, allowing us to calculate the full marginal predictive distribution. Furthermore, if desired, the posterior distribution over putative changepoint locations can be calculated as a natural byproduct of our prediction algorithm." ] }
1211.0947
1620174887
How to recognise whether an observed person walks or runs? We consider a dynamic environment where observations (e.g. the posture of a person) are caused by different dynamic processes (walking or running) which are active one at a time and which may transition from one to another at any time. For this setup, switching dynamic models have been suggested previously, mostly, for linear and nonlinear dynamics in discrete time. Motivated by basic principles of computations in the brain (dynamic, internal models) we suggest a model for switching nonlinear differential equations. The switching process in the model is implemented by a Hopfield network and we use parametric dynamic movement primitives to represent arbitrary rhythmic motions. The model generates observed dynamics by linearly interpolating the primitives weighted by the switching variables and it is constructed such that standard filtering algorithms can be applied. In two experiments with synthetic planar motion and a human motion capture data set we show that inference with the unscented Kalman filter can successfully discriminate several dynamic processes online.
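Bayesian online change point detection, which the nonparametric models discussed above extend, maintains a posterior over the current "run length" (time since the last change point). A compact sketch of the Adams & MacKay-style recursion is given below; it assumes a constant hazard rate and a Gaussian observation model with known variance and a conjugate prior on the mean, with all hyperparameter values invented for illustration.

```python
import numpy as np

def bocpd(data, hazard=1 / 50, mu0=0.0, kappa0=1.0, obs_var=1.0):
    """Filter the run-length posterior p(r_t | x_1:t) in the style of
    Adams & MacKay: constant hazard rate, Gaussian observations with
    known variance, conjugate Gaussian prior on the mean."""
    T = len(data)
    R = np.zeros((T + 1, T + 1))   # R[t, r] = p(run length r after t obs)
    R[0, 0] = 1.0
    mu = np.array([mu0])           # posterior mean, one entry per run length
    kappa = np.array([kappa0])     # pseudo-observation counts
    for t, x in enumerate(data):
        # Predictive density of x under each run-length hypothesis.
        pred_var = obs_var * (1.0 + 1.0 / kappa)
        pred = np.exp(-0.5 * (x - mu) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        growth = R[t, :t + 1] * pred * (1 - hazard)   # run continues
        cp = (R[t, :t + 1] * pred * hazard).sum()     # change point: r resets to 0
        R[t + 1, 0] = cp
        R[t + 1, 1:t + 2] = growth
        R[t + 1] /= R[t + 1].sum()
        # Conjugate update of the sufficient statistics, shifted by one run length.
        mu = np.concatenate([[mu0], (kappa * mu + x) / (kappa + 1)])
        kappa = np.concatenate([[kappa0], kappa + 1])
    return R
```

After a pronounced mean shift in the data, the maximum a posteriori run length drops back towards zero, which is exactly the reset behaviour the change-point models above exploit.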
In @cite_22 , the authors derive a smoothing algorithm based on variational inference for an Ornstein-Uhlenbeck (OU) process which is switched by a random telegraph process. Thus, this model can only switch between two constant drifts. Similarly, @cite_0 @cite_20 propose Markov chain Monte Carlo inference for a switched OU process where the number of different parameter sets, as well as the parameter values, are automatically determined from the data. However, these parameters are limited to the constant drift and diffusion parameters of the OU process, which cannot implement generic nonlinear dynamics. In contrast to these models, we approximate a change point process using our continuous-valued switching dynamics, which allows us to maintain a coherent continuous framework and apply standard filtering algorithms.
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_20" ], "mid": [ "2098657931", "2109462896", "2221871399" ], "abstract": [ "We consider the problem of Bayesian inference for continuous-time multi-stable stochastic systems which can change both their diffusion and drift parameters at discrete times. We propose exact inference and sampling methodologies for two specific cases where the discontinuous dynamics is given by a Poisson process and a two-state Markovian switch. We test the methodology on simulated data, and apply it to two real data sets in finance and systems biology. Our experimental results show that the approach leads to valid inferences and non-trivial insights.", "We present a novel approach to inference in conditionally Gaussian continuous time stochastic processes, where the latent process is a Markovian jump process. We first consider the case of jump-diffusion processes, where the drift of a linear stochastic differential equation can jump at arbitrary time points. We derive partial differential equations for exact inference and present a very efficient mean field approximation. By introducing a novel lower bound on the free energy, we then generalise our approach to Gaussian processes with arbitrary covariance, such as the non-Markovian RBF covariance. We present results on both simulated and real data, showing that the approach is very accurate in capturing latent dynamics and can be useful in a number of real data modelling tasks.", "We study a model of a stochastic process with unobserved parameters which suddenly change at random times. The possible parameter values are assumed to be from a finite but unknown set. Using a Chinese restaurant process prior over parameters we develop an efficient MCMC procedure for Bayesian inference. We demonstrate the significance of our approach with an application to systems biology data." ] }
1211.0176
2128696195
In this paper we introduce and experimentally compare alternative algorithms to join uncertain relations. Different algorithms are based on specific principles, e.g., sorting, indexing, or building intermediate relational tables to apply traditional approaches. As a consequence their performance is affected by different features of the input data, and each algorithm is shown to be more efficient than the others in specific cases. In this way statistics explicitly representing the amount and kind of uncertainty in the input uncertain relations can be used to choose the most efficient algorithm.
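A switched OU process of the kind discussed above — a linear drift towards a mean that jumps via a random telegraph process — can be simulated in a few lines with the Euler-Maruyama scheme. The sketch below is illustrative only; parameter names and values are our own, not those of the cited papers.

```python
import numpy as np

def switched_ou(T=2000, dt=0.01, theta=2.0, mus=(-1.0, 1.0),
                sigma=0.3, switch_rate=0.1, seed=0):
    """Euler-Maruyama simulation of dx = theta * (mu_s - x) dt + sigma dW,
    where the mean mu_s jumps between two values driven by a random
    telegraph process with the given switching rate."""
    rng = np.random.default_rng(seed)
    x, s = 0.0, 0
    xs, ss = [], []
    for _ in range(T):
        if rng.random() < switch_rate * dt:     # telegraph switch event
            s = 1 - s
        x += theta * (mus[s] - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        xs.append(x)
        ss.append(s)
    return np.array(xs), np.array(ss)
```

Inference for such a model must recover the hidden telegraph state `s` from the trajectory `x` alone, which is exactly the setting the variational and MCMC approaches above address.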
Uncertain relational models have been studied since the early 1990s; the first works on this topic mainly focused on theoretical aspects of probabilistic and possibilistic data management @cite_25 @cite_12 @cite_19 @cite_0 @cite_2 @cite_11 @cite_14 . More recently, there have been successful initiatives to build working systems for the efficient execution of queries over uncertain data @cite_5 @cite_27 @cite_18 @cite_23 @cite_13 @cite_16 @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1", "@cite_0", "@cite_19", "@cite_27", "@cite_23", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "1486776102", "1992609556", "", "2085988990", "2112173025", "1512485814", "2088422262", "1491912291", "2140757415", "2044494469", "2129035130", "2125791539", "1500821679", "2024400846" ], "abstract": [ "Trio is a new database system that manages not only data, but also the accuracy and lineage of the data. Approximate (uncertain, probabilistic, incomplete, fuzzy, and imprecise!) databases have been proposed in the past, and the lineage problem also has been studied. The goals of the Trio project are to distill previous work into a simple and usable model, design a query language as an understandable extension to SQL, and most importantly build a working system---a system that augments conventional data management with both accuracy and lineage as an integral part of the data. This paper provides numerous motivating applications for Trio and lays out preliminary plans for the data model, query language, and prototype system.", "We present a probabilistic relational algebra (PRA) which is a generalization of standard relational algebra. In PRA, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Based on intensional semantics, the tuple weights of the result of a PRA expression always conform to the underlying probabilistic model. We also show for which expressions extensional semantics yields the same results. Furthermore, we discuss complexity issues and indicate possibilities for optimization. With regard to databases, the approach allows for representing imprecise attribute values, whereas for information retrieval, probabilistic document indexing and probabilistic search term weighting can be modeled. 
We introduce the concept of vague predicates which yield probabilistic weights instead of Boolean values, thus allowing for queries with vague selection conditions. With these features, PRA implements uncertainty and vagueness in combination with the relational model.", "", "Although the relational model for databases provides a great range of advantages over other data models, it lacks a comprehensive way to handle incomplete and uncertain data. Uncertainty in data values, however, is pervasive in all real-world environments and has received much attention in the literature. Several methods have been proposed for incorporating uncertain data into relational databases. However, the current approaches have many shortcomings and have not established an acceptable extension of the relational model. In this paper, we propose a consistent extension of the relational model. We present a revised relational structure and extend the relational algebra. The extended algebra is shown to be closed, a consistent extension of the conventional relational algebra, and reducible to the latter.", "An algebra is presented for a simple probabilistic data model that may be regarded as an extension of the standard relational model. The probabilistic algebra is developed in such a way that (restricted to spl alpha -acyclic database schemes) the relational algebra is a homomorphic image of it. Strictly probabilistic results are emphasized. Variations on the basic probabilistic data model are discussed. The algebra is used to explicate a commonly used statistical smoothing procedure and is shown to be potentially very useful for decision support with uncertain information. >", "In many systems, sensors are used to acquire information from external environments such as temperature, pressure and locations. 
Due to continuous changes in these values, and limited resources (e.g., network bandwidth and battery power), it is often infeasible for the database to store the exact values at all times. Queries that uses these old values can produce invalid results. In order to manage the uncertainty between the actual sensor value and the database value, we propose a system called U-DBMS. U-DBMS extends the database system with uncertainty management functionalities. In particular, each data value is represented as an interval and a probability distribution function, and it can be processed with probabilistic query operators to produce imprecise (but correct) answers. This demonstration presents a PostgreSQL-based system that handles uncertainty and probabilistic queries for constantly-evolving data.", "In the Trio project at Stanford, we are building a new kind of database management system: one in which data, uncertainty of the data, and data lineage are all first-class citizens. Trio is based on an extended relational model called ULDBs, and it supports a SQL-based query language called TriQL. Trio was motivated by a number of applications including scientific data management, data cleaning and integration, information extraction systems, and others. We have completed an initial working prototype of the Trio system. We will demonstrate our prototype by illustrating through two applications how uncertainty and lineage are represented in ULDBs, how TriQL operates over ULDBs both from the user and the system perspective, and in general how data, uncertainty, and lineage can work together to support interesting new functionality.", "The information to be stored in databases is not always precise and certain, and, occasionally, some information might be missing altogether.1 When the available information is imperfect, it is often desirable to try to represent it in the database nonetheless, so that it can be used to answer queries of interest as much as possible. 
A related issue is the handling of imperfect or flexible queries. For example, a natural query language may use a word or a phrase whose meaning is vague or even entirely unclear. As another example, a query may reflect a user’s uncertainty about what he is looking for. In addition, one may want to use vague predicates in a query to express pbl]References among the admissible answers.", "MystiQ is a system that uses probabilistic query semantics [3] to find answers in large numbers of data sources of less than perfect quality. There are many reasons why the data originating from many different sources may be of poor quality, and therefore difficult to query: the same data item may have different representation in different sources; the schema alignments needed by a query system are imperfect and noisy; different sources may contain contradictory information, and, in particular, their combined data may violate some global integrity constraints; fuzzy matches between objects from different sources may return false positives or negatives. Even in such environment, users some-times want to ask complex, structurally rich queries, using query constructs typically found in SQL queries: joins, subqueries, existential universal quantifiers, aggregate and group-by queries: for example scientists may use such queries to query multiple scientific data sources, or a law enforcement agency may use it in order to find rare associations from multiple data sources. If standard query semantics were applied to such queries, all but the most trivial queries will return an empty answer.", "Modern enterprise applications are forced to deal with unreliable, inconsistent and imprecise information. 
Probabilistic databases can model such data naturally, but SQL query evaluation on probabilistic databases is difficult: previous approaches have either restricted the SQL queries, or computed approximate probabilities, or did not scale, and it was shown recently that precise query evaluation is theoretically hard. In this paper we describe a novel approach, which computes and ranks efficiently the top-k answers to a SQL query on a probabilistic database. The restriction to top-k answers is natural, since imprecisions in the data often lead to a large number of answers of low quality, and users are interested only in the answers with the highest probabilities. The idea in our algorithm is to run in parallel several Monte-Carlo simulations, one for each candidate answer, and approximate each probability only to the extent needed to compute correctly the top-k answers.", "This paper explores an inherent tension in modeling and querying uncertain data: simple, intuitive representations of uncertain data capture many application requirements, but these representations are generally incomplete―standard operations over the data may result in unrepresentable types of uncertainty. Complete models are theoretically attractive, but they can be nonintuitive and more complex than necessary for many applications. To address this tension, we propose a two-layer approach to managing uncertain data: an underlying logical model that is complete, and one or more working models that are easier to understand, visualize, and query, but may lose some information. We explore the space of incomplete working models, place several of them in a strict hierarchy based on expressive power, and study their closure properties. 
We describe how the two-layer approach is being used in our prototype DBMS for uncertain data, and we identify a number of interesting open problems to fully realize the approach.", "It is often desirable to represent in a database, entities whose properties cannot be deterministically classified. The authors develop a data model that includes probabilities associated with the values of the attributes. The notion of missing probabilities is introduced for partially specified probability distributions. This model offers a richer descriptive language allowing the database to more accurately reflect the uncertain real world. Probabilistic analogs to the basic relational operators are defined and their correctness is studied. A set of operators that have no counterpart in conventional relational systems is presented. >", "We propose an extended relational database model which can model both uncertainty and imprecision in data. This model is based on Dempster-Shafer theory which has become popular in AI as an uncertainty reasoning tool. The definitions of Be1 and Pls functions in Dempster-Shafer theory are extended to compute the beliefs of various comparisons (e.g., equality, less than, etc.) between two basic probability assignments. Based on these new definitions of Be1 and Pls functions and the Boolean combinations of Be1 and Pls values for two events, five relational operators such as Select, Cartesian Product, Join, Projection Intersect, and Union are defined.", "Probability theory is mathematically the best understood paradigm for modeling and manipulating uncertain information. Probabilities of complex events can be computed from those of basic events on which they depend, using any of a number of strategies. Which strategy is appropriate depends very much on the known interdependencies among the events involved. Previous work on probabilistic databases has assumed a fixed and restrictive combination strategy (e.g., assuming all events are pairwise independent). 
In this article, we characterize, using postulates, whole classes of strategies for conjunction, disjunction, and negation, meaningful from the viewpoint of probability theory. (1) We propose a probabilistic relational data model and a generic probabilistic relational algebra that neatly captures various strategies satisfying the postulates, within a single unified framework. (2) We show that as long as the chosen strategies can be computed in polynomial time, queries in the positive fragment of the probabilistic relational algebra have essentially the same data complexity as classical relational algebra. (3) We establish various containments and equivalences between algebraic expressions, similar in spirit to those in classical algebra. (4) We develop algorithms for maintaining materialized probabilistic views. (5) Based on these ideas, we have developed a prototype probabilistic database system called ProbView on top of Dbase V.0. We validate our complexity results with experiments and show that rewriting certain types of queries to other equivalent forms often yields substantial savings." ] }
1211.0176
2128696195
In this paper we introduce and experimentally compare alternative algorithms to join uncertain relations. Different algorithms are based on specific principles, e.g., sorting, indexing, or building intermediate relational tables to apply traditional approaches. As a consequence their performance is affected by different features of the input data, and each algorithm is shown to be more efficient than the others in specific cases. In this way statistics explicitly representing the amount and kind of uncertainty in the input uncertain relations can be used to choose the most efficient algorithm.
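For concreteness, the textbook semantics underlying many of the systems above — joining probabilistic relations under the common tuple-independence assumption, where each tuple carries a probability and a joined tuple's probability is the product of its inputs' — can be sketched as a naive nested-loop join. This is the illustrative baseline semantics only, not any specific system's engine.

```python
def prob_join(R, S, key):
    """Nested-loop equi-join of two probabilistic relations.
    Each tuple is a pair (attrs_dict, prob); under tuple independence
    the probability of a joined tuple is the product of its inputs'."""
    out = []
    for (r, pr) in R:
        for (s, ps) in S:
            if r[key] == s[key]:
                # Merge attributes, keeping the left tuple's values on overlap.
                merged = {**r, **{k: v for k, v in s.items() if k not in r}}
                out.append((merged, pr * ps))
    return out
```

The alternative algorithms compared in the paper (sort-based, index-based, and rewriting into traditional relational tables) compute this same result while avoiding the quadratic scan over all tuple pairs.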
In addition to specific works on uncertain data models and systems, this article builds on the large, by now consolidated literature on relational join algorithms, which can be found in any textbook on relational database management system architectures, e.g., @cite_24 . With regard to traditional join algorithms, the few concepts necessary to understand the remainder of the paper have been reported and exemplified in the introduction.
{ "cite_N": [ "@cite_24" ], "mid": [ "1569403765" ], "abstract": [ "From the Publisher: This introduction to database systems offers a readable comprehensive approach with engaging, real-world examples—users will learn how to successfully plan a database application before building it. The first half of the book provides in-depth coverage of databases from the point of view of the database designer, user, and application programmer, while the second half of the book provides in-depth coverage of databases from the point of view of the DBMS implementor. The first half of the book focuses on database design, database use, and implementation of database applications and database management systems—it covers the latest database standards SQL:1999, SQL PSM, SQL CLI, JDBC, ODL, and XML, with broader coverage of SQL than most other books. The second half of the book focuses on storage structures, query processing, and transaction management—it covers the main techniques in these areas with broader coverage of query optimization than most other books, along with advanced topics including multidimensional and bitmap indexes, distributed transactions, and information integration techniques. A professional reference for database designers, users, and application programmers." ] }
1211.0176
2128696195
In this paper we introduce and experimentally compare alternative algorithms to join uncertain relations. Different algorithms are based on specific principles, e.g., sorting, indexing, or building intermediate relational tables to apply traditional approaches. As a consequence their performance is affected by different features of the input data, and each algorithm is shown to be more efficient than the others in specific cases. In this way statistics explicitly representing the amount and kind of uncertainty in the input uncertain relations can be used to choose the most efficient algorithm.
In contrast, the problem of directly optimizing queries over uncertain data is more recent, and to the best of our knowledge the first work suggesting the use of specific statistics on the uncertainty of the input relations is @cite_22 . Other works have also dealt with query optimization on probabilistic data without focusing on join algorithms @cite_20 @cite_16 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_22", "@cite_20" ], "mid": [ "2143485006", "2044494469", "2171262729", "" ], "abstract": [ "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.", "Modern enterprise applications are forced to deal with unreliable, inconsistent and imprecise information. Probabilistic databases can model such data naturally, but SQL query evaluation on probabilistic databases is difficult: previous approaches have either restricted the SQL queries, or computed approximate probabilities, or did not scale, and it was shown recently that precise query evaluation is theoretically hard. In this paper we describe a novel approach, which computes and ranks efficiently the top-k answers to a SQL query on a probabilistic database. The restriction to top-k answers is natural, since imprecisions in the data often lead to a large number of answers of low quality, and users are interested only in the answers with the highest probabilities. 
The idea in our algorithm is to run in parallel several Monte-Carlo simulations, one for each candidate answer, and approximate each probability only to the extent needed to compute correctly the top-k answers.", "Data integration systems offer a uniform interface to a set of data sources. Despite recent progress, setting up and maintaining a data integration application still requires significant upfront effort of creating a mediated schema and semantic mappings from the data sources to the mediated schema. Many application contexts involving multiple data sources (e.g., the web, personal information management, enterprise intranets) do not require full integration in order to provide useful services, motivating a pay-as-you-go approach to integration. With that approach, a system starts with very few (or inaccurate) semantic mappings and these mappings are improved over time as deemed necessary. This paper describes the first completely self-configuring data integration system. The goal of our work is to investigate how advanced of a starting point we can provide a pay-as-you-go system. Our system is based on the new concept of a probabilistic mediated schema that is automatically created from the data sources. We automatically create probabilistic schema mappings between the sources and the mediated schema. We describe experiments in multiple domains, including 50-800 data sources, and show that our system is able to produce high-quality answers with no human intervention.", "" ] }
1211.0176
2128696195
In this paper we introduce and experimentally compare alternative algorithms to join uncertain relations. Different algorithms are based on specific principles, e.g., sorting, indexing, or building intermediate relational tables to apply traditional approaches. As a consequence their performance is affected by different features of the input data, and each algorithm is shown to be more efficient than the others in specific cases. In this way statistics explicitly representing the amount and kind of uncertainty in the input uncertain relations can be used to choose the most efficient algorithm.
Probabilistic joins are useful when objects may match, whereas in our work we compute exact joins and the additional workload depends on our ignorance of the real values we are manipulating. Such approximate probabilistic joins have been studied in @cite_3 , dealing with joins between similar objects, and in @cite_7 , focusing on nearest-neighbor joins.
{ "cite_N": [ "@cite_7", "@cite_3" ], "mid": [ "1491547607", "1561514023" ], "abstract": [ "Nearest-neighbor queries are an important query type for commonly used feature databases. In many different application areas, e.g. sensor databases, location based services or face recognition systems, distances between objects have to be computed based on vague and uncertain data. A successful approach is to express the distance between two uncertain objects by probability density functions which assign a probability value to each possible distance value. By integrating the complete probabilistic distance function as a whole directly into the query algorithm, the full information provided by these functions is exploited. The result of such a probabilistic query algorithm consists of tuples containing the result object and a probability value indicating the likelihood that the object satisfies the query predicate. In this paper we introduce an efficient strategy for processing probabilistic nearest-neighbor queries, as the computation of these probability values is very expensive. In a detailed experimental evaluation, we demonstrate the benefits of our probabilistic query approach. The experiments show that we can achieve high quality query results with rather low computational cost.", "An important database primitive for commonly used feature databases is the similarity join. It combines two datasets based on some similarity predicate into one set such that the new set contains pairs of objects of the two original sets. In many different application areas, e.g. sensor databases, location based services or face recognition systems, distances between objects have to be computed based on vague and uncertain data. In this paper, we propose to express the similarity between two uncertain objects by probability density functions which assign a probability value to each possible distance value. 
By integrating these probabilistic distance functions directly into the join algorithms the full information provided by these functions is exploited. The resulting probabilistic similarity join assigns to each object pair a probability value indicating the likelihood that the object pair belongs to the result set. As the computation of these probability values is very expensive, we introduce an efficient join processing strategy exemplarily for the distance-range join. In a detailed experimental evaluation, we demonstrate the benefits of our probabilistic similarity join. The experiments show that we can achieve high quality join results with rather low computational cost." ] }
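The probabilistic distance-range join described in the record above can be sketched by discretizing each uncertain object into a finite set of equally likely samples (the cited works use full probability density functions; the function name, the 1-D values, and the uniform weighting here are my own simplifying assumptions):

```python
import itertools

def prob_distance_range_join(objs_a, objs_b, tau):
    """For every pair of uncertain objects, estimate the probability that
    their distance is at most tau by averaging over equally likely samples."""
    result = {}
    for (ia, sa), (ib, sb) in itertools.product(objs_a.items(), objs_b.items()):
        hits = sum(1 for x, y in itertools.product(sa, sb) if abs(x - y) <= tau)
        result[(ia, ib)] = hits / (len(sa) * len(sb))
    return result
```

Each output pair carries a membership probability rather than a boolean, which is exactly what distinguishes these joins from the exact joins discussed in the record.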
1211.0176
2128696195
In this paper we introduce and experimentally compare alternative algorithms to join uncertain relations. Different algorithms are based on specific principles, e.g., sorting, indexing, or building intermediate relational tables to apply traditional approaches. As a consequence their performance is affected by different features of the input data, and each algorithm is shown to be more efficient than the others in specific cases. In this way statistics explicitly representing the amount and kind of uncertainty in the input uncertain relations can be used to choose the most efficient algorithm.
Other works have studied the execution of probabilistic joins on streaming data @cite_6 @cite_15 , focusing on the specific constraints of this context. Another setting in which uncertain data frequently arise is that of spatial databases, where the shape of the objects may not be known with certainty. @cite_4 studied how to join this kind of data; there, the underlying (spatial) uncertainty model differs from the one adopted in our work.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_6" ], "mid": [ "2129553531", "2116440837", "2005044405" ], "abstract": [ "Similarity join processing in the streaming environment has many practical applications such as sensor networks, object tracking and monitoring, and so on. Previous works usually assume that stream processing is conducted over precise data. In this paper, we study an important problem of similarity join processing on stream data that inherently contain uncertainty (or called uncertain data streams), where the incoming data at each time stamp are uncertain and imprecise. Specifically, we formalize this problem as join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data. To tackle the challenges with respect to efficiency and effectiveness such as limited memory and small response time, we propose effective pruning methods on both object and sample levels to filter out false alarms. We integrate the proposed pruning methods into an efficient query procedure that can incrementally maintain the USJ answers. Most importantly, we further design a novel strategy, namely, adaptive superset prejoin (ASP), to maintain a superset of USJ candidate pairs. ASP is in light of our proposed formal cost model such that the average USJ processing cost is minimized. We have conducted extensive experiments to demonstrate the efficiency and effectiveness of our proposed approaches.", "Probabilistic data have recently become popular in applications such as scientific and geospatial databases. For images and other spatial datasets, probabilistic values can capture the uncertainty in extent and class of the objects in the images. Relating one such dataset to another by spatial joins is an important operation for data management systems. 
We consider probabilistic spatial join (PSJ) queries, which rank the results according to a score that incorporates both the uncertainties associated with the objects and the distances between them. We present algorithms for two kinds of PSJ queries: Threshold PSJ queries, which return all pairs that score above a given threshold, and top-k PSJ queries, which return the k top-scoring pairs. For threshold PSJ queries, we propose a plane sweep algorithm that, because it exploits the special structure of the problem, runs in O(n (log n + k)) time, where n is the number of points and k is the number of results. We extend the algorithms to 2-D data and to top-k PSJ queries. To further speed up top-k PSJ queries, we develop a scheduling technique that estimates the scores at the level of blocks, then hands the blocks to the plane sweep algorithm. By finding high-scoring pairs early, the scheduling allows a large portion of the datasets to be pruned. Experiments demonstrate speed-ups of two orders of magnitude.", "Join processing in the streaming environment has many practical applications such as data cleaning and outlier detection. Due to the inherent uncertainty in the real-world data, it has become an increasingly important problem to consider the join processing on uncertain data streams, where the incoming data at each timestamp are uncertain and imprecise. Different from the static databases, processing uncertain data streams has its own requirements such as the limited memory, small response time, and so on. To tackle the challenges with respect to efficiency and effectiveness, in this paper, we formalize the problem of join on uncertain data streams (USJ), which can guarantee the accuracy of USJ answers over uncertain data, and propose effective pruning methods to filter out false alarms. We integrate the pruning methods into an efficient query procedure for incrementally maintaining USJ answers. 
Extensive experiments have been conducted to demonstrate the efficiency and effectiveness of our approaches." ] }
1211.0589
1665264418
Spectral embedding of graphs uses the top k non-trivial eigenvectors of the random walk matrix to embed the graph into R^k. The primary use of this embedding has been for practical spectral clustering algorithms [SM00,NJW02]. Recently, spectral embedding was studied from a theoretical perspective to prove higher order variants of Cheeger's inequality [LOT12,LRTV12]. We use spectral embedding to provide a unifying framework for bounding all the eigenvalues of graphs. For example, we show that for any finite graph with n vertices and all k >= 2, the k-th largest eigenvalue is at most 1-Omega(k^3/n^3), which extends the only other such result known, which is for k=2 only and is due to [LO81]. This upper bound improves to 1-Omega(k^2/n^2) if the graph is regular. We generalize these results, and we provide sharp bounds on the spectral measure of various classes of graphs, including vertex-transitive graphs and infinite graphs, in terms of specific graph parameters like the volume growth. As a consequence, using the entire spectrum, we provide (improved) upper bounds on the return probabilities and mixing time of random walks with considerably shorter and more direct proofs. Our work introduces spectral embedding as a new tool in analyzing reversible Markov chains. Furthermore, building on [Lyo05], we design a local algorithm to approximate the number of spanning trees of massive graphs.
There have been many studies bounding the eigenvalues of the (normalized) Laplacian from above (equivalently, bounding the eigenvalues of the (normalized) adjacency matrix from below). For example, @cite_13 show that for @math -vertex, bounded-degree planar graphs, the @math th smallest eigenvalue satisfies @math
{ "cite_N": [ "@cite_13" ], "mid": [ "2963487913" ], "abstract": [ "We present a method for proving upper bounds on the eigenvalues of the graph Laplacian. A main step involves choosing an appropriate “Riemannian” metric to uniformize the geometry of the graph. In many interesting cases, the existence of such a metric is shown by examining the combinatorics of special types of flows. This involves proving new inequalities on the crossing number of graphs." ] }
1211.0589
1665264418
Spectral embedding of graphs uses the top k non-trivial eigenvectors of the random walk matrix to embed the graph into R^k. The primary use of this embedding has been for practical spectral clustering algorithms [SM00,NJW02]. Recently, spectral embedding was studied from a theoretical perspective to prove higher order variants of Cheeger's inequality [LOT12,LRTV12]. We use spectral embedding to provide a unifying framework for bounding all the eigenvalues of graphs. For example, we show that for any finite graph with n vertices and all k >= 2, the k-th largest eigenvalue is at most 1-Omega(k^3/n^3), which extends the only other such result known, which is for k=2 only and is due to [LO81]. This upper bound improves to 1-Omega(k^2/n^2) if the graph is regular. We generalize these results, and we provide sharp bounds on the spectral measure of various classes of graphs, including vertex-transitive graphs and infinite graphs, in terms of specific graph parameters like the volume growth. As a consequence, using the entire spectrum, we provide (improved) upper bounds on the return probabilities and mixing time of random walks with considerably shorter and more direct proofs. Our work introduces spectral embedding as a new tool in analyzing reversible Markov chains. Furthermore, building on [Lyo05], we design a local algorithm to approximate the number of spanning trees of massive graphs.
However, to the best of our knowledge, universal lower bounds were known only for the second smallest eigenvalue of the normalized Laplacian. Namely, Landau and Odlyzko @cite_15 showed that the second eigenvalue of every simple connected graph of size @math is at least @math .
{ "cite_N": [ "@cite_15" ], "mid": [ "1983122377" ], "abstract": [ "Abstract We consider the class of stochastic matrices M generated in the following way from graphs: if G is an undirected connected graph on n vertices with adjacency matrix A , we form M from A by dividing the entries in each row of A by their row sum. Being stochastic, M has the eigenvalue λ=1 and possibly also an eigenvalue λ=-1. We prove that the remaining eigenvalues of M lie in the disk |λ| ⩽ 1 − n^{-3}, and show by examples that the order of magnitude of this estimate is best possible. In these examples, G has a bar-bell structure, in which n/3 of the vertices are arranged along a line, with n/3 vertices fully interconnected at each end. We also obtain better bounds when either the diameter of G or the maximal degree of a vertex is restricted." ] }
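The Landau-Odlyzko bound on the second eigenvalue of the walk matrix can be checked numerically on small graphs. The sketch below is my own illustration (not the cited proof): it computes the spectrum of D^{-1}A via the similar symmetric matrix D^{-1/2} A D^{-1/2} and verifies the bound on a 5-cycle:

```python
import numpy as np

def walk_eigenvalues(A):
    """Eigenvalues of the random walk matrix D^{-1} A, obtained from the
    similar symmetric matrix D^{-1/2} A D^{-1/2}, sorted in decreasing order."""
    d = A.sum(axis=1)
    s = 1.0 / np.sqrt(d)
    return np.sort(np.linalg.eigvalsh(s[:, None] * A * s[None, :]))[::-1]

# 5-cycle: a simple connected graph on n = 5 vertices
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

lam = walk_eigenvalues(A)
# lam[0] is the trivial eigenvalue 1; lam[1] must obey the 1 - n^{-3} bound
```

The bar-bell graphs mentioned in the abstract show that, up to constants, no better universal bound is possible.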
1211.0020
2502236103
Presburger arithmetic is the first-order theory of the natural numbers with addition (but no multiplication). We characterize sets that can be defined by a Presburger formula as exactly the sets whose characteristic functions can be represented by rational generating functions; a geometric characterization of such sets is also given. In addition, if p=(p_1,...,p_n) are a subset of the free variables in a Presburger formula, we can define a counting function g(p) to be the number of solutions to the formula, for a given p. We show that every counting function obtained in this way may be represented as, equivalently, either a piecewise quasi-polynomial or a rational generating function. Finally, we translate known computational complexity results into this setting and discuss open directions.
The importance of understanding Presburger arithmetic is highlighted by the fact that many problems in computer science and mathematics can be phrased in this language: for example, integer programming @cite_9 @cite_42 , geometry of numbers @cite_41 @cite_22 , Gröbner bases and algebraic integer programming @cite_24 @cite_36 , neighborhood complexes and test sets @cite_15 @cite_12 , the Frobenius problem @cite_17 , Ehrhart theory @cite_43 @cite_34 , monomial ideals @cite_47 , and toric varieties @cite_30 . Several of the above references analyze the computational complexity of their specific problem. In most of the above references, the connection to Presburger arithmetic is only implicit.
{ "cite_N": [ "@cite_30", "@cite_47", "@cite_22", "@cite_41", "@cite_36", "@cite_9", "@cite_42", "@cite_24", "@cite_43", "@cite_15", "@cite_34", "@cite_12", "@cite_17" ], "mid": [ "2566917795", "1537258143", "", "1557027117", "", "2114616381", "", "", "2949688382", "1974880185", "", "", "2332901121" ], "abstract": [ "", "Monomial Ideals.- Squarefree monomial ideals.- Borel-fixed monomial ideals.- Three-dimensional staircases.- Cellular resolutions.- Alexander duality.- Generic monomial ideals.- Toric Algebra.- Semigroup rings.- Multigraded polynomial rings.- Syzygies of lattice ideals.- Toric varieties.- Irreducible and injective resolutions.- Ehrhart polynomials.- Local cohomology.- Determinants.- Plucker coordinates.- Matrix Schubert varieties.- Antidiagonal initial ideals.- Minors in matrix products.- Hilbert schemes of points.", "", "Notation Prologue Chapter I. Lattices 1. Introduction 2. Bases and sublattices 3. Lattices under linear transformation 4. Forms and lattices 5. The polar lattice Chapter II. Reduction 1. Introduction 2. The basic process 3. Definite quadratic forms 4. Indefinite quadratic forms 5. Binary cubic forms 6. Other forms Chapter III. Theorems of Blichfeldt and Minkowski 1. Introduction 2. Blichfeldt's and Minkowski's theorems 3. Generalisations to non-negative functions 4. Characterisation of lattices 5. Lattice constants 6. A method of Mordell 7. Representation of integers by quadratic forms Chapter IV. Distance functions 1. Introduction 2. General distance-functions 3. Convex sets 4. Distance functions and lattices Chapter V. Mahler's compactness theorem 1. Introduction 2. Linear transformations 3. Convergence of lattices 4. Compactness for lattices 5. Critical lattices 6. Bounded star-bodies 7. Reducibility 8. Convex bodies 9. Spheres 10. Applications to diophantine approximation Chapter VI. The theorem of Minkowski-Hlawka 1. Introduction 2. Sublattices of prime index 3. The Minkowski-Hlawka theorem 4. Schmidt's theorems 5. 
A conjecture of Rogers 6. Unbounded star-bodies Chapter VII. The quotient space 1. Introduction 2. General properties 3. The sum theorem Chapter VIII. Successive minima 1. Introduction 2. Spheres 3. General distance-functions Chapter IX. Packings 1. Introduction 2. Sets with V(φ) = n^2 Δ(φ) 3. Voronoi's results 4. Preparatory lemmas 5. Fejes Toth's theorem 6. Cylinders 7. Packing of spheres 8. The product of n linear forms Chapter X. Automorphs 1. Introduction 2. Special forms 3. A method of Mordell 4. Existence of automorphs 5. Isolation theorems 6. Applications of isolation 7. An infinity of solutions 8. Local methods Chapter XI. Inhomogeneous problems 1. Introduction 2. Convex sets 3. Transference theorems for convex sets 4. The product of n linear forms Appendix References Index", "", "It is shown that the integer linear programming problem with a fixed number of variables is polynomially solvable. The proof depends on methods from geometry of numbers.", "", "", "We determine the maximal gap between the optimal values of an integer program and its linear programming relaxation, where the matrix and cost function are fixed but the right hand side is unspecified. Our formula involves irreducible decomposition of monomial ideals. The gap can be computed in polynomial time when the dimension is fixed.", "In this paper I discuss various properties of the simplicial complex of maximal lattice free bodies associated with a matrix A. If the matrix satisfies some mild conditions, and is generic, the edges of the complex form the minimal test set for the family of integer programs obtained by selecting a particular row of A as the objective function, and using the remaining rows to impose constraints on the integer variables. © 1997 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.", "", "", "Preface Acknowledgements 1. Algorithmic Aspects 2. 
The Frobenius Number for Small n 3. The General Problem 4. Sylvester Denumerant 5. Integers without Representation 6. Generalizations and Related Problems 7. Numerical Semigroups 8. Applications of the Frobenius Number 9. Appendix A Bibliography" ] }
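The counting functions in the Presburger record above can be made concrete with a tiny worked example (my own, not from the cited works): the number of nonnegative integer solutions of x + 2y <= p is a quasi-polynomial in p, with one polynomial piece per residue class of p mod 2, and brute-force counting confirms the two closed forms:

```python
def count_solutions(p):
    """Brute-force count of (x, y) in Z_{>=0}^2 with x + 2y <= p."""
    return sum(1 for y in range(p // 2 + 1) for x in range(p - 2 * y + 1))

def quasi_polynomial(p):
    """Closed form: one polynomial piece per residue class of p mod 2."""
    if p % 2 == 0:
        return (p * p + 4 * p + 4) // 4   # equals (p/2 + 1)^2
    return (p * p + 4 * p + 3) // 4       # equals (p + 1)(p + 3) / 4
```

The two branches share the leading term p^2/4, differing only in the constant, which is the hallmark of an Ehrhart-style quasi-polynomial.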
1210.8418
2126351727
CAELinux is a Linux distribution which is bundled with free software packages related to Computer Aided Engineering (CAE). The free software packages include software that can build a three dimensional solid model, programs that can mesh a geometry, software for carrying out Finite Element Analysis (FEA), programs that can carry out image processing etc. Present work has two goals: 1) To give a brief description of CAELinux 2) To demonstrate that CAELinux could be useful for Computer Aided Engineering, using an example of the three dimensional reconstruction of a pig liver from a stack of CT-scan images. One can note that instead of using CAELinux, using commercial software for reconstructing the liver would cost a lot of money. One can also note that CAELinux is a free and open source operating system and all software packages that are included in the operating system are also free. Hence one can conclude that CAELinux could be a very useful tool in application areas like surgical simulation which require three dimensional reconstructions of biological organs. Also, one can see that CAELinux could be a very useful tool for Computer Aided Engineering, in general.
The present author has not come across any source in the literature that uses CAELinux to reconstruct surface models of three dimensional biological organs from image sequences obtained through CT-scan. However, the practice of using free software packages to reconstruct biological organs like liver and kidney may be found in the author's previous works @cite_21 @cite_3 @cite_2 . The present paper and the author's previous works @cite_21 @cite_3 @cite_2 use the same software packages, i.e., ImageJ @cite_17 @cite_1 @cite_8 , ITK-SNAP @cite_9 @cite_14 , and MeshLab @cite_6 @cite_5 , to perform the three dimensional reconstruction. But the present work differs from these previous works in that the previous works do not use CAELinux, while the present work uses CAELinux alone; CAELinux contains ImageJ, ITK-SNAP and MeshLab.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2127890285", "2138825607", "", "1154621090", "", "2944926434", "", "2107112096", "", "" ], "abstract": [ "Active contour segmentation and its robust implementation using level set methods are well-established theoretical approaches that have been studied thoroughly in the image analysis literature. Despite the existence of these powerful segmentation methods, the needs of clinical research continue to be fulfilled, to a large extent, using slice-by-slice manual tracing. To bridge the gap between methodological advances and clinical routine, we developed an open source application called ITK-SNAP, which is intended to make level set segmentation easily accessible to a wide range of users, including those with little or no mathematical expertise. This paper describes the methods and software engineering philosophy behind this new tool and provides the results of validation experiments performed in the context of an ongoing child autism neuroimaging study. The validation establishes SNAP intrarater and interrater reliability and overlap error statistics for the caudate nucleus and finds that SNAP is a highly reliable and efficient alternative to manual tracing. Analogous results for lateral ventricle", "Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS). ImageJ is easy to use and can do many imaging manipulations. A very large and knowledgeable group makes up the user community for ImageJ. Topics covered are imaging abilities; cross platform; image formats support as of June 2004; extensions, including macros and plug-ins; and imaging library. 
NIH reports tens of thousands of downloads at a rate of about 24,000 per month currently. ImageJ can read most of the widely used and significant formats used in biomedical images. Manipulations supported are read write of image files and operations on separate pixels, image regions, entire images, and volumes (stacks in ImageJ). Basic operations supported include convolution, edge detection, Fourier transform, histogram and particle analyses, editing and color manipulation, and more advanced operations, as well as visualization. For assistance in using ImageJ, users e-mail each other, and the user base is highly knowledgeable and will answer requests on the mailing list. A thorough manual with many examples and illustrations has been written by Tony Collins of the Wright Cell Imaging Facility at Toronto Western Research Institute and is available, along with other listed resources, via the Web.", "", "In this work, a procedure is presented for the reconstruction of biological organs from image sequences obtained through CT-scan. Although commercial software, which can accomplish this task, are readily available, the procedure presented here needs only free software. The procedure has been applied to reconstruct a liver from the scan data available in literature. 3D biological organs obtained this way can be used for the finite element analysis of biological organs and this has been demonstrated by carrying out an FE analysis on the reconstructed liver.", "", "This work presents a methodology to reconstruct 3D biological organs from image sequences or other scan data using readily available free softwares with the final goal of using the organs (3D solids) for finite element analysis. The methodology deals with issues such as segmentation, conversion to polygonal surface meshes, and finally conversion of these meshes to 3D solids. The user is able to control the detail or the level of complexity of the solid constructed. 
The methodology is illustrated using 3D reconstruction of a porcine liver as an example. Finally, the reconstructed liver is imported into the commercial software ANSYS, and together with a cyst inside the liver, a nonlinear analysis performed. The results confirm that the methodology can be used for obtaining 3D geometry of biological organs. The results also demonstrate that the geometry obtained by following this methodology can be used for the nonlinear finite element analysis of organs. The methodology (or the procedure) would be of use in surgery planning and surgery simulation since both of these extensively use finite elements for numerical simulations and it is better if these simulations are carried out on patient specific organ geometries. Instead of following the present methodology, it would cost a lot to buy a commercial software which can reconstruct 3D biological organs from scanned image sequences.", "", "Three dimensional digital model of a representative human kidney is needed for a surgical simulator that is capable of simulating a laparoscopic surgery involving kidney. Buying a three dimensional computer model of a representative human kidney, or reconstructing a human kidney from an image sequence using commercial software, both involve (sometimes significant amount of) money. In this paper, author has shown that one can obtain a three dimensional surface model of human kidney by making use of images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP, and MeshLab in particular). Images from the Visible Human Data Set, and the software packages used here, both do not cost anything. Hence, the practice of extracting the geometry of a representative human kidney for free, as illustrated in the present work, could be a free alternative to the use of expensive commercial software or to the purchase of a digital model.", "", "" ] }
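The first step of the reconstruction pipeline described in the record above, segmenting the organ from a stack of scan images, amounts to labeling voxels of interest. The sketch below is only a minimal thresholding illustration on a synthetic volume (ITK-SNAP's active-contour segmentation is far more sophisticated; the array sizes and intensity values are my own assumptions):

```python
import numpy as np

def segment_stack(volume, threshold):
    """Label voxels brighter than the intensity threshold as organ (True)."""
    return volume > threshold

# synthetic 8x8x8 image stack: a bright 4x4x4 cube in a dark background
vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 100.0
mask = segment_stack(vol, 50.0)
```

The resulting binary mask is what a surface-meshing step (MeshLab, in the cited pipeline) would then turn into a polygonal model.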
1210.8418
2126351727
CAELinux is a Linux distribution which is bundled with free software packages related to Computer Aided Engineering (CAE). The free software packages include software that can build a three dimensional solid model, programs that can mesh a geometry, software for carrying out Finite Element Analysis (FEA), programs that can carry out image processing etc. Present work has two goals: 1) To give a brief description of CAELinux 2) To demonstrate that CAELinux could be useful for Computer Aided Engineering, using an example of the three dimensional reconstruction of a pig liver from a stack of CT-scan images. One can note that instead of using CAELinux, using commercial software for reconstructing the liver would cost a lot of money. One can also note that CAELinux is a free and open source operating system and all software packages that are included in the operating system are also free. Hence one can conclude that CAELinux could be a very useful tool in application areas like surgical simulation which require three dimensional reconstructions of biological organs. Also, one can see that CAELinux could be a very useful tool for Computer Aided Engineering, in general.
Of course, @cite_21 @cite_3 @cite_2 are not the only works found in the literature that use free software packages to extract the geometry of biological organs. For example, @cite_12 deals with the three dimensional reconstruction of liver slice images using the free software called MITK @cite_19 . Also, one can find many works that deal with the three dimensional reconstruction of biological organs using commercial software packages, e.g., @cite_7 @cite_10 @cite_11 @cite_0 .
{ "cite_N": [ "@cite_7", "@cite_21", "@cite_3", "@cite_0", "@cite_19", "@cite_2", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "1993811392", "1154621090", "2944926434", "2413459418", "", "2107112096", "2102972667", "2111850706", "2053299773" ], "abstract": [ "The anatomy of the pelvis is complex, multilayered, and its three-dimensional organization is conceptually difficult for students to grasp. The aim of this project was to create an explorable and projectable stereoscopic, three-dimensional (3D) model of the female pelvis and pelvic contents for anatomical education. The model was created using cryosection images obtained from the Visible Human Project, in conjunction with a general-purpose three-dimensional segmentation and surface-rendering program. Anatomical areas of interest were identified and labeled on consecutive images. Each 2D slice was reassembled, forming a three-dimensional model. The model includes the pelvic girdle, organs of the pelvic cavity, surrounding musculature, the perineum, neurovascular structures, and the peritoneum. Each structure can be controlled separately (e.g. added, subtracted, made transparent) to reveal organization and/or relationships between structures. The model can be manipulated and/or projected stereoscopically to visualize structures and relationships from different angles with excellent spatial perception. Because of its ease of use and versatility, we expect this model may provide a powerful teaching tool for learning in the classroom or in the laboratory. Anat Sci Educ. © 2010 American Association of Anatomists.", "In this work, a procedure is presented for the reconstruction of biological organs from image sequences obtained through CT-scan. Although commercial software, which can accomplish this task, is readily available, the procedure presented here needs only free software. The procedure has been applied to reconstruct a liver from the scan data available in the literature. 
3D biological organs obtained this way can be used for the finite element analysis of biological organs and this has been demonstrated by carrying out an FE analysis on the reconstructed liver.", "This work presents a methodology to reconstruct 3D biological organs from image sequences or other scan data using readily available free software with the final goal of using the organs (3D solids) for finite element analysis. The methodology deals with issues such as segmentation, conversion to polygonal surface meshes, and finally conversion of these meshes to 3D solids. The user is able to control the detail or the level of complexity of the solid constructed. The methodology is illustrated using 3D reconstruction of a porcine liver as an example. Finally, the reconstructed liver is imported into the commercial software ANSYS, and together with a cyst inside the liver, a nonlinear analysis is performed. The results confirm that the methodology can be used for obtaining 3D geometry of biological organs. The results also demonstrate that the geometry obtained by following this methodology can be used for the nonlinear finite element analysis of organs. The methodology (or the procedure) would be of use in surgery planning and surgery simulation since both of these extensively use finite elements for numerical simulations and it is better if these simulations are carried out on patient-specific organ geometries. Instead of following the present methodology, it would cost a lot to buy commercial software which can reconstruct 3D biological organs from scanned image sequences.", "Background: Compared with two dimensional (2D) imaging, both in diagnosis and treatment, three dimensional (3D) imaging has many advantages in clinical medicine. 3D reconstruction makes the target easier to identify and reveals the volume and shape of the organ much better than 2D imaging. 
A 3D digitized visible model of the liver was built to provide anatomical structure for planning of hepatic operations and for realizing accurate simulation of the liver on the computer. Methods Transverse sections of the abdomen were chosen from the Chinese Visible Human dataset. Amira software was selected to segment and reconstruct the structures of the liver. The liver was reconstructed in three dimensions with both surface and volume rendering reconstruction. Results Accurately segmented images of the main structures of the liver were completed. The reconstructed structures can be displayed singly, in small groups or as a whole and can be continuously rotated in 3D space at different velocities. Conclusions The reconstructed liver is realistic, which demonstrates the natural shape and exact position of liver structures. It provides an accurate model for the automated segmentation algorithmic study and a digitized anatomical mode of viewing the liver.", "", "A three dimensional digital model of a representative human kidney is needed for a surgical simulator that is capable of simulating a laparoscopic surgery involving the kidney. Buying a three dimensional computer model of a representative human kidney, or reconstructing a human kidney from an image sequence using commercial software, both involve (sometimes a significant amount of) money. In this paper, the author has shown that one can obtain a three dimensional surface model of a human kidney by making use of images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP, and MeshLab in particular). Images from the Visible Human Data Set, and the software packages used here, both do not cost anything. 
Hence, the practice of extracting the geometry of a representative human kidney for free, as illustrated in the present work, could be a free alternative to the use of expensive commercial software or to the purchase of a digital model.", "Unlike volume models, surface models, which are empty three-dimensional images, have a small file size, so they can be displayed, rotated, and modified in real time. Thus, surface models of male urogenital organs can be effectively applied to an interactive computer simulation and contribute to the clinical practice of urologists. To create high-quality surface models, the urogenital organs and other neighboring structures were outlined in 464 sectioned images of the Visible Korean male using Adobe Photoshop; the outlines were interpolated on Discreet Combustion; then an almost automatic volume reconstruction followed by surface reconstruction was performed on 3D-DOCTOR. The surface models were refined and assembled in their proper positions on Maya, and a surface model was coated with actual surface texture acquired from the volume model of the structure on specially programmed software. In total, 95 surface models were prepared, particularly complete models of the urinary and genital tracts. These surface models will be distributed to encourage other investigators to develop various kinds of medical training simulations. Increasingly automated surface reconstruction technology using commercial software will enable other researchers to produce their own surface models more effectively.", "MITK supports an extensive set of image processing and volume rendering functionality, and it is a very convenient tool. 3-D reconstruction of the liver is performed by MITK under the VC++ 6.0 platform. Compared with 3-D reconstruction results by Amira, a good surface image of the liver could be obtained by MITK. 
The 3-D reconstruction image of the liver can be used for finite element analysis and temperature field simulation, which may be applied in surgical planning systems and guide the process of clinical operations.", "Three-dimensional (3D) reconstruction of intrahepatic vessels is very useful in visualizing the complex anatomy of hepatic veins and intrahepatic portal vein. It also provides a 3D anatomic basis for diagnostic imaging and surgical operation on the liver. In the present study, we built a 3D digitized model of hepatic veins and intrahepatic portal vein based on the coronal sectional anatomic dataset of the liver. The dataset was obtained using the digital freezing milling technique. The pre-reconstructed structures were identified and extracted, and then were segmented by the method of manual intervention. The digitized model of hepatic veins and intrahepatic portal vein was established using 3D medical visualization software. This model facilitated a continuous and dynamic display of the hepatic veins and intrahepatic portal vein at different orientations, which demonstrated the complicated relationship of adjacent hepatic veins and intrahepatic portal vein realistically in the 3D space. This study indicated that high-quality 2D images, precise data segmentation, and suitable 3D reconstruction methods ensured the reality and accuracy of the digital visualized model of hepatic veins and intrahepatic portal vein." ] }
1210.7970
2950442529
We introduce and analyze greedy equilibria (GE) for the well-known model of selfish network creation by Fabrikant et al. [PODC'03]. GE are interesting for two reasons: (1) they model outcomes found by agents which prefer smooth adaptations over radical strategy-changes, (2) GE are outcomes found by agents which do not have enough computational resources to play optimally. In the model of Fabrikant et al., agents correspond to Internet Service Providers which buy network links to improve their quality of network usage. It is known that computing a best response in this model is NP-hard. Hence, poly-time agents are likely not to play optimally. But how good are networks created by such agents? We answer this question for very simple agents. Quite surprisingly, naive greedy play suffices to create remarkably stable networks. Specifically, we show that in the SUM version, where agents attempt to minimize their average distance to all other agents, GE capture Nash equilibria (NE) on trees and that any GE is in 3-approximate NE on general networks. For the latter we also provide a lower bound of 3/2 on the approximation ratio. For the MAX version, where agents attempt to minimize their maximum distance, we show that any GE-star is in 2-approximate NE and any GE-tree having larger diameter is in 6/5-approximate NE. Both bounds are tight. We contrast these positive results by providing a linear lower bound on the approximation ratio for the MAX version on general networks in GE. This result implies a locality gap of @math for the metric min-max facility location problem, where n is the number of clients.
A part of our work focuses on tree networks. Such topologies are common outcomes of NCGs if edges are expensive, which led the authors of @cite_3 to conjecture that all (non-transient) stable networks of NCGs are trees if @math is greater than some constant. The conjecture was later disproved by @cite_4 but it was shown to be true for high edge-cost. In particular, the authors of @cite_2 proved that all stable networks are trees if @math in the SUM version or if @math in the MAX version. Experimental evidence suggests that this transition to tree networks already happens at much lower edge-cost and it is an interesting open problem to improve on these bounds.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2032349632", "", "2594715668" ], "abstract": [ "We study a network creation game recently proposed by Fabrikant, Luthra, Maneva, Papadimitriou and Shenker. In this game, each player (vertex) can create links (edges) to other players at a cost of α per edge. The goal of every player is to minimize the sum consisting of (a) the cost of the links he has created and (b) the sum of the distances to all other players. Fabrikant et al. conjectured that there exists a constant A such that, for any α > A, all non-transient Nash equilibria graphs are trees. They showed that if a Nash equilibrium is a tree, the price of anarchy is constant. In this paper we disprove the tree conjecture. More precisely, we show that for any positive integer n₀, there exists a graph built by n ≥ n₀ players which contains cycles and forms a non-transient Nash equilibrium, for any α with 1 < α ≤ √(n/2). Our construction makes use of some interesting results on finite affine planes. On the other hand we show that, for α ≥ 12n⌈log n⌉, every Nash equilibrium forms a tree. Without relying on the tree conjecture, Fabrikant et al. proved an upper bound on the price of anarchy of O(√α), where α ∈ [2, n²]. We improve this bound. Specifically, we derive a constant upper bound for α ∈ O(√n) and for α ≥ 12n⌈log n⌉. For the intermediate values we derive an improved bound of O(1 + (min{α²/n, n²/α})^(1/3)). Additionally, we develop characterizations of Nash equilibria and extend our results to a weighted network creation game as well as to scenarios with cost sharing.", "", "We study the price of anarchy and the structure of equilibria in network creation games. A network creation game (first defined and studied by [4]) is played by n players 1, 2, ..., n, each identified with a vertex of a graph (network), where the strategy of player i, i = 1, ..., n, is to build some edges adjacent to i. The cost of building an edge is α > 0, a fixed parameter of the game. 
The goal of every player is to minimize its creation cost plus its usage cost. The creation cost of player i is α times the number of built edges. In the SumGame (the original variant of [4]) the usage cost of player i is the sum of distances from i to every node of the resulting graph. In the MaxGame (variant defined and studied by [3]) the usage cost is the eccentricity of i in the resulting graph of the game. In this paper we improve previously known bounds on the price of anarchy of the game (of both variants) for various ranges of α, and give new insights into the structure of equilibria for various values of α. The two main results of the paper show that for α > 273 · n all equilibria in SumGame are trees and thus the price of anarchy is constant, and that for α > 129 all equilibria in MaxGame are trees and the price of anarchy is constant. For SumGame this (almost) answers one of the basic open problems in the field – is price of anarchy of the network creation game constant for all values of α? – in an affirmative way, up to a tiny range of α." ] }
1210.7970
2950442529
We introduce and analyze greedy equilibria (GE) for the well-known model of selfish network creation by Fabrikant et al. [PODC'03]. GE are interesting for two reasons: (1) they model outcomes found by agents which prefer smooth adaptations over radical strategy-changes, (2) GE are outcomes found by agents which do not have enough computational resources to play optimally. In the model of Fabrikant et al., agents correspond to Internet Service Providers which buy network links to improve their quality of network usage. It is known that computing a best response in this model is NP-hard. Hence, poly-time agents are likely not to play optimally. But how good are networks created by such agents? We answer this question for very simple agents. Quite surprisingly, naive greedy play suffices to create remarkably stable networks. Specifically, we show that in the SUM version, where agents attempt to minimize their average distance to all other agents, GE capture Nash equilibria (NE) on trees and that any GE is in 3-approximate NE on general networks. For the latter we also provide a lower bound of 3/2 on the approximation ratio. For the MAX version, where agents attempt to minimize their maximum distance, we show that any GE-star is in 2-approximate NE and any GE-tree having larger diameter is in 6/5-approximate NE. Both bounds are tight. We contrast these positive results by providing a linear lower bound on the approximation ratio for the MAX version on general networks in GE. This result implies a locality gap of @math for the metric min-max facility location problem, where n is the number of clients.
@cite_13 investigated NCGs, where agents cannot buy every possible edge. Furthermore, @cite_17 recently analyzed a bounded-budget version. Both versions seem realistic, but in the following we do not restrict the set of edges which can be bought or the budget of an agent. Clearly, such restrictions reduce the qualitative gap between simple and arbitrary strategy-changes and would lead to weaker results for our analysis. Note that this indicates that outcomes found by simple agents in the edge- or budget-restricted version may be even more stable than we show in the following sections.
{ "cite_N": [ "@cite_13", "@cite_17" ], "mid": [ "2117907411", "2132936659" ], "abstract": [ "A fundamental family of problems at the intersection between computer science and operations research is network design. This area of research has become increasingly important given the continued growth of computer networks such as the Internet. Traditionally, we want to find a minimum-cost (sub)network that satisfies some specified property such as k-connectivity or connectivity on terminals (as in the classic Steiner tree problem). This goal captures the (possibly incremental) creation cost of the network, but does not incorporate the cost of actually using the network. In contrast, network routing has the goal of optimizing the usage cost of the network, but assumes that the network has already been created.", "We consider a network creation game in which, each player (vertex) has a limited budget to establish links to other players. In our model, each link has a unit cost and each agent tries to minimize its cost which is its local diameter or its total distance to other players in the (undirected) underlying graph of the created network. Two variants of the game are studied: in the MAX version, the cost incurred to a vertex is the maximum distance between that vertex and other vertices, and in the SUM version, the cost incurred to a vertex is the sum of distances between that vertex and other vertices. We prove that in both versions pure Nash equilibria exist, but the problem of finding the best response of a vertex is NP-hard. Next, we study the maximum possible diameter of an equilibrium graph with n vertices in various cases. For infinite numbers of n, we construct an equilibrium tree with diameter Θ(n) in the MAX version. Also, we prove that the diameter of any equilibrium tree is O(log n) in the SUM version and this bound is tight. When all vertices have unit budgets (i.e. can establish link to just one vertex), the diameter in both versions is O(1). 
We give an example of an equilibrium graph in the MAX version, such that all vertices have positive budgets and yet the diameter is as large as Ω(√log n). This interesting result shows that the diameter does not necessarily decrease and may increase as the budgets are increased. For the SUM version, we prove that every equilibrium graph has diameter 2^O(√log n) when all vertices have positive budgets. Moreover, if the budget of every player is at least k, then every equilibrium graph with diameter more than 3 is k-connected." ] }
1210.7970
2950442529
We introduce and analyze greedy equilibria (GE) for the well-known model of selfish network creation by Fabrikant et al. [PODC'03]. GE are interesting for two reasons: (1) they model outcomes found by agents which prefer smooth adaptations over radical strategy-changes, (2) GE are outcomes found by agents which do not have enough computational resources to play optimally. In the model of Fabrikant et al., agents correspond to Internet Service Providers which buy network links to improve their quality of network usage. It is known that computing a best response in this model is NP-hard. Hence, poly-time agents are likely not to play optimally. But how good are networks created by such agents? We answer this question for very simple agents. Quite surprisingly, naive greedy play suffices to create remarkably stable networks. Specifically, we show that in the SUM version, where agents attempt to minimize their average distance to all other agents, GE capture Nash equilibria (NE) on trees and that any GE is in 3-approximate NE on general networks. For the latter we also provide a lower bound of 3/2 on the approximation ratio. For the MAX version, where agents attempt to minimize their maximum distance, we show that any GE-star is in 2-approximate NE and any GE-tree having larger diameter is in 6/5-approximate NE. Both bounds are tight. We contrast these positive results by providing a linear lower bound on the approximation ratio for the MAX version on general networks in GE. This result implies a locality gap of @math for the metric min-max facility location problem, where n is the number of clients.
To the best of our knowledge, approximate Nash equilibria have not been studied before in the context of selfish network creation. Closest to our approach here may be the work of @cite_14 , which analyzes for a related game how tolerant the agents have to be in order to accept a centrally designed solution. We adopt a similar point of view by asking how tolerant agents have to be to accept a solution found by greedy play.
{ "cite_N": [ "@cite_14" ], "mid": [ "1974040162" ], "abstract": [ "In this paper, we study a basic network design game in which n self-interested agents, each having individual connectivity requirements, wish to build a network by purchasing links from a given set of edges. A fundamental cost-sharing mechanism is Shapley cost-sharing, which splits the cost of an edge in a fair manner among the agents using the edge. It is well known that in such games, the price of anarchy is n, while the price of stability is H(n), where H(n) denotes the nth harmonic number. We investigate whether an optimal minimum-cost network represents an attractive, relatively stable state that agents might want to purchase: what extra cost does an agent incur compared to a possible strategy deviation? We employ the concept of α-approximate Nash equilibria, in which no agent can reduce its cost by a factor of more than α. We prove that for single-source games in undirected graphs, every optimal network represents an H(n)-approximate Nash equilibrium. We show that this bound is tight by presenting a..." ] }
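The SUM-version cost and the single-edge greedy moves described in the abstracts above can be made concrete. The following is a minimal brute-force sketch, assuming unit-length undirected edges; the function names and the data layout (a dict mapping each agent to the set of endpoints it buys edges to) are my own, not the paper's implementation:

```python
from collections import deque

def distances(n, edges, s):
    """BFS distances from s in the undirected graph on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return [dist.get(v, float("inf")) for v in range(n)]

def sum_cost(n, bought, agent, alpha):
    """SUM-version cost: alpha per bought edge plus total distance to all nodes."""
    edges = [(u, v) for u, owned in bought.items() for v in owned]
    return alpha * len(bought[agent]) + sum(distances(n, edges, agent))

def is_greedy_equilibrium(n, bought, alpha):
    """True iff no agent can lower its cost by adding, deleting, or
    swapping a single one of its own edges (the 'greedy' moves)."""
    for a in range(n):
        base = sum_cost(n, bought, a, alpha)
        owned = bought[a]
        candidates = [v for v in range(n) if v != a and v not in owned]
        moves = [owned | {v} for v in candidates]                          # add one
        moves += [owned - {v} for v in owned]                              # delete one
        moves += [(owned - {u}) | {v} for u in owned for v in candidates]  # swap one
        for new_owned in moves:
            trial = dict(bought)
            trial[a] = new_owned
            if sum_cost(n, trial, a, alpha) < base:
                return False
    return True

# A star whose center buys every edge: stable under greedy moves for alpha = 2.
star = {0: {1, 2, 3, 4}, 1: set(), 2: set(), 3: set(), 4: set()}
print(is_greedy_equilibrium(5, star, 2.0))   # prints: True

# A path with cheap edges: endpoint 0 profits from buying a greedy shortcut to 3.
path = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(is_greedy_equilibrium(4, path, 0.5))   # prints: False
```

For the star, no single add, delete, or swap pays off for any agent, matching the intuition that stars are stable for moderate α; on the cheap-edge path, a single shortcut purchase lowers an endpoint's cost, so the greedy-equilibrium check fails.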
1210.7970
2950442529
We introduce and analyze greedy equilibria (GE) for the well-known model of selfish network creation by Fabrikant et al. [PODC'03]. GE are interesting for two reasons: (1) they model outcomes found by agents which prefer smooth adaptations over radical strategy-changes, (2) GE are outcomes found by agents which do not have enough computational resources to play optimally. In the model of Fabrikant et al., agents correspond to Internet Service Providers which buy network links to improve their quality of network usage. It is known that computing a best response in this model is NP-hard. Hence, poly-time agents are likely not to play optimally. But how good are networks created by such agents? We answer this question for very simple agents. Quite surprisingly, naive greedy play suffices to create remarkably stable networks. Specifically, we show that in the SUM version, where agents attempt to minimize their average distance to all other agents, GE capture Nash equilibria (NE) on trees and that any GE is in 3-approximate NE on general networks. For the latter we also provide a lower bound of 3/2 on the approximation ratio. For the MAX version, where agents attempt to minimize their maximum distance, we show that any GE-star is in 2-approximate NE and any GE-tree having larger diameter is in 6/5-approximate NE. Both bounds are tight. We contrast these positive results by providing a linear lower bound on the approximation ratio for the MAX version on general networks in GE. This result implies a locality gap of @math for the metric min-max facility location problem, where n is the number of clients.
Gulyás et al. @cite_5 recently published a paper having a very similar title to ours. They investigate networks created by agents who use the length of "greedy paths" as communication cost and show that the resulting equilibria are substantially different from the ones we consider here. Their term "greedy" refers to the distances, whereas our term "greedy" refers to the behavior of the agents.
{ "cite_N": [ "@cite_5" ], "mid": [ "2107820636" ], "abstract": [ "Greedy navigability is a central issue in the theory of networks. However, the exogenous nature of network models does not allow for describing how greedy routable-networks emerge in reality. In turn, network formation games focus on the very emergence process, but the applied shortest-path based cost functions exclude navigational aspects. This paper takes a first step towards incorporating both emergence (missing in algorithmic network models) and greedy navigability (missing in network formation games) into a single framework, and proposes the Greedy Network Formation Game. Our first contribution is the game definition, where we assume a hidden metric space underneath the network, and, instead of the usual shortest-path metric, we use the length of greedy paths as the measure of communication cost between players. Our main finding is that greedy-routable small worlds do not emerge on constant dimensional Euclidean grids. This simply means that the emergence of topologies on which we understood the principles of greedy forwarding cannot be explained endogenously. We also present a very brief outlook on how the situation changes in the hyperbolic space." ] }
1210.7350
2950290221
We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time "twist": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of "big data". We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a "big data" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle "big" as well as "fast" data.
In information retrieval (IR), the general idea of augmenting a user's query is closely related to relevance feedback, which dates back to the 1960s @cite_28 . One specific form, pseudo-relevance feedback, automatically extracts expansion terms from an initial query's top-ranked results (see @cite_23 for a more modern formulation). Whether the user controls the use of these additional query terms is an interface design decision @cite_36 . We can consider the case where expansion terms are explicitly controlled by the user to be an early form of query suggestion---these and related techniques have been widely known in the IR literature for decades and predate the web.
{ "cite_N": [ "@cite_28", "@cite_36", "@cite_23" ], "mid": [ "2164547069", "1982451429", "2169213601" ], "abstract": [ "1332840 Primer compositions DOW CORNING CORP 6 Oct 1971 [30 Dec 1970] 46462/71 Heading C3T [Also in Divisions B2 and C4] A primer composition comprises 1 pbw of tetra ethoxy or propoxy silane or poly ethyl or propyl silicate or any mixture thereof, 0.75-2.5 pbw of bis(acetylacetonyl) diisopropyl titanate, 0.75-5 pbw of a compound CF3CH2CH2Si[OSi(CH3)2X]3 wherein each X is H or -CH2CH2Si(OOCCH3)3, at least one being the latter, and 1-20 pbw of a ketone, hydrocarbon or halohydrocarbon solvent boiling not above 150° C. In the examples 1 pbw each of bis(acetylacetonyl)diisopropyl titanate, polyethyl silicate and are dissolved in 10 pbw of acetone or in 9 pbw of light naphtha and 1 of methyl isobutyl ketone. The solutions are used to prime Ti panels, to which a Pt-catalysed room-temperature vulcanizable poly-trifluoropropylmethyl siloxane-based rubber is then applied.", "", "We explore the relation between classical probabilistic models of information retrieval and the emerging language modeling approaches. It has long been recognized that the primary obstacle to effective performance of classical models is the need to estimate a relevance model: probabilities of words in the relevant class. We propose a novel technique for estimating these probabilities using the query alone. We demonstrate that our technique can produce highly accurate relevance models, addressing important notions of synonymy and polysemy. Our experiments show relevance models outperforming baseline language modeling systems on TREC retrieval and TDT tracking tasks. The main contribution of this work is an effective formal method for estimating a relevance model with no training data." ] }
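The pseudo-relevance feedback idea sketched in the related-work paragraph above (expansion terms extracted automatically from an initial query's top-ranked results) can be illustrated in a few lines. This is a deliberately crude sketch with a toy tf-idf-style score over the feedback documents; the function name and scoring formula are mine, not the formulation of @cite_23:

```python
import math
from collections import Counter

def prf_expand(query, ranked_docs, k=3, m=2):
    """Pseudo-relevance feedback: treat the top-k documents retrieved for the
    original query as relevant, score their terms with a crude tf-idf over
    that feedback set, and append the m highest-scoring new terms."""
    top = [doc.lower().split() for doc in ranked_docs[:k]]
    tf = Counter(t for doc in top for t in doc)        # term frequency in feedback set
    df = Counter(t for doc in top for t in set(doc))   # document frequency in feedback set
    query_terms = query.lower().split()
    scored = {t: tf[t] * math.log(1 + k / df[t])
              for t in tf if t not in query_terms}
    # Deterministic tie-break: higher score first, then alphabetical.
    expansion = sorted(scored, key=lambda t: (-scored[t], t))[:m]
    return query_terms + expansion

docs = ["jaguar engine speed", "jaguar engine parts", "jaguar cat habitat"]
print(prf_expand("jaguar", docs, k=2, m=1))   # prints: ['jaguar', 'engine']
```

With k = 2 the feedback set contains only the two automotive documents, so "engine" outscores the singleton terms and the query drifts toward the dominant sense of the top results, which is exactly the behavior (and the well-known risk) of pseudo-relevance feedback.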
1210.7350
2950290221
We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time "twist": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of "big data". We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a "big data" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle "big" as well as "fast" data.
Prior to the web, most query expansion work focused on capturing term correlations across global and local contexts in the document collection @cite_2 . The advent of web search engines, however, provided a new and much richer resource to mine: query, clickthrough, and other behavioral interaction logs. One of the earliest uses of logs for query expansion is the work of @cite_11 , who used clickthrough data to establish correlations between query terms and document terms, which were then extracted for query expansion. Relatedly, a family of query suggestion techniques involves constructing a bipartite graph of queries and clicked URLs, on which random walks @cite_12 or clustering @cite_15 can be performed; cf. @cite_49 . Another use of query logs is to extract query substitutions from search sessions by mining statistical associations from users' successive queries @cite_27 ---this is the general approach we adopt. Similar techniques are also effective for spelling correction @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_49", "@cite_2", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "66690650", "2170741935", "1743515408", "2002306339", "2171743956", "2153190022", "2099548400" ], "abstract": [ "Logs of user queries to an internet search engine provide a large amount of implicit and explicit information about language. In this paper, we investigate their use in spelling correction of search queries, a task which poses many additional challenges beyond the traditional spelling correction problem. We present an approach that uses an iterative transformation of the input query strings into other strings that correspond to more and more likely queries according to statistics extracted from internet search query logs.", "We introduce the notion of query substitution, that is, generating a new query to replace a user's original search query. Our technique uses modifications based on typical substitutions web searchers make to their queries. In this way the new query is strongly related to the original query, containing terms closely related to all of the original terms. This contrasts with query expansion through pseudo-relevance feedback, which is costly and can lead to query drift. This also contrasts with query relaxation through boolean or TFIDF retrieval, which reduces the specificity of the query. We define a scale for evaluating query substitution, and show that our method performs well at generating new queries related to the original queries. We build a model for selecting between candidates, by using a number of features relating the query-candidate pair, and by fitting the model to human judgments of relevance of query suggestions. This further improves the quality of the candidates generated. 
Experiments show that our techniques significantly increase coverage and effectiveness in the setting of sponsored search.", "A recent query-log mining approach for query recommendation is based on Query Flow Graphs, a Markov-chain representation of the query reformulation process followed by users of Web Search Engines trying to satisfy their information needs. In this paper we aim at extending this model by providing methods for dealing with evolving data. In fact, users' interests change over time, and the knowledge extracted from query logs may suffer an aging effect as new interesting topics appear. Starting from this observation validated experimentally, we introduce a novel algorithm for updating an existing query flow graph. The proposed solution allows the recommendation model to be kept always updated without reconstructing it from scratch every time, by incrementally merging efficiently the past and present data.", "Techniques for automatic query expansion have been extensively studied in information retrieval research as a means of addressing the word mismatch between queries and documents. These techniques can be categorized as either global or local. While global techniques rely on analysis of a whole collection to discover word relationships, local techniques emphasize analysis of the top-ranked documents retrieved for a query. While local techniques have been shown to be more effective than global techniques in general, existing local techniques are not robust and can seriously hurt retrieval when few of the retrieved documents are relevant. We propose a new technique, called local context analysis, which selects expansion terms based on cooccurrence with the query terms within the top-ranked documents. Experiments on a number of collections, both English and non-English, show that local context analysis offers more effective and consistent retrieval results.", "Query suggestion plays an important role in improving the usability of search engines. 
Although some recently proposed methods can make meaningful query suggestions by mining query patterns from search logs, none of them are context-aware - they do not take into account the immediately preceding queries as context in query suggestion. In this paper, we propose a novel context-aware query suggestion approach which is in two steps. In the offline model-learning step, to address data sparseness, queries are summarized into concepts by clustering a click-through bipartite. Then, from session data a concept sequence suffix tree is constructed as the query suggestion model. In the online query suggestion step, a user's search context is captured by mapping the query sequence submitted by the user to a sequence of concepts. By looking up the context in the concept sequence suffix tree, our approach suggests queries to the user in a context-aware manner. We test our approach on a large-scale search log of a commercial search engine containing 1.8 billion search queries, 2.6 billion clicks, and 840 million query sessions. The experimental results clearly show that our approach outperforms two baseline methods in both coverage and quality of suggestions.", "Generating alternative queries, also known as query suggestion, has long been proved useful to help a user explore and express his information need. In many scenarios, such suggestions can be generated from a large scale graph of queries and other accessory information, such as the clickthrough. However, how to generate suggestions while ensuring their semantic consistency with the original query remains a challenging problem. In this work, we propose a novel query suggestion algorithm based on ranking queries with the hitting time on a large scale bipartite graph. Without involvement of twisted heuristics or heavy tuning of parameters, this method clearly captures the semantic consistency between the suggested query and the original query. 
Empirical experiments on a large scale query log of a commercial search engine and a scientific literature collection show that hitting time is effective to generate semantically consistent query suggestions. The proposed algorithm and its variations can successfully boost long tail queries, accommodating personalized query suggestion, as well as finding related authors in research.", "Queries to search engines on the Web are usually short. They do not provide sufficient information for an effective selection of relevant documents. Previous research has proposed the utilization of query expansion to deal with this problem. However, expansion terms are usually determined on term co-occurrences within documents. In this study, we propose a new method for query expansion based on user interactions recorded in user logs. The central idea is to extract correlations between query terms and document terms by analyzing user logs. These correlations are then used to select high-quality expansion terms for new queries. Compared to previous query expansion methods, ours takes advantage of the user judgments implied in user logs. The experimental results show that the log-based query expansion method can produce much better results than both the classical search method and the other query expansion methods." ] }
1210.7350
2950290221
We present the architecture behind Twitter's real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time "twist": after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of "big data". We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a "big data" platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle "big" as well as "fast" data.
There has been much related work on analyzing temporal patterns of web search queries. @cite_45 were among the first to model bursts in web queries to identify semantically similar queries from the MSN query logs. The temporal profile of queries has been analyzed @cite_37 and exploited to capture lexical semantic relationships @cite_35 @cite_38 . Forecasted query frequency has also been shown to be helpful in query auto-completion @cite_19 . Most recently, Radinsky et al. @cite_29 proposed a general temporal modeling framework for user behavior in terms of queries, URLs, and clicks.
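The temporal-profile idea can be illustrated by comparing query-frequency time series directly: queries with similar demand curves are candidates for being semantically related. The daily counts below are invented toy data, and the cited works use richer measures (e.g. Fourier coefficients) over real logs:

```python
import numpy as np

# Hypothetical daily query counts over two weeks (toy data).
freq = {
    "halloween costumes": np.array([1, 1, 2, 3, 5, 9, 15, 22, 30, 18, 4, 2, 1, 1], float),
    "pumpkin carving":    np.array([0, 1, 1, 2, 4, 8, 13, 20, 28, 15, 3, 1, 1, 0], float),
    "tax filing":         np.array([5, 5, 6, 5, 4, 5, 5, 6, 5, 5, 4, 5, 6, 5], float),
}

def temporal_sim(a, b):
    """Pearson correlation of two query-frequency time series."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_related = temporal_sim(freq["halloween costumes"], freq["pumpkin carving"])
sim_unrelated = temporal_sim(freq["halloween costumes"], freq["tax filing"])
# The two seasonal queries correlate strongly; the flat one does not.
```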
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_38", "@cite_29", "@cite_19", "@cite_45" ], "mid": [ "1980248478", "2044002869", "2018022208", "2012354735", "", "2057714964" ], "abstract": [ "In this paper we investigate temporal patterns of web search queries. We carry out several evaluations to analyze the properties of temporal profiles of queries, revealing promising semantic and pragmatic relationships between words. We focus on two applications: query suggestion and query categorization. The former shows a potential for time-series similarity measures to identify specific semantic relatedness between words, which results in state-of-the-art performance in query suggestion while providing complementary information to more traditional distributional similarity measures. The query categorization evaluation suggests that the temporal profile alone is not a strong indicator of broad topical categories.", "Documents with timestamps, such as email and news, can be placed along a timeline. The timeline for a set of documents returned in response to a query gives an indication of how documents relevant to that query are distributed in time. Examining the timeline of a query result set allows us to characterize both how temporally dependent the topic is, as well as how relevant the results are likely to be. We outline characteristic patterns in query result set timelines, and show experimentally that we can automatically classify documents into these classes. We also show that properties of the query result set timeline can help predict the mean average precision of a query. These results show that meta-features associated with a query can be combined with text retrieval techniques to improve our understanding and treatment of text search on documents with timestamps.", "Seasonal events such as Halloween and Christmas repeat every year and initiate several temporal information needs. 
The impact of such events on users is often reflected in search logs in the form of seasonal spikes in the frequency of related queries (e.g. \"halloween costumes\", \"where is santa\"). Many seasonal queries such as \"sigir conference\" mainly target fresh pages (e.g. sigir2011.org) that have less usage data such as clicks and anchor-text compared to older alternatives (e.g. sigir2009.org). Thus, it is important for search engines to correctly identify seasonal queries and make sure that their results are temporally reordered if necessary. In this poster, we focus on detecting seasonal queries using time-series analysis. We demonstrate that the seasonality of a query can be determined with high accuracy according to its historical frequency distribution.", "User behavior on the Web changes over time. For example, the queries that people issue to search engines, and the underlying informational goals behind the queries vary over time. In this paper, we examine how to model and predict this temporal user behavior. We develop a temporal modeling framework adapted from physics and signal processing that can be used to predict time-varying user behavior using smoothing and trends. We also explore other dynamics of Web behaviors, such as the detection of periodicities and surprises. We develop a learning procedure that can be used to construct models of users' activities based on features of current and historical behaviors. The results of experiments indicate that by using our framework to predict user behavior, we can achieve significant improvements in prediction compared to baseline models that weight historical evidence the same for all queries. We also develop a novel learning algorithm that explicitly learns when to apply a given prediction model among a set of such models. 
Our improved temporal modeling of user behavior can be used to enhance query suggestions, crawling policies, and result ranking.", "", "We present several methods for mining knowledge from the query logs of the MSN search engine. Using the query logs, we build a time series for each query word or phrase (e.g., 'Thanksgiving' or 'Christmas gifts') where the elements of the time series are the number of times that a query is issued on a day. All of the methods we describe use sequences of this form and can be applied to time series data generally. Our primary goal is the discovery of semantically similar queries and we do so by identifying queries with similar demand patterns. Utilizing the best Fourier coefficients and the energy of the omitted components, we improve upon the state-of-the-art in time-series similarity matching. The extracted sequence features are then organized in an efficient metric tree index structure. We also demonstrate how to efficiently and accurately discover the important periods in a time-series. Finally we propose a simple but effective method for identification of bursts (long or short-term). Using the burst information extracted from a sequence, we are able to efficiently perform 'query-by-burst' on the database of time-series. We conclude the presentation with the description of a tool that uses the described methods, and serves as an interactive exploratory data discovery tool for the MSN query database." ] }
1210.7403
1510704646
We report a method for super-resolution of range images. Our approach leverages the interpretation of LR image as sparse samples on the HR grid. Based on this interpretation, we demonstrate that our recently reported approach, which reconstructs dense range images from sparse range data by exploiting a registered colour image, can be applied for the task of resolution enhancement of range images. Our method only uses a single colour image in addition to the range observation in the super-resolution process. Using the proposed approach, we demonstrate super-resolution results for large factors (e.g. 4) with good localization accuracy.
As mentioned earlier, the idea of using a colour image has been explored for range super-resolution in various ways. For example, the work in @cite_0 interpolates the range image and, by exploiting the assumption that depth discontinuities coincide with colour edges, improves estimation at discontinuities. Similar improvements are shown in MRF-based energy minimization approaches that also use an HR optical image @cite_8 @cite_3 . However, these approaches are not known to perform well at large super-resolution factors.
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_8" ], "mid": [ "322039674", "2109945199", "1967832436" ], "abstract": [ "3D range sensors, particularly 3D laser range scanners, enjoy a rising popularity and are used nowadays for many different applications. The resolution 3D range sensors provide in the image plane i ...", "This paper describes a highly successful application of MRFs to the problem of generating high-resolution range images. A new generation of range sensors combines the capture of low-resolution range images with the acquisition of registered high-resolution camera images. The MRF in this paper exploits the fact that discontinuities in range and coloring tend to co-align. This enables it to generate high-resolution, low-noise range images by integrating regular camera images into the range data. We show that by using such an MRF, we can substantially improve over existing range imaging technology.", "Applying the machine-learning technique of inference in Markov random fields we build improved 3D models by integrating two different modalities. Visual input from a standard color camera delivers high-resolution texture data but also enables us to enhance the 3D data calculated from the range output of a 3D time-of-flight camera in terms of noise and spatial resolution. The proposed method to increase the visual quality of the 3D data makes this kind of camera a promising device for various upcoming 3DTV applications. With our two-camera setup we believe that the design of low-cost, fast and highly portable 3D scene acquisition systems will be possible in the near future." ] }
1210.7403
1510704646
We report a method for super-resolution of range images. Our approach leverages the interpretation of LR image as sparse samples on the HR grid. Based on this interpretation, we demonstrate that our recently reported approach, which reconstructs dense range images from sparse range data by exploiting a registered colour image, can be applied for the task of resolution enhancement of range images. Our method only uses a single colour image in addition to the range observation in the super-resolution process. Using the proposed approach, we demonstrate super-resolution results for large factors (e.g. 4) with good localization accuracy.
The authors in @cite_10 propose an application of bilateral filtering which exploits constraints from the HR colour image. This work demonstrates the ability to achieve good-quality super-resolution at large factors. However, the bilateral filter, which is based on the HR colour image, has to be defined at each pixel, which makes the approach computationally very demanding. Our approach, which works on image segments rather than pixels and computes local costs over segments, is considerably more efficient.
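A minimal sketch of colour-guided depth upsampling in the bilateral spirit (not the cited implementation; the grid layout and sigma values are illustrative assumptions): each HR pixel takes a weighted average of the sparse LR depth samples, with weights combining spatial distance on the HR grid and similarity in the guiding HR intensity image.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, factor, sigma_s=1.0, sigma_r=0.1):
    """Upsample a LR depth map guided by an HR intensity image (toy sketch)."""
    h, w = color_hr.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[0:depth_lr.shape[0], 0:depth_lr.shape[1]]
    lr_y, lr_x = (ys * factor).ravel(), (xs * factor).ravel()  # LR samples on HR grid
    lr_depth = depth_lr.ravel()
    lr_col = color_hr[lr_y, lr_x]
    for i in range(h):
        for j in range(w):
            d2 = ((lr_y - i) ** 2 + (lr_x - j) ** 2) / float(factor ** 2)
            w_s = np.exp(-d2 / (2 * sigma_s ** 2))                                # spatial term
            w_r = np.exp(-((lr_col - color_hr[i, j]) ** 2) / (2 * sigma_r ** 2))  # range term
            wgt = w_s * w_r
            out[i, j] = (wgt * lr_depth).sum() / wgt.sum()
    return out

# A depth step edge aligned with an intensity edge survives 2x upsampling.
depth_lr = np.array([[1.0, 5.0], [1.0, 5.0]])
color_hr = np.zeros((4, 4)); color_hr[:, 2:] = 1.0
out = joint_bilateral_upsample(depth_lr, color_hr, factor=2)
```

Because the range term suppresses samples from the other side of the intensity edge, the upsampled depth stays sharp at the discontinuity instead of being smeared, which is the behaviour the cited work exploits.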
{ "cite_N": [ "@cite_10" ], "mid": [ "2104599718" ], "abstract": [ "We present a new post-processing step to enhance the resolution of range images. Using one or two registered and potentially high-resolution color images as reference, we iteratively refine the input low-resolution range image, in terms of both its spatial resolution and depth precision. Evaluation using the Middlebury benchmark shows across-the-board improvement for sub-pixel accuracy. We also demonstrated its effectiveness for spatial resolution enhancement up to 100 times with a single reference image." ] }
1210.7403
1510704646
We report a method for super-resolution of range images. Our approach leverages the interpretation of LR image as sparse samples on the HR grid. Based on this interpretation, we demonstrate that our recently reported approach, which reconstructs dense range images from sparse range data by exploiting a registered colour image, can be applied for the task of resolution enhancement of range images. Our method only uses a single colour image in addition to the range observation in the super-resolution process. Using the proposed approach, we demonstrate super-resolution results for large factors (e.g. 4) with good localization accuracy.
More recently, an example-based range super-resolution approach has also been reported @cite_6 ; it also performs well at large resolution factors but requires a separate range dataset from which the training examples are derived. In contrast, our approach, like the methods mentioned above, only requires a registered colour image.
{ "cite_N": [ "@cite_6" ], "mid": [ "1872406745" ], "abstract": [ "We present an algorithm to synthetically increase the resolution of a solitary depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch based approaches for upsampling intensity images continue to improve, this is the first exploration of patching for depth images. We match against the height field of each low resolution input depth patch, and search our database for a list of appropriate high resolution candidate patches. Selecting the right candidate at each location in the depth image is then posed as a Markov random field labeling problem. Our experiments also show how important further depth-specific processing, such as noise removal and correct patch normalization, dramatically improves our results. Perhaps surprisingly, even better results are achieved on a variety of real test scenes by providing our algorithm with only synthetic training depth data." ] }
1210.7156
2170603511
In this paper we consider graph-coloring problems, an important subset of general constraint satisfaction problems that arise in wireless resource allocation. We constructively establish the existence of fully decentralized learning-based algorithms that are able to find a proper coloring even in the presence of strong sensing restrictions, in particular sensing asymmetry of the type encountered when hidden terminals are present. Our main analytic contribution is to establish sufficient conditions on the sensing behavior to ensure that the solvers find satisfying assignments with probability one. These conditions take the form of connectivity requirements on the induced sensing graph. These requirements are mild, and we demonstrate that they are commonly satisfied in wireless allocation tasks. We argue that our results are of considerable practical importance in view of the prevalence of both communication and sensing restrictions in wireless resource allocation problems. The class of algorithms analyzed here requires no message-passing whatsoever between wireless devices, and we show that they continue to perform well even when devices are only able to carry out constrained sensing of the surrounding radio environment.
The graph coloring problem has been the subject of a vast literature, ranging from cellular networks (e.g. @cite_1 ) and wireless LANs (e.g. @cite_1 @cite_6 @cite_2 @cite_8 @cite_4 and references therein) to graph theory (e.g. @cite_7 @cite_15 @cite_13 @cite_9 ). Almost all previous work has been concerned either with centralised schemes or with distributed schemes that employ extensive message-passing. Centralised and message-passing schemes have many inherent advantages. In certain situations, however, they may not be applicable; for example, a network of WLANs may span differing administrative domains.
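The message-free decentralized solvers discussed in this paper can be illustrated with a simplified trial-and-error variant (the actual algorithms analyzed use a learning rule over colour probabilities; this sketch only keeps the "sense conflict, resample" core):

```python
import random

def decentralized_coloring(adj, num_colors, max_rounds=10000, seed=0):
    """Each node only senses whether a neighbour currently uses its colour;
    conflicted nodes resample uniformly at random, satisfied nodes stay put."""
    rng = random.Random(seed)
    n = len(adj)
    color = [rng.randrange(num_colors) for _ in range(n)]
    for _ in range(max_rounds):
        conflicted = [v for v in range(n)
                      if any(color[u] == color[v] for u in adj[v])]
        if not conflicted:
            return color                      # proper coloring found
        for v in conflicted:                  # simultaneous, message-free update
            color[v] = rng.randrange(num_colors)
    return None

# Odd cycle C5 with 3 colours (its chromatic number).
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
result = decentralized_coloring(adj, 3)
```

Since resampling assigns positive probability to every colour, a satisfying assignment is reached with probability one under mild conditions, mirroring (in a much weaker form) the convergence guarantees established in the paper.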
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_15", "@cite_13" ], "mid": [ "1521790357", "2029786597", "2053770984", "2056316084", "1662911496", "2118173950", "2129263279", "", "2212070637" ], "abstract": [ "If you want to get Handbook of Internet Computing pdf eBook copy write by good Handbook of Wireless Networks and Mobile Computing Google Books. Mobile Computing General. Handbook of Algorithms for Wireless Networking and Mobile Computing by Azzedine Boukerche (Editor). Call Number: TK 5103.2. CITS4419 Mobile and Wireless Computing software projects related to wireless networks, (2) write technical reports and documentation for complex computer.", "The connectivity graph of wireless networks, under many models as well as in practice, may contain unidirectional links. The simplifying assumption that such links are useless is often made, mainly because most wireless protocols use per-hop acknowledgments. However, two-way communication between a pair of nodes can be established as soon as there exists paths in both directions between them. Therefore, instead of discarding unidirectional links, one might be interested in studying the strongly connected components of the connectivity graph. In this paper, we look at the percolation phenomenon in some directed random geometric graphs that can be used to model wireless networks. We show that among the nodes that can be reached from the origin, a non-zero fraction can also reach the origin. In other words, the percolation threshold for strong connectivity is equal to the threshold for one-way connectivity.", "The IEEE 802.11 standard specifies both radio and MAC protocol design. We observe that its CSMA protocol helps avoid much of co-channel interference by sharing radio resources in time at the potential expense of degraded network performance. 
Due to the coupling between the physical and MAC layers, conventional frequency allocation methods for typical cellular networks cannot be applied directly to the 802.11 networks. In this paper, by focusing on interactions among access points, we formulate the channel assignment problem for the 802.11 network, considering the traffic load at the MAC layer, and prove that the problem is NP-complete. In light of computational complexity, a heuristic algorithm is proposed and analyzed. The algorithm is then applied to two cellular settings with known optimal assignments for verification. For one of the settings, the proposed technique generates the optimal channel assignment. As for the second case of a large network, although only a suboptimal solution is obtained by the algorithm, it is shown to be excellent. Thus, as the 802.11 networks are widely deployed, the proposed method can serve as a valuable tool for frequency planning of networks with non-uniform coverage and load.", "Abstract A very natural randomized algorithm for distributed vertex coloring of graphs is analyzed. Under the assumption that the random choices of processors are mutually independent, the execution time will be O(log n ) rounds almost always. A small modification of the algorithm is also proposed.", "Even though multiple non-overlapped channels exist in the 2.4 GHz and 5 GHz spectrum, most IEEE 802.11-based multi-hop ad hoc networks today use only a single channel. As a result, these networks rarely can fully exploit the aggregate bandwidth available in the radio spectrum provisioned by the standards. This prevents them from being used as an ISP's wireless last-mile access network or as a wireless enterprise backbone network. In this paper, we propose a multi-channel wireless mesh network (WMN) architecture (called Hyacinth) that equips each mesh network node with multiple 802.11 network interface cards (NICs). 
The central design issues of this multi-channel WMN architecture are channel assignment and routing. We show that intelligent channel assignment is critical to Hyacinth's performance, present distributed algorithms that utilize only local traffic load information to dynamically assign channels and to route packets, and compare their performance against a centralized algorithm that performs the same functions. Through an extensive simulation study, we show that even with just 2 NICs on each node, it is possible to improve the network throughput by a factor of 6 to 7 when compared with the conventional single-channel ad hoc network architecture. We also describe and evaluate a 9-node Hyacinth prototype that Is built using commodity PCs each equipped with two 802.11a NICs.", "Wireless 802.11 hotspots have grown in an uncoordinated fashion with highly variable deployment densities. Such uncoordinated deployments, coupled with the difficulty of implementing coordination protocols, has often led to conflicting configurations (e.g., in choice of transmission power and channel of operation) among the corresponding Access Points (APs). Overall, such conflicts cause both unpredictable network performance and unfairness among clients of neighboring hotspots. In this paper, we focus on the fairness problem for uncoordinated deployments. We study this problem from the channel assignment perspective. Our solution is based on the notion of channel-hopping, and meets all the important design considerations for control methods in uncoordinated deployments - distributed in nature, minimal to zero coordination among APs belonging to different hotspots, simple to implement, and interoperable with existing standards. In particular, we propose a specific algorithm called MAXchop, which works efficiently when using only non-overlapping wireless channels, but is particularly effective in exploiting partially-overlapped channels that have been proposed in recent literature. 
We also evaluate how our channel assignment approach complements previously proposed carrier sensing techniques in providing further performance improvements. Through extensive simulations on real hotspot topologies and evaluation of a full implementation of this technique, we demonstrate the efficacy of these techniques for not only fairness, but also the aggregate throughput, metrics.We believe that this is the first work that brings into focus the fairness properties of channel hopping techniques and we hope that the insights from this research will be applied to other domains where a fair division of a system's resources is an important consideration.", "We propose an efficient client-based approach for channel management (channel assignment and load balancing) in 802.11-based WLANs that lead to better usage of the wireless spectrum. This approach is based on a “conflict set coloring” formulation that jointly performs load balancing along with channel assignment. Such a formulation has a number of advantages. First, it explicitly captures interference effects at clients. Next, it intrinsically exposes opportunities for better channel re-use. Finally, algorithms based on this formulation do not depend on specific physical RF models and hence can be applied efficiently to a wide-range of in-building as well as outdoor scenarios. We have performed extensive packet-level simulations and measurements on a deployed wireless testbed of 70 APs to validate the performance of our proposed algorithms. We show that in addition to single network scenarios, the conflict set coloring formulation is well suited for channel assignment where multiple wireless networks share and contend for spectrum in the same physical space. 
Our results over a wide range of both simulated topologies and in-building testbed experiments indicate that our approach improves application level performance at the clients by up to three times (and at least 50%) in comparison to current best-known techniques.", "", "We propose two new self-stabilizing distributed algorithms for proper Δ+1 (Δ is the maximum degree of a node in the graph) coloring of arbitrary system graphs. Both algorithms are capable of working with multiple types of demons (schedulers) as is the most recent algorithm in [1]. The first algorithm converges in O(m) moves while the second converges in at most n moves (n is the number of nodes and m is the number of edges in the graph) as opposed to the O(Δ × n) moves required by the algorithm [1]. The second improvement is that neither of the proposed algorithms requires each node to have knowledge of Δ, as is required in [1]. Further, the coloring produced by our first algorithm provides an interesting special case of coloring, e.g., Grundy Coloring [2]." ] }
1210.6382
2204542369
Mobile multi-robot teams deployed for monitoring or search-and-rescue missions in urban disaster areas can greatly improve the quality of vital data collected on-site. Analysis of such data can identify hazards and save lives. Unfortunately, such real deployments at scale are cost prohibitive and robot failures lead to data loss. Moreover, scaled-down deployments do not capture significant levels of interaction and communication complexity. To tackle this problem, we propose novel mobility and failure generation frameworks that allow realistic simulations of mobile robot networks for large scale disaster scenarios. Furthermore, since data replication techniques can improve the survivability of data collected during the operation, we propose an adaptive, scalable data replication technique that achieves high data survivability with low overhead. Our technique considers the anticipated robot failures and robot heterogeneity to decide how aggressively to replicate data. In addition, it considers survivability priorities, with some data requiring more effort to be saved than others. Using our novel simulation generation frameworks, we compare our adaptive technique with flooding and broadcast-based replication techniques and show that for failure rates of up to 60% it ensures better data survivability with lower communication costs.
Studies that have considered node heterogeneity assume that some nodes are more stable and more resourceful than others. CLEAR @cite_6 deploys a super-peer architecture that exploits relatively stable peers, having maximum remaining battery power and processing capacity among their regional neighbors, to determine a near-optimal reallocation period based on mobile host schedules. In @cite_35 , resourceful nodes serve as cores to enable core-aided routing. Replication schemes examined include ``copy-to-core'', where both regular nodes and core nodes are carriers of messages to the destination, and ``dump-to-core'', where the regular nodes delete the messages, leaving the cores to deal with the delivery. Such nodes, due to their extended resources, acquire a role similar to that of the resourceful nodes presented in this work. However, unlike in our work, no differentiation was made with respect to their mobility or the failures they exhibit, and how these affect the replication effort.
{ "cite_N": [ "@cite_35", "@cite_6" ], "mid": [ "1974702288", "1504302205" ], "abstract": [ "Opportunistic networks (ONs) are a newly emerging type of delay tolerant network ( DTN) systems that opportunistically exploit unplanned contacts among nodes to share information. As with all DTN environments ONs experience frequent and large delays, and an end-to-end path from the source to destination may only exist for a brief and unpredictable period of time. Such network conditions present unique challenges to message routing. In this paper, we present the design and performance analysis of a novel core-based routing protocol for ON routing. Under the assumption that messages will have a delivery time constraint, we then provide a set of analytical results for rapid modeling and performance evaluation for the basic performance metrics of message delay, message delivery ratio, and buffer occupancy. We have implemented our protocol in ns-2, and our simulation results show that our protocol is quite effective and our analysis is accurate.", "We propose CLEAR (Context and Location-based Efficient Allocation of Replicas), a dynamic replica allocation scheme for improving data availability in mobile ad-hoc peer-to-peer (M-P2P) networks. To manage replica allocation efficiently, CLEAR exploits user mobility patterns and deploys a super-peer architecture, which avoids both broadcast storm during replica allocation as well as broadcast-based querying. CLEAR considers different levels of replica consistency and load as replica allocation criteria. Our performance study indicates CLEAR's overall effectiveness in improving data availability in M-P2P networks." ] }
1210.6382
2204542369
Mobile multi-robot teams deployed for monitoring or search-and-rescue missions in urban disaster areas can greatly improve the quality of vital data collected on-site. Analysis of such data can identify hazards and save lives. Unfortunately, such real deployments at scale are cost prohibitive and robot failures lead to data loss. Moreover, scaled-down deployments do not capture significant levels of interaction and communication complexity. To tackle this problem, we propose novel mobility and failure generation frameworks that allow realistic simulations of mobile robot networks for large scale disaster scenarios. Furthermore, since data replication techniques can improve the survivability of data collected during the operation, we propose an adaptive, scalable data replication technique that achieves high data survivability with low overhead. Our technique considers the anticipated robot failures and robot heterogeneity to decide how aggressively to replicate data. In addition, it considers survivability priorities, with some data requiring more effort to be saved than others. Using our novel simulation generation frameworks, we compare our adaptive technique with flooding and broadcast-based replication techniques and show that for failure rates of up to 60% it ensures better data survivability with lower communication costs.
Heterogeneity in data has been used in prioritized epidemic routing (PREP) @cite_44 , where bundles of data are prioritized based on their cost to destination, their source, and their expiration time. Costs are derived from per-link "average availability" information that is disseminated in an epidemic manner. PREP maintains a gradient of replication density that decreases with increasing distance from the destination. In our work, we also assume that data are replicated according to different requirements or priorities. However, these requirements reflect how important it is for the data to survive the mission, as judged by the human coordinators, and not their producer, destination, or associated lifetime. Our adaptive replication technique increases the survivability acquired by the data as they are replicated across new robots, until their accumulated survivability meets the original requirements set by the coordinators.
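The accumulated-survivability rule above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes independent robot failures, so the probability that at least one replica survives is 1 minus the product of the individual loss probabilities, and replicas are added greedily until that probability meets the coordinator-set requirement. All names and the independence assumption are illustrative.

```python
def accumulated_survivability(survival_probs):
    """Probability that at least one replica survives, assuming
    independent robot failures: 1 - prod(1 - p_i)."""
    loss = 1.0
    for p in survival_probs:
        loss *= (1.0 - p)
    return 1.0 - loss

def replicate_until_met(requirement, candidate_robots):
    """Greedily place replicas on the most reliable remaining robots
    until the accumulated survivability meets the requirement."""
    chosen = []
    for p in sorted(candidate_robots, reverse=True):
        if accumulated_survivability(chosen) >= requirement:
            break
        chosen.append(p)
    return chosen

# Example: a 0.9 requirement needs three robots with survival
# probabilities 0.7, 0.6, 0.5 (accumulated 0.94).
print(replicate_until_met(0.9, [0.5, 0.6, 0.7]))
```

The greedy choice mirrors the adaptive behavior described above: items with higher survivability requirements trigger more aggressive replication, while low-priority items stop early and consume fewer resources.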
{ "cite_N": [ "@cite_44" ], "mid": [ "2071738571" ], "abstract": [ "We describe PRioritized EPidemic (PREP) for routing in opportunistic networks. PREP prioritizes bundles based on costs to destination, source, and expiry time. Costs are derived from per-link \"average availability\" information that is disseminated in an epidemic manner. PREP maintains a gradient of replication density that decreases with increasing distance from the destination. Simulation results show that PREP outperforms AODV and Epidemic Routing by a factor of about 4 and 1.4 respectively, with the gap widening with decreasing density and decreasing storage. We expect PREP to be of greater value than other proposed solutions in highly disconnected and mobile networks where no schedule information or repeatable patterns exist." ] }