| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1301.6179
|
1485489889
|
This paper presents an algorithm to automatically design two-level fat-tree networks, such as those widely used in large-scale data centres and cluster supercomputers. The two levels may each use a different type of switch from a design database to achieve an optimal network structure. Links between layers can run in bundles to simplify cabling. Several sample network designs are examined and their technical and economic characteristics are discussed. The characteristic feature of our approach is that real-life equipment prices and values of technical characteristics are used. This allows selecting an optimal combination of hardware to build the network (including semi-populated configurations of modular switches) and accurately estimating the cost of this network. We also show how technical characteristics of the network can be derived from its per-port metrics and suggest heuristics for equipment placement. The algorithm is useful as part of a bigger design procedure that selects the optimal hardware of a cluster supercomputer as a whole. The article therefore focuses on the use of fat-trees for high-performance computing, although the results are valid for any type of data centre.
|
Fat-trees were initially introduced by C. Leiserson @cite_14 . The mathematical formalism to describe their structure, ``k-ary n-trees'', was proposed by Petrini and Vanneschi @cite_3 . Zahavi @cite_7 further introduced two other formalisms for describing fat-trees: one where links between switches can run in parallel, and one where bandwidth between layers stays constant to guarantee contention-free operation.
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_3"
],
"mid": [
"2177671275",
"2014455962",
"2166214860"
],
"abstract": [
"",
"As the size of High Performance Computing clusters grows, so does the probability of interconnect hot spots that degrade the latency and effective bandwidth the network provides. This paper presents a solution to this scalability problem for real life constant bisectional-bandwidth fat-tree topologies. It is shown that maximal bandwidth and cut-through latency can be achieved for MPI global collective traffic. To form such a congestion-free configuration, MPI programs should utilize collective communication, MPI-node-order should be topology aware, and the packet routing should match the MPI communication patterns. First, we show that MPI collectives can be classified into unidirectional and bidirectional shifts. Using this property, we propose a scheme for congestion-free routing of the global collectives in fully and partially populated fat trees running a single job. The no-contention result is then obtained for multiple jobs running on the same fat-tree by applying some job size and placement restrictions. Simulation results of the proposed routing, MPI-node-order and communication patterns show no contention, which provides a 40% throughput improvement over previously published results for all-to-all collectives.",
"The past few years have seen a rise in popularity of massively parallel architectures that use fat-trees as their interconnection networks. In this paper we study the communication performance of a parametric family of fat-trees, the k-ary n-trees, built with constant arity switches interconnected in a regular topology. Through simulation on a 4-ary 4-tree with 256 nodes, we analyze some variants of an adaptive algorithm that utilize wormhole routing with one, two and four virtual channels. The experimental results show that the uniform, bit reversal and transpose traffic patterns are very sensitive to the flow control strategy. In all these cases, the saturation points are between 35-40% of the network capacity with one virtual channel, 55-60% with two virtual channels and around 75% with four virtual channels. The complement traffic, a representative of the class of the congestion-free communication patterns, reaches an optimal performance, with a saturation point at 97% of the capacity for all flow control strategies."
]
}
|
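The two-level fat-tree designs discussed in the row above can be sized with simple port arithmetic. A minimal sketch, assuming hypothetical switch radices and a half-down/half-up port split; the paper's actual algorithm additionally draws switches from a priced design database and handles semi-populated modular chassis, which this omits:

```python
def two_level_fat_tree(n_nodes, r_leaf, r_spine):
    """Size a full-bisection two-level fat-tree from switch radices.

    r_leaf and r_spine are assumed port counts, not values taken
    from the paper's design database.
    """
    nodes_per_leaf = r_leaf // 2              # half the leaf ports face nodes
    n_leaves = -(-n_nodes // nodes_per_leaf)  # ceiling division
    uplinks = n_leaves * (r_leaf // 2)        # full bisection: up == down
    n_spines = -(-uplinks // r_spine)         # spines needed to terminate uplinks
    return n_leaves, n_spines, uplinks
```

With 36-port switches this reproduces the familiar 648-node configuration: 36 leaves and 18 spines.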
1301.6179
|
1485489889
|
This paper presents an algorithm to automatically design two-level fat-tree networks, such as those widely used in large-scale data centres and cluster supercomputers. The two levels may each use a different type of switch from a design database to achieve an optimal network structure. Links between layers can run in bundles to simplify cabling. Several sample network designs are examined and their technical and economic characteristics are discussed. The characteristic feature of our approach is that real-life equipment prices and values of technical characteristics are used. This allows selecting an optimal combination of hardware to build the network (including semi-populated configurations of modular switches) and accurately estimating the cost of this network. We also show how technical characteristics of the network can be derived from its per-port metrics and suggest heuristics for equipment placement. The algorithm is useful as part of a bigger design procedure that selects the optimal hardware of a cluster supercomputer as a whole. The article therefore focuses on the use of fat-trees for high-performance computing, although the results are valid for any type of data centre.
|
Gupta and Dally @cite_0 suggested a tool to optimize network topology within the broad class of hybrid Clos-torus networks. Cost, packaging and performance constraints can be specified. This tool is most valuable for building custom interconnection solutions when arbitrary topologies are feasible, in contrast to the case of commodity switches, where most parameters are fixed but optimization can take actual prices into account.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1983242583"
],
"abstract": [
"This paper describes an automatic optimization tool that searches a family of network topologies to select the topology that best achieves a specified set of design goals while satisfying specified packaging constraints. Our tool uses a model of signaling technology that relates bandwidth, cost and distance of links. This model captures the distance-dependent bandwidth of modern high-speed electrical links and the cost differential between electrical and optical links. Using our optimization tool, we explore the design space of hybrid Clos-torus (C-T) networks. For a representative set of packaging constraints we determine the optimal hybrid C-T topology to minimize cost and the optimal C-T topology to minimize latency for various packet lengths. We then use the tool to measure the sensitivity of the optimal topology to several important packaging constraints such as pin count and critical distance."
]
}
|
1301.6179
|
1485489889
|
This paper presents an algorithm to automatically design two-level fat-tree networks, such as those widely used in large-scale data centres and cluster supercomputers. The two levels may each use a different type of switch from a design database to achieve an optimal network structure. Links between layers can run in bundles to simplify cabling. Several sample network designs are examined and their technical and economic characteristics are discussed. The characteristic feature of our approach is that real-life equipment prices and values of technical characteristics are used. This allows selecting an optimal combination of hardware to build the network (including semi-populated configurations of modular switches) and accurately estimating the cost of this network. We also show how technical characteristics of the network can be derived from its per-port metrics and suggest heuristics for equipment placement. The algorithm is useful as part of a bigger design procedure that selects the optimal hardware of a cluster supercomputer as a whole. The article therefore focuses on the use of fat-trees for high-performance computing, although the results are valid for any type of data centre.
|
Al-Fares @cite_5 proposed to use fat-trees for generic data centre networks, using commodity hardware. Farrington @cite_15 followed up, suggesting to build a 3,456-port data centre switch with commodity chips (``merchant silicon'') internally connected in a fat-tree topology. They also advise using optical fibre cables with as many as 72 or even 120 separate fibres (strands) to minimize the volume and weight of cable bundles for inter-switch links.
|
{
"cite_N": [
"@cite_5",
"@cite_15"
],
"mid": [
"2130531694",
"2114497163"
],
"abstract": [
"Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches and routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.",
"Today, data center networks that scale to tens of thousands of ports require the use of highly-specialized ASICs, with correspondingly high development costs. Simultaneously, these networks also face significant performance and management limitations. Just as commodity processors and disks now form the basis of computing and storage in the data center, we argue that there is an opportunity to leverage emerging high-speed commodity merchant switch silicon as the basis for a scalable, cost-effective, modular, and more manageable data center network fabric. This paper describes how to save cost and power by repackaging an entire data center network as a distributed multi-stage switch using a fat-tree topology and merchant silicon instead of proprietary ASICs. Compared to a fat tree of discrete packet switches, a 3,456-port 10 Gigabit Ethernet realization of our architecture costs 52% less, consumes 31% less power, occupies 84% less space, and reduces the number of long, cumbersome cables from 6,912 down to 96, relative to existing approaches."
]
}
|
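The commodity fat-tree of @cite_5 has closed-form size formulas, which make the 3,456-port scale above easy to sanity-check. A sketch using the standard k-ary fat-tree arithmetic (not taken verbatim from either abstract):

```python
def k_ary_fat_tree(k):
    """Host and switch counts for a three-tier fat-tree built entirely
    from k-port commodity switches (the Al-Fares-style construction)."""
    hosts = k ** 3 // 4          # k pods, each with (k/2)^2 hosts
    edge = k * (k // 2)          # k pods, k/2 edge switches per pod
    aggregation = k * (k // 2)   # likewise for the aggregation tier
    core = (k // 2) ** 2
    return hosts, edge + aggregation + core
```

For k = 4 this gives the textbook 16-host, 20-switch example; k = 48 yields 27,648 hosts from 2,880 switches.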
1301.6179
|
1485489889
|
This paper presents an algorithm to automatically design two-level fat-tree networks, such as those widely used in large-scale data centres and cluster supercomputers. The two levels may each use a different type of switch from a design database to achieve an optimal network structure. Links between layers can run in bundles to simplify cabling. Several sample network designs are examined and their technical and economic characteristics are discussed. The characteristic feature of our approach is that real-life equipment prices and values of technical characteristics are used. This allows selecting an optimal combination of hardware to build the network (including semi-populated configurations of modular switches) and accurately estimating the cost of this network. We also show how technical characteristics of the network can be derived from its per-port metrics and suggest heuristics for equipment placement. The algorithm is useful as part of a bigger design procedure that selects the optimal hardware of a cluster supercomputer as a whole. The article therefore focuses on the use of fat-trees for high-performance computing, although the results are valid for any type of data centre.
|
Navaridas @cite_10 introduced such a reduced topology, the ``thin-tree'', and analysed its behaviour using simulation and several synthetic workloads. Overall, for the mix of workloads, different configurations of the reduced topology were found beneficial in terms of ``performance/cost'' ratio compared to traditional fat-trees, especially when collective operations were only lightly used. They add, however, that in the absence of a topology-aware scheduler, neighbouring processes may be assigned to physically distant processing nodes, requiring full bandwidth at upper levels and thus rendering reduced topologies useless.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2163392581"
],
"abstract": [
"Interconnection networks based on the k-ary n-tree topology are widely used in high-performance parallel computers. However, this topology is expensive and complex to build. In this paper we evaluate an alternative tree-like topology that is cheaper in terms of cost and complexity because it uses fewer switches and links. This alternative topology leaves unused upward ports on switches, which can be rearranged to be used as downward ports. The increase of locality might be efficiently exploited by applications. We test the performance of these thin-trees, and compare it with that of regular trees. Evaluation is carried out using a collection of synthetic traffic patterns that emulate the behavior of scientific applications and functions within message passing libraries, not only in terms of sources and destinations of messages, but also considering the causal relationships among them. We also propose a methodology to perform cost and performance analysis of different networks. Our main conclusion is that, for the set of studied workloads, the performance drop in thin-trees is less noticeable than the cost savings."
]
}
|
1301.5844
|
2119471446
|
An extensive literature in economics and social science addresses contests, in which players compete to outperform each other on some measurable criterion, often referred to as a player's score, or output. Players incur costs that are an increasing function of score, but receive prizes for obtaining higher score than their competitors. In this paper we study finite games that are discretized contests, and the problems of computing exact and approximate Nash equilibria. Our motivation is the worst-case hardness of Nash equilibrium computation, and the resulting interest in important classes of games that admit polynomial-time algorithms. For games that have a tie-breaking rule for players' scores, we present a polynomial-time algorithm for computing an exact equilibrium in the 2-player case, and for multiple players, a characterization of Nash equilibria that shows an interesting parallel between these games and unrestricted 2-player games in normal form. When ties are allowed, via a reduction from these games to a subclass of anonymous games, we give approximation schemes for two special cases: constant-sized set of strategies, and constant number of players.
|
Anonymous games represent a class of finite games that relate to the discretized contests considered here. Anonymous games are games where a player's payoff depends on his own action and on the distribution of actions taken by the other players, but not on the identities of the players who chose each action. Anonymous games admit polynomial-time approximation schemes (PTAS's) @cite_3 @cite_4 but may be PPAD-complete to solve exactly. The algorithms for anonymous games can be applied to an interesting subclass of the discretized contests that we study here. In particular they apply to a special case in which all players have the same (finite) set of score levels available to them, with prizes being shared in the event of ties (which is a standard assumption in much of the literature).
|
{
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2122653474",
"1855561663"
],
"abstract": [
"We show that there is a polynomial-time approximation scheme for computing Nash equilibria in anonymous games with any fixed number of strategies (a very broad and important class of games), extending the two-strategy result of Daskalakis and Papadimitriou 2007. The approximation guarantee follows from a probabilistic result of more general interest: The distribution of the sum of n independent unit vectors with values ranging over e_1, ..., e_k, where e_i is the unit vector along dimension i of the k-dimensional Euclidean space, can be approximated by the distribution of the sum of another set of independent unit vectors whose probabilities of obtaining each value are multiples of 1/z for some integer z, and so that the variational distance of the two distributions is at most ε, where ε is bounded by an inverse polynomial in z and a function of k, but with no dependence on n. Our probabilistic result specifies the construction of a surprisingly sparse ε-cover (under the total variation distance) of the set of distributions of sums of independent unit vectors, which is of interest in its own right.",
"We present a novel polynomial time approximation scheme for two-strategy anonymous games, in which the players' utility functions, although potentially different, do not differentiate among the identities of the other players. Our algorithm computes an ε-approximate Nash equilibrium of an n-player 2-strategy anonymous game in time @math , which significantly improves upon the running time @math required by the algorithm of Daskalakis & Papadimitriou, 2007. The improved running time is based on a new structural understanding of approximate Nash equilibria: We show that, for any ε, there exists an ε-approximate Nash equilibrium in which either only O(1/ε^3) players randomize, or all players who randomize use the same mixed strategy. To show this result we employ tools from the literature on Stein's Method."
]
}
|
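The defining property of the anonymous games in this row — payoffs depend only on a player's own action and on how many others chose each action — is easy to operationalize. A small sketch with hypothetical congestion-style payoffs (the payoff function is invented for illustration):

```python
def is_anonymous_nash(payoff, profile):
    """Check whether a pure profile is a Nash equilibrium of a
    2-strategy anonymous game. payoff(a, k) depends only on the
    player's own action a and the number k of *other* players
    choosing action 1 -- never on player identities."""
    ones = sum(profile)
    for a in profile:
        k = ones - a                      # how many others play 1
        if payoff(1 - a, k) > payoff(a, k):
            return False                  # profitable unilateral deviation
    return True

# Hypothetical payoffs: action 1 is worth less the more others pick it;
# action 0 always pays 1.
pay = lambda a, k: 3 - k if a == 1 else 1
```

With these payoffs, two players choosing 1 and one choosing 0 is stable, while everyone choosing 0 is not (any player would deviate to 1).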
1301.5844
|
2119471446
|
An extensive literature in economics and social science addresses contests, in which players compete to outperform each other on some measurable criterion, often referred to as a player's score, or output. Players incur costs that are an increasing function of score, but receive prizes for obtaining higher score than their competitors. In this paper we study finite games that are discretized contests, and the problems of computing exact and approximate Nash equilibria. Our motivation is the worst-case hardness of Nash equilibrium computation, and the resulting interest in important classes of games that admit polynomial-time algorithms. For games that have a tie-breaking rule for players' scores, we present a polynomial-time algorithm for computing an exact equilibrium in the 2-player case, and for multiple players, a characterization of Nash equilibria that shows an interesting parallel between these games and unrestricted 2-player games in normal form. When ties are allowed, via a reduction from these games to a subclass of anonymous games, we give approximation schemes for two special cases: constant-sized set of strategies, and constant number of players.
|
We next mention some of the more classical literature on continuous contests. An influential line of work is the literature on rent-seeking problems, initiated by Tullock @cite_34 . These are problems in which players compete to receive favorable treatment from a regulator. Another large body of literature initiated by Lazear and Rosen @cite_27 has focused on contests in labor markets. Lazear and Rosen study the merits of rank-based prizes (as an alternative to paying a piece rate) as a means of incentivizing effort by workers in organizations. These models incorporate a random noise process that affects the selection of the winner. In a Tullock contest, the probability of winning is the amount of effort exerted by a player, divided by the total effort. Besides being a model of artificial competition, games of this kind are a model for competition for status within society @cite_22 .
|
{
"cite_N": [
"@cite_27",
"@cite_34",
"@cite_22"
],
"mid": [
"2050904362",
"170486074",
"1488667070"
],
"abstract": [
"This paper analyzes compensation schemes which pay according to an individual's ordinal rank in an organization rather than his output level. When workers are risk neutral, it is shown that wages based upon rank induce the same efficient allocation of resources as an incentive reward scheme based on individual output levels. Under some circumstances, risk-averse workers actually prefer to be paid on the basis of rank. In addition, if workers are heterogeneous in ability, low-quality workers attempt to contaminate high-quality firms, resulting in adverse selection. However, if ability is known in advance, a competitive handicapping structure exists which allows all workers to compete efficiently in the same organization.",
"Most of the papers in this volume implicitly or explicitly assume that rent-seeking activity discounts the entire rent to be derived. Unfortunately, this is not necessarily true; the reality is much more complicated. The problem here is that the average cost and marginal cost are not necessarily identical.",
"If individuals care about their status, defined as their rank in the distribution of consumption of one \"positional\" good, then the consumer's problem is strategic as her utility depends on the consumption choices of others. In the symmetric Nash equilibrium, each individual spends an inefficiently high amount on the status good. Using techniques from auction theory, we analyze the effects of exogenous changes in the distribution of income. In a richer society, almost all individuals spend more on conspicuous consumption, and individual utility is lower at each income level. In a more equal society, the poor are worse off."
]
}
|
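The Tullock contest described in the row above — win probability equal to own effort divided by total effort — has a well-known symmetric equilibrium, x* = V(n-1)/n². A sketch verifying it numerically (the closed form is the standard textbook result for the lottery contest success function, not a claim from this dataset):

```python
def tullock_payoff(x_i, others_total, prize):
    """Expected payoff under the lottery contest success function:
    win probability x_i / (x_i + others' total effort), minus the
    (linear) cost of one's own effort."""
    total = x_i + others_total
    win_prob = x_i / total if total > 0 else 0.0
    return win_prob * prize - x_i

def symmetric_equilibrium_effort(n, prize):
    # Textbook closed form for the symmetric n-player Tullock contest.
    return prize * (n - 1) / n ** 2
```

For n = 2 and a prize of 100, each player exerts effort 25; no unilateral deviation improves on the resulting payoff of 25.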
1301.5844
|
2119471446
|
An extensive literature in economics and social science addresses contests, in which players compete to outperform each other on some measurable criterion, often referred to as a player's score, or output. Players incur costs that are an increasing function of score, but receive prizes for obtaining higher score than their competitors. In this paper we study finite games that are discretized contests, and the problems of computing exact and approximate Nash equilibria. Our motivation is the worst-case hardness of Nash equilibrium computation, and the resulting interest in important classes of games that admit polynomial-time algorithms. For games that have a tie-breaking rule for players' scores, we present a polynomial-time algorithm for computing an exact equilibrium in the 2-player case, and for multiple players, a characterization of Nash equilibria that shows an interesting parallel between these games and unrestricted 2-player games in normal form. When ties are allowed, via a reduction from these games to a subclass of anonymous games, we give approximation schemes for two special cases: constant-sized set of strategies, and constant number of players.
|
Szymanski @cite_33 considers the application of the theory of contests to the design of sporting competitions. The allocation of prizes in dynamic sport contests, in which players determine their efforts at different stages of the game (e.g., at the beginning of each half in a soccer game), is considered in @cite_28 . There, the focus is on comparing rank-based versus score-based prizes when spectators care about contestants' efforts or about the ``suspense'' of the game (see also @cite_6 @cite_28 ). @cite_19 study a version of the contest design problem where the prize fund may be chosen by the designer, who wants to maximize the effort elicited, with prizes representing the price paid for effort. A related line of research @cite_7 @cite_5 has addressed, from a game-theoretic perspective, the impact of the point scoring system on offensive versus defensive play in the UK Premier League; our concern here is slightly different, being focused on highly competitive versus weakly competitive play.
|
{
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_6",
"@cite_19",
"@cite_5"
],
"mid": [
"2136884928",
"2121828540",
"2141855083",
"2043634452",
"2040946682",
""
],
"abstract": [
"This paper reviews the literature on commercial sport through the lens of the economic theory of contests and tournaments. It seeks to draw together research on incentives in individualistic sports such as golf and footraces with the research on uncertainty of outcome and competitive balance in team sports such as baseball and soccer. The contest framework is used to analyze issues such as the optimal distribution of prizes, the impact of revenue sharing, salary caps, draft rules, and a variety of other restraints commonly employed in sports leagues. The paper draws heavily on a comparative analysis of contest organization, in particular between North America and Europe.",
"This article addresses effects caused by the transition from a 2-1-0 to a 3-1-0 award system in soccer. The first part of the article discusses consequences of the transition on offensive versus defensive play. This part may be seen as a valuable supplement to work by Brocas and Carrillo (2004), as the choice of a different game theoretic framework provides increased insight into the concept of offensive versus defensive play in soccer. The second and main part of the paper addresses additional effects induced by the award system transition, especially effects on competitive imbalance. It is shown by simple game theory that under a relatively general set of team descriptions, such a transition may affect competitive balance adversely. In the final sections of the paper some empirical examples strengthen the hypothesis on adverse competitive effects.",
"In a dynamic model of sports competition, if spectators care only about contestants' efforts, incentive schemes depending linearly on the final score difference dominate rank order schemes based only on who wins. If spectators also care about suspense, defined as valuing more contestants' efforts when the game is closer, rank order schemes can dominate linear score difference schemes, and this will be the case when the demand for suspense is sufficiently high. Under additional assumptions, we show that the optimal rank order scheme dominates a broad class of incentive schemes. Copyright © The Author(s). Journal compilation © Royal Economic Society 2009.",
"We study a contest with multiple (not necessarily equal) prizes. Contestants have private information about an ability parameter that affects their costs of bidding. The contestant with the highest bid wins the first prize, the contestant with the second-highest bid wins the second prize, and so on until all the prizes are allocated. All contestants incur their respective costs of bidding. The contest's designer maximizes the expected sum of bids. Our main results are: 1) We display bidding equilibria for any number of contestants having linear, convex or concave cost functions, and for any distribution of abilities. 2) If the cost functions are linear or concave, then, no matter what the distribution of abilities is, it is optimal for the designer to allocate the entire prize sum to a single ``first'' prize. 3) We give necessary and sufficient conditions ensuring that several prizes are optimal if contestants have a convex cost function.",
"We study all-pay contests under incomplete information where the reward is a function of the contestant's type and effort. We analyse the optimal reward for the designer when the reward is either multiplicatively separable or additively separable in effort and type. In the multiplicatively separable environment the optimal reward is always positive while in the additively separable environment it may also be negative. In both environments, depending on the designer's utility, the optimal reward may either increase or decrease in the contestants' effort. Finally, in both environments, the designer's payoff depends only upon the expected value of the effort-dependent rewards and not the number of rewards.",
""
]
}
|
1301.5887
|
1968414620
|
Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge-sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges...
|
Enumeration algorithms, however, can be expensive due to the extremely large number of triangles, even for graphs of moderate size (millions of vertices). Much theoretical work has been done on characterizing the hardness of exhaustive triangle enumeration and finding weighted triangles @cite_2 @cite_43 . Eigenvalue/trace-based methods adopted by Tsourakakis @cite_42 and Avron @cite_47 compute estimates of the total and per-degree number of triangles. However, the compute-intensive nature of eigenvalue computations (even for just a few of the largest magnitude) makes these methods intractable on large graphs.
|
{
"cite_N": [
"@cite_43",
"@cite_47",
"@cite_42",
"@cite_2"
],
"mid": [
"2084803275",
"2342372809",
"2120595041",
"2098207812"
],
"abstract": [
"We say an algorithm on n by n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly subcubic if it runs in O(n^{3-δ} · poly(log M)) time for some δ > 0. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do: - The all-pairs shortest paths problem (APSP). - Detecting if a weighted graph has a triangle of negative total edge weight. - Listing up to n^{2.99} negative triangles in an edge-weighted graph. - Finding a minimum weight cycle in a graph of non-negative edge weights. - The replacement paths problem in an edge-weighted digraph. - Finding the second shortest simple path between two nodes in an edge-weighted digraph. - Checking whether a given matrix defines a metric. - Verifying the correctness of a matrix product over the (min, +)-semiring. Therefore, if APSP cannot be solved in n^{3-δ} time for any δ > 0, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.",
"Triangle counting is an important problem in graph mining with several real-world applications. Interesting metrics, such as the clustering coefficient and the transitivity ratio, involve computing the number of triangles. Furthermore, several interesting graph mining applications rely on computing the number of triangles in a large-scale graph. However, exact triangle counting is expensive and memory consuming, and current approximation algorithms are unsatisfactory and not practical for very large-scale graphs. In this paper we present a new highly-parallel randomized algorithm for approximating the number of triangles in an undirected graph. Our algorithm uses a well-known relation between the number of triangles and the trace of the cubed adjacency matrix. A Monte-Carlo simulation is used to estimate this quantity. Each sample requires O(|E|) time and O(ε^{-2} log(1/δ) ρ(G)^2) samples are required to guarantee an (ε, δ)-approximation, where ρ(G) is a measure of the triangle sparsity of G (ρ(G) is not necessarily small). Our algorithm requires only O(|V|) space in order to work efficiently. We present experiments that demonstrate that in practice usually only O(log^2 |V|) samples are required to get good approximations for graphs frequently encountered in data-mining tasks, and that our algorithm is competitive with state-of-the-art approximate triangle counting methods both in terms of accuracy and in terms of running-time. The use of Monte-Carlo simulation supports parallelization well: our algorithm is embarrassingly parallel with a critical path of only O(|E|), achievable on as few as O(log^2 |V|) processors.",
"How can we quickly find the number of triangles in a large graph, without actually counting them? Triangles are important for real world social networks, lying at the heart of the clustering coefficient and of the transitivity ratio. However, straight-forward and even approximate counting algorithms can be slow, trying to execute or approximate the equivalent of a 3-way database join. In this paper, we provide two algorithms, EigenTriangle for counting the total number of triangles in a graph, and the EigenTriangleLocal algorithm that gives the count of triangles that contain a desired node. Additional contributions include the following: (a) We show that both algorithms achieve excellent accuracy, with up to ∼1000x faster execution time, on several, real graphs and (b) we discover two new power laws (degree-triangle and triangle-participation laws) with surprising properties.",
"We consider a number of dynamic problems with no known poly-logarithmic upper bounds, and show that they require n^Ω(1) time per operation, unless 3SUM has strongly subquadratic algorithms. Our result is modular: (1) We describe a carefully-chosen dynamic version of set disjointness (the \"multiphase problem\"), and conjecture that it requires n^Ω(1) time per operation. All our lower bounds follow by easy reduction. (2) We reduce 3SUM to the multiphase problem. Ours is the first nonalgebraic reduction from 3SUM, and allows 3SUM-hardness results for combinatorial problems. For instance, it implies hardness of reporting all triangles in a graph. (3) It is plausible that an unconditional lower bound for the multiphase problem can be established via a number-on-forehead communication game."
]
}
|
1301.5887
|
1968414620
|
Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge-sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges...
|
The wedge-sampling approach used in this paper, first discussed by Schank and Wagner @cite_1 , is a sampling approach with the high accuracy and speed advantages of other sampling-based methods (like Doulion) but with a hard bound on the variance. Previous work by a subset of the authors of this paper @cite_35 presents a detailed empirical study of wedge sampling. It was also shown that wedge sampling can compute a variety of triangle-based metrics, including degree-wise clustering coefficients and uniformly sampled random triangles. This distinguishes wedge sampling from previous sampling methods, which can only estimate the total number of triangles.
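The wedge-sampling idea is simple enough to sketch in a few lines. The following is an illustrative Python sketch, not the authors' implementation; the adjacency-dictionary representation and the function name are assumptions made for the example:

```python
import random

def clustering_coefficient_estimate(adj, num_samples=10000, seed=0):
    """Estimate the global clustering coefficient by wedge sampling.

    adj: dict mapping each node to the set of its neighbors.
    A wedge is a path u-v-w centered at v; it is "closed" if (u, w)
    is also an edge. The clustering coefficient is the fraction of
    closed wedges, so sampling wedges uniformly and checking closure
    gives an unbiased estimate.
    """
    rng = random.Random(seed)
    nodes = [v for v in adj if len(adj[v]) >= 2]
    if not nodes:
        return 0.0
    # A node v is the center of C(deg(v), 2) wedges; sampling centers
    # with these weights makes the wedge choice uniform overall.
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    closed = 0
    for _ in range(num_samples):
        v = rng.choices(nodes, weights=weights, k=1)[0]
        u, w = rng.sample(sorted(adj[v]), 2)  # two distinct neighbors of v
        if w in adj[u]:
            closed += 1
    return closed / num_samples
```

Because each sample is an independent Bernoulli trial, a Hoeffding bound on the sample mean yields the hard variance guarantee mentioned above.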
|
{
"cite_N": [
"@cite_35",
"@cite_1"
],
"mid": [
"1614522028",
"2116759966"
],
"abstract": [
"Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of a graph. Some of the most useful graph metrics, especially those measuring social cohesion, are based on triangles. Despite the importance of these triadic measures, associated algorithms can be extremely expensive. We propose a new method based on wedge sampling. This versatile technique allows for the fast and accurate approximation of all current variants of clustering coefficients and enables rapid uniform sampling of the triangles of a graph. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state-of-the-art, while providing nearly the accuracy of full enumeration. Our results will enable more wide-scale adoption of triadic measures for analysis of extremely large graphs, as demonstrated on several real-world examples.",
"Since its introduction in the year 1998 by Watts and Strogatz, the clustering coefficient has become a frequently used tool for analyzing graphs. In 2002 the transitivity was proposed by Newman, Watts and Strogatz as an alternative to the clustering coefficient. As many networks considered in complex systems are huge, the efficient computation of such network parameters is crucial. Several algorithms with polynomial running time can be derived from results known in graph theory. The main contribution of this work is a new fast approximation algorithm for the weighted clustering coefficient which also gives very efficient approximation algorithms for the clustering coefficient and the transitivity. We namely present an algorithm with running time in O(1) for the clustering coefficient, respectively with running time in O(n) for the transitivity. By an experimental study we demonstrate the performance of the proposed algorithms on real-world data as well as on generated graphs. Moreover we give a simple graph generator algorithm that works according to the preferential attachment rule but also generates graphs with adjustable clustering coefficient."
]
}
|
1301.5887
|
1968414620
|
Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge-sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges...
|
MapReduce @cite_24 is a conceptual programming model for processing massive data sets. The most popular implementation is the open-source Apache Hadoop @cite_16 along with the Apache Hadoop Distributed File System (HDFS) @cite_16 , which we have used in our experiments. MapReduce assumes that the data is distributed across storage in roughly equal-sized blocks. The MapReduce paradigm divides a parallel program into two parts: a map step and a reduce step. During the map step, each block of data is assigned to a mapper, which processes the data block to emit key-value pairs. The mappers run in parallel and are ideally local to the block of data being processed, minimizing communication overhead. In between the map and reduce steps, a parallel shuffle takes place in order to group all values for each key together. This step is hidden from the user and is extremely efficient. For every key, its values are grouped together and sent to a reducer, which processes the values for a single key and writes the result to file. All keys are processed in parallel.
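The map/shuffle/reduce flow described above can be modeled in-process. This is a minimal single-machine sketch for illustration only; a real Hadoop job distributes these same steps across machines, and the function names here are assumptions:

```python
from itertools import groupby
from operator import itemgetter

def map_reduce(blocks, mapper, reducer):
    """Toy model of the MapReduce flow.

    blocks:  iterable of data blocks, each handed to a mapper.
    mapper:  block -> iterable of (key, value) pairs.
    reducer: (key, values) -> (key, result).
    The shuffle (grouping all values by key) is modeled here by a
    sort followed by groupby; a real framework performs it across
    the cluster, hidden from the user.
    """
    pairs = [kv for block in blocks for kv in mapper(block)]
    pairs.sort(key=itemgetter(0))  # the "shuffle": bring equal keys together
    return [reducer(k, [v for _, v in group])
            for k, group in groupby(pairs, key=itemgetter(0))]

# Word count, the canonical example: mappers emit (word, 1) pairs,
# and each reducer sums the counts for one word.
def wc_map(block):
    return [(word, 1) for word in block.split()]

def wc_reduce(key, values):
    return (key, sum(values))
```

For instance, `map_reduce(["a b a", "b c"], wc_map, wc_reduce)` groups the emitted pairs by word and sums them, mirroring one map step, one shuffle, and one reduce step.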
|
{
"cite_N": [
"@cite_24",
"@cite_16"
],
"mid": [
"2173213060",
"2130904350"
],
"abstract": [
"MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.",
"Relational sub graph analysis, e.g. finding labeled sub graphs in a network, which are isomorphic to a template, is a key problem in many graph related applications. It is computationally challenging for large networks and complex templates. In this paper, we develop SAHAD, an algorithm for relational sub graph analysis using Hadoop, in which the sub graph is in the form of a tree. SAHAD is able to solve a variety of problems closely related with sub graph isomorphism, including counting labeled unlabeled sub graphs, finding supervised motifs, and computing graph let frequency distribution. We prove that the worst case work complexity for SAHAD is asymptotically very close to that of the best sequential algorithm. On a mid-size cluster with about 40 compute nodes, SAHAD scales to networks with up to 9 million nodes and a quarter billion edges, and templates with up to 12 nodes. To the best of our knowledge, SAHAD is the first such Hadoop based subgraph subtree analysis algorithm, and performs significantly better than prior approaches for very large graphs and templates. Another unique aspect is that SAHAD is also amenable to running quite easily on Amazon EC2, without needs for any system level optimization."
]
}
|
1301.5887
|
1968414620
|
Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge-sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges...
|
MapReduce has been used for network and graph analysis in a variety of contexts. It is a natural choice, if for no other reason than the fact that it is widely deployed @cite_60 . Pegasus @cite_26 is a general library for large-scale graph processing; the largest graph they considered had 1.4M vertices and 6.6M edges, with the PageRank analytic, but they did not report execution times. Lin and Schatz @cite_50 propose some special techniques for graph algorithms, such as PageRank, that depend on matrix-vector products. MapReduce sampling-based techniques that reduce the overall graph size are discussed in @cite_0 .
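To make the repeated matrix-vector products concrete, here is a hedged power-iteration sketch of the PageRank analytic mentioned above (illustrative only — not the Pegasus or Lin-Schatz implementation; each iteration corresponds to one sparse matrix-vector product, i.e. one MapReduce round in those frameworks):

```python
def pagerank(out_links, damping=0.85, iters=50):
    """Power-iteration PageRank on a link dictionary.

    out_links: dict mapping each node to the list of nodes it links to.
    Each iteration applies the (sparse) transition matrix to the
    current rank vector, plus the uniform teleportation term.
    """
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, targets in out_links.items():
            if targets:
                share = damping * rank[v] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node: spread its rank uniformly over all nodes.
                for t in nodes:
                    new[t] += damping * rank[v] / n
        rank = new
    return rank
```

In a MapReduce realization, the inner loop over `(v, targets)` becomes the map step (emit partial rank contributions keyed by target) and the accumulation into `new` becomes the reduce step.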
|
{
"cite_N": [
"@cite_0",
"@cite_26",
"@cite_50",
"@cite_60"
],
"mid": [
"2153977620",
"2167927436",
"2020302654",
"2169541495"
],
"abstract": [
"The MapReduce framework is currently the de facto standard used throughout both industry and academia for petabyte scale data analysis. As the input to a typical MapReduce computation is large, one of the key requirements of the framework is that the input cannot be stored on a single machine and must be processed in parallel. In this paper we describe a general algorithmic design technique in the MapReduce framework called filtering. The main idea behind filtering is to reduce the size of the input in a distributed fashion so that the resulting, much smaller, problem instance can be solved on a single machine. Using this approach we give new algorithms in the MapReduce framework for a variety of fundamental graph problems for sufficiently dense graphs. Specifically, we present algorithms for minimum spanning trees, maximal matchings, approximate weighted matchings, approximate vertex and edge covers and minimum cuts. In all of these cases, we parameterize our algorithms by the amount of memory available on the machines allowing us to show tradeoffs between the memory available and the number of MapReduce rounds. For each setting we will show that even if the machines are only given substantially sublinear memory, our algorithms run in a constant number of MapReduce rounds. To demonstrate the practical viability of our algorithms we implement the maximal matching algorithm that lies at the core of our analysis and show that it achieves a significant speedup over the sequential version.",
"In this paper, we describe PEGASUS, an open source Peta Graph Mining library which performs typical graph mining tasks such as computing the diameter of the graph, computing the radius of each node and finding the connected components. As the size of graphs reaches several Giga-, Tera- or Peta-bytes, the necessity for such a library grows too. To the best of our knowledge, PEGASUS is the first such library, implemented on the top of the Hadoop platform, the open source version of MapReduce. Many graph mining operations (PageRank, spectral clustering, diameter estimation, connected components etc.) are essentially a repeated matrix-vector multiplication. In this paper we describe a very important primitive for PEGASUS, called GIM-V (Generalized Iterated Matrix-Vector multiplication). GIM-V is highly optimized, achieving (a) good scale-up on the number of available machines (b) linear running time on the number of edges, and (c) more than 5 times faster performance over the non-optimized version of GIM-V. Our experiments ran on M45, one of the top 50 supercomputers in the world. We report our findings on several real graphs, including one of the largest publicly available Web Graphs, thanks to Yahoo!, with 6,7 billion edges.",
"Graphs are analyzed in many important contexts, including ranking search results based on the hyperlink structure of the world wide web, module detection of proteinprotein interaction networks, and privacy analysis of social networks. Many graphs of interest are difficult to analyze because of their large size, often spanning millions of vertices and billions of edges. As such, researchers have increasingly turned to distributed solutions. In particular, MapReduce has emerged as an enabling technology for large-scale graph processing. However, existing best practices for MapReduce graph algorithms have significant shortcomings that limit performance, especially with respect to partitioning, serializing, and distributing the graph. In this paper, we present three design patterns that address these issues and can be used to accelerate a large class of graph algorithms based on message passing, exemplified by PageRank. Experiments show that the application of our design patterns reduces the running time of PageRank on a web graph with 1.4 billion edges by 69%.",
"Abstract Hadoop is currently the large-scale data analysis “hammer” of choice, but there exist classes of algorithms that aren't “nails” in the sense that they are not particularly amenable to the MapReduce programming model. To address this, researchers have proposed MapReduce extensions or alternative programming models in which these algorithms can be elegantly expressed. This article espouses a very different position: that MapReduce is “good enough,” and that instead of trying to invent screwdrivers, we should simply get rid of everything that's not a nail. To be more specific, much discussion in the literature surrounds the fact that iterative algorithms are a poor fit for MapReduce. The simple solution is to find alternative, noniterative algorithms that solve the same problem. This article captures my personal experiences as an academic researcher as well as a software engineer in a “real-world” production analytics environment. From this combined perspective, I reflect on the current state and fu..."
]
}
|
1301.5898
|
2088332638
|
We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that it performs very well, for the calibration problem, for tractable system sizes.
|
There are several algorithms suggested and tested for dictionary learning, see e.g. @cite_18 @cite_17 @cite_8 @cite_4 . The algorithm we derive in this paper is closely related to, but different from, the bilinear AMP proposed in @cite_2 , as explained in Sec. .
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_2",
"@cite_17"
],
"mid": [
"2105464873",
"2005876975",
"2160547390",
"",
"2115429828"
],
"abstract": [
"The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. © 1997 Elsevier Science Ltd",
"Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.",
"In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data",
"",
"A frame design technique for use with vector selection algorithms, for example matching pursuits (MP), is presented. The design algorithm is iterative and requires a training set of signal vectors. The algorithm, called method of optimal directions (MOD), is an improvement of the algorithm presented by Engan, Aase and Husoy (see Proc. ICASSP '98, Seattle, USA, p. 1817-20, 1998). The MOD is applied to speech and electrocardiogram (ECG) signals, and the designed frames are tested on signals outside the training sets. Experiments demonstrate that the approximation capabilities, in terms of mean squared error (MSE), of the optimized frames are significantly better than those obtained using frames designed by the algorithm of Engan et al. Experiments show typical reduction in MSE by 20-50%."
]
}
|
1301.5898
|
2088332638
|
We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that it performs very well, for the calibration problem, for tractable system sizes.
|
The question of how many samples @math are necessary for the dictionary to be identifiable has a straightforward lower bound @math , since otherwise there are more unknown variables than measurements and hence exact recovery is clearly impossible. Several works analyzed what is a sufficient number of samples for exact recovery. While early rigorous results were only able to show learnability from exponentially many samples @cite_14 , more recent analysis of convex relaxation based approaches shows that @math samples are needed for @math @cite_3 and polynomially many for @math @cite_22 . Another study of sample complexity for dictionary learning for @math establishes a @math bound for the number of samples @cite_5 . A very recent non-rigorous work suggested that @math samples should be sufficient to identify the dictionary @cite_6 . That work was based on a replica analysis of the problem, but did not analyze the Bayes-optimal approach.
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_3",
"@cite_6",
"@cite_5"
],
"mid": [
"2155981690",
"2962959718",
"2127114523",
"1994395934",
"2963641291"
],
"abstract": [
"A full-rank under-determined linear system of equations Ax = b has in general infinitely many possible solutions. In recent years there is a growing interest in the sparsest solution of this equation—the one with the fewest non-zero entries, measured by ∥x∥_0. Such solutions find applications in signal and image processing, where the topic is typically referred to as “sparse representation”. Considering the columns of A as atoms of a dictionary, it is assumed that a given signal b is a linear composition of few such atoms. Recent work established that if the desired solution x is sparse enough, uniqueness of such a result is guaranteed. Also, pursuit algorithms, approximation solvers for the above problem, are guaranteed to succeed in finding this solution. Armed with these recent results, the problem can be reversed, and formed as an implied matrix factorization problem: Given a set of vectors {b_i}, known to emerge from such sparse constructions, Ax_i = b_i, with sufficiently sparse representations x_i, we seek the matrix A. In this paper we present both theoretical and algorithmic studies of this problem. We establish the uniqueness of the dictionary A, depending on the quantity and nature of the set {b_i}, and the sparsity of {x_i}. We also describe a recently developed algorithm, the K-SVD, that practically finds the matrix A, in a manner similar to the K-Means algorithm. Finally, we demonstrate this algorithm on several stylized applications in image processing.",
"The idea that many important classes of signals can be well-represented by linear combinations of a small set of atoms selected from a given dictionary has had dramatic impact on the theory and practice of signal processing. For practical problems in which an appropriate sparsifying dictionary is not known ahead of time, a very popular and successful heuristic is to search for a dictionary that minimizes an appropriate sparsity surrogate over a given set of sample data. While this idea is appealing, the behavior of these algorithms is largely a mystery; although there is a body of empirical evidence suggesting they do learn very effective representations, there is little theory to guarantee when they will behave correctly, or when the learned dictionary can be expected to generalize. In this paper, we take a step towards such a theory. We show that under mild hypotheses, the dictionary learning problem is locally well-posed: the desired solution is indeed a local minimum of the ℓ1 norm. Namely, if A ∈ R^{m×n} is an incoherent (and possibly overcomplete) dictionary, and the coefficients X ∈ R^{n×p} follow a random sparse model, then with high probability (A, X) is a local minimum of the ℓ1 norm over the manifold of factorizations (A′, X′) satisfying A′X′ = Y, provided the number of samples p =",
"This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via l1-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y_1 … y_N), y_n ∈ ℝ^d, of training signals into a d × K dictionary matrix Φ and a K × N coefficient matrix X = (x_1 … x_N), x_n ∈ ℝ^K, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as local minimum of a (nonconvex) l1-criterion with input Y = ΦX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows up to a logarithmic factor only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.",
"Finding a basis matrix (dictionary) by which objective signals are represented sparsely is of major relevance in various scientific and technological fields. We consider a problem to learn a dictionary from a set of training signals. We employ techniques of statistical mechanics of disordered systems to evaluate the size of the training set necessary to typically succeed in the dictionary learning. The results indicate that the necessary size is much smaller than previously estimated, which theoretically supports and or encourages the use of dictionary learning in practical situations.",
"A large set of signals can sometimes be described sparsely using a dictionary, that is, every element can be represented as a linear combination of few elements from the dictionary. Algorithms for various signal processing applications, including classification, denoising and signal separation, learn a dictionary from a given set of signals to be represented. Can we expect that the error in representing by such a dictionary a previously unseen signal from the same source will be of similar magnitude as those for the given examples? We assume signals are generated from a fixed distribution, and study these questions from a statistical learning theory perspective. We develop generalization bounds on the quality of the learned dictionary for two types of constraints on the coefficient selection, as measured by the expected L2 error in representation when the dictionary is used. For the case of l1 regularized coefficient selection we provide a generalization bound of the order of O(√(np ln(mλ)/m)), where n is the dimension, p is the number of elements in the dictionary, λ is a bound on the l1 norm of the coefficient vector and m is the number of samples, which complements existing results. For the case of representing a new signal as a combination of at most k dictionary elements, we provide a bound of the order O(√(np ln(mk)/m)) under an assumption on the closeness to orthogonality of the dictionary (low Babel function). We further show that this assumption holds for most dictionaries in high dimensions in a strong probabilistic sense. Our results also include bounds that converge as 1/m, not previously known for this problem. We provide similar results in a general setting using kernels with weak smoothness requirements."
]
}
|
1301.5898
|
2088332638
|
We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that it performs very well, for the calibration problem, for tractable system sizes.
|
Several works have also considered blind calibration, where only an uncertain version of the matrix @math is known, but one has access to many signals and their measurements, so that calibration of the matrix @math is possible; see e.g. @cite_15 and references therein. Cases where both the signal and the dictionary are sparse have also been considered in the literature, e.g. @cite_25 , and our theory can be applied to these as well.
|
{
"cite_N": [
"@cite_15",
"@cite_25"
],
"mid": [
"2102662752",
"2099321050"
],
"abstract": [
"We consider the problem of calibrating a compressed sensing measurement system under the assumption that the decalibration consists in unknown gains on each measure. We focus on blind calibration, using measures performed on a few unknown (but sparse) signals. A naive formulation of this blind calibration problem, using l1 minimization, is reminiscent of blind source separation and dictionary learning, which are known to be highly non-convex and riddled with local minima. In the considered context, we show that in fact this formulation can be exactly expressed as a convex optimization problem, and can be solved using off-the-shelf algorithms. Numerical simulations demonstrate the effectiveness of the approach even for highly uncalibrated measures, when a sufficient number of (unknown, but sparse) calibrating signals is provided. We observe that the success/failure of the approach seems to obey sharp phase transitions.",
"An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ? A, where ? is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but non-efficient and costly to deploy. In this paper, we discuss the advantages of sparse dictionaries, and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising."
]
}
|
1301.5522
|
2135119290
|
This paper considers the Gaussian half-duplex relay channel (G-HD-RC): a channel model where a source transmits a message to a destination with the help of a relay that cannot transmit and receive at the same time. It is shown that the cut-set upper bound on the capacity can be achieved to within a constant gap, regardless of the actual value of the channel parameters, by either partial-decode-and-forward or compress-and-forward. The performance of these coding strategies is evaluated with both random and deterministic switch at the relay. Numerical evaluations show that the actual gap is less than what analytically obtained, and that random switch achieves higher rates than deterministic switch. As a result of this analysis, the generalized degrees-of-freedom of the G-HD-RC is exactly characterized for this channel. In order to get insights into practical schemes for the G-HD-RC that are less complex than partial-decode-and-forward or compress-and-forward, the exact capacity of the linear deterministic approximation (LDA) of the G-HD-RC at high signal-to-noise-ratio is determined. It is shown that random switch and correlated nonuniform inputs bits are optimal for the LDA. It is then demonstrated that deterministic switch is to within one bit from the capacity. This latter scheme is translated into a coding strategy for the original G-HD-RC and its optimality to within a constant gap is proved. The gap attained by this scheme is larger than that of partial-decode-and-forward, thereby pointing to an interesting practical tradeoff between gap to capacity and complexity.
|
The HD-RC was studied by Host-Madsen in @cite_12 . Here the author derives both an upper and a lower bound on the capacity: the former is based on cut-set arguments, while the latter exploits the Partial-Decode-and-Forward (PDF) strategy, in which the relay decodes only part of the message sent by the source. Host-Madsen considers the transmit/listen state of the relay as fixed and therefore known a priori to all nodes.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2109170655"
],
"abstract": [
"We consider wireless relaying: one or more nodes in a wireless (ad-hoc) network assist other nodes in their transmission by partially retransmitting messages. A characteristic of wireless relays - as compared to the work by T.M. Cover and A.A. El Gamal on the relay channel (see IEEE Trans. on Inf. Theory, vol.25, no.5, p.572-84, 1979) - is that they cannot transmit and receive simultaneously at the same frequency. We derive new upper and lower bounds for the capacity of this wireless relay channel. We then apply these result to a 4 terminal network, and show that the gain (considering outage capacity) of using wireless relaying is of the order of 8-9 dB."
]
}
|
1301.5522
|
2135119290
|
This paper considers the Gaussian half-duplex relay channel (G-HD-RC): a channel model where a source transmits a message to a destination with the help of a relay that cannot transmit and receive at the same time. It is shown that the cut-set upper bound on the capacity can be achieved to within a constant gap, regardless of the actual value of the channel parameters, by either partial-decode-and-forward or compress-and-forward. The performance of these coding strategies is evaluated with both random and deterministic switch at the relay. Numerical evaluations show that the actual gap is less than what analytically obtained, and that random switch achieves higher rates than deterministic switch. As a result of this analysis, the generalized degrees-of-freedom of the G-HD-RC is exactly characterized for this channel. In order to get insights into practical schemes for the G-HD-RC that are less complex than partial-decode-and-forward or compress-and-forward, the exact capacity of the linear deterministic approximation (LDA) of the G-HD-RC at high signal-to-noise-ratio is determined. It is shown that random switch and correlated nonuniform inputs bits are optimal for the LDA. It is then demonstrated that deterministic switch is to within one bit from the capacity. This latter scheme is translated into a coding strategy for the original G-HD-RC and its optimality to within a constant gap is proved. The gap attained by this scheme is larger than that of partial-decode-and-forward, thereby pointing to an interesting practical tradeoff between gap to capacity and complexity.
|
In @cite_8 , Kramer shows that larger rates can be achieved by using a random transmit/listen switch strategy at the relay. In this way, the source and the relay can harness the randomness that lies in the switch in order to transmit extra information. An important observation of @cite_8 is that there is no need to develop a separate theory for memoryless networks with HD nodes, as the HD constraints can be incorporated into the memoryless FD framework. In this work we shall adopt this approach in deriving outer and inner bounds for HD relay networks.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2159965721"
],
"abstract": [
"Relay channels where terminals cannot receive and transmit at the same time are modeled as being memoryless with cost constraints. Cost functions are considered that measure the power consumed in each of three sleep-listen-or-talk (SLoT) modes, as well as the fraction of time the modes are used. It is shown that strategies that have the SLoT modes known ahead of time by all terminals are generally suboptimal. It is further shown that Gaussian input distributions are generally suboptimal for Gaussian channels. For several types of models and SLoT constraints, it is shown that multi-hopping (or decode-and-forward) achieves the information-theoretic capacity if the relay is geometrically near the source terminal, and if the fraction of time the relay listens to the source is lower bounded by a positive number. SLoT constraints for which the capacity claim might not be valid are discussed. Finally, it is pointed out that a lack of symbol synchronization between the relays has little or no effect on the capacity theorems if the signals are bandlimited and if independent input signals are optimal."
]
}
|
1301.4839
|
1524809281
|
Summary Service-Oriented Computing (SOC) enables the composition of loosely coupled service agents provided with varying Quality of Service (QoS) levels, effectively forming a multiagent system (MAS). Selecting a (near-)optimal set of services for a composition in terms of QoS is crucial when many functionally equivalent services are available. As the number of distributed services, especially in the cloud, is rising rapidly, the impact of the network on the QoS keeps increasing. Despite this and opposed to most MAS approaches, current service approaches depend on a centralized architecture which cannot adapt to the network. Thus, we propose a scalable distributed architecture composed of a flexible number of distributed control nodes. Our architecture requires no changes to existing services and adapts from a centralized to a completely distributed realization by adding control nodes as needed. Also, we propose an extended QoS aggregation algorithm that allows to accurately estimate network QoS. Finally, we evaluate the benefits and optimality of our architecture in a distributed environment.
|
The foundation for our research is given in @cite_3 , where the QoS-aware composition problem (CP) is introduced. Common notions, which we also use, are given, and the problem is formalized and solved with (Linear) Integer Programming (IP), which is still a common way to obtain optimal solutions for the CP. A genetic algorithm (GA) is used in @cite_17 @cite_19 . In addition, many efficient heuristic algorithms have been introduced in @cite_11 @cite_9 @cite_0 , and most recently in @cite_4 @cite_22 @cite_15 . All these approaches share the same definition of the CP, which ignores the QoS of the network connecting the services. Except for IP, which requires a linear function to compute the utility of a workflow, most approaches can easily be augmented with our two-phased QoS algorithm.
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_17",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_11"
],
"mid": [
"1996162665",
"2136233250",
"2142424862",
"2172105112",
"",
"1965810726",
"2541360521",
"2115449863",
"2171075212"
],
"abstract": [
"Web service composition enables seamless and dynamic integration of business applications on the web. The performance of the composed application is determined by the performance of the involved web services. Therefore, non-functional, quality of service aspects are crucial for selecting the web services to take part in the composition. Identifying the best candidate web services from a set of functionally-equivalent services is a multi-criteria decision making problem. The selected services should optimize the overall QoS of the composed application, while satisfying all the constraints specified by the client on individual QoS parameters. In this paper, we propose an approach based on the notion of skyline to effectively and efficiently select services for composition, reducing the number of candidate services to be considered. We also discuss how a provider can improve its service to become more competitive and increase its potential of being included in composite applications. We evaluate our approach experimentally using both real and synthetically generated datasets.",
"Service-Oriented Architecture enables the composition of loosely coupled services provided with varying Quality of Service (QoS) levels. Given a composition, finding the set of services that optimizes some QoS attributes under given QoS constraints has been shown to be NP-hard. Therefore, heuristic algorithms are widely used, finding acceptable solutions in polynomial time. Still the time complexity of such algorithms can be prohibitive for real-time use, especially if the algorithms are required to run until they find near-optimal solutions. Thus, we propose a heuristic approach based on Hill-Climbing that makes effective use of an initial bias computed with Linear Programming, and works on a reduced search space. In our evaluation, we show that our approach finds near-optimal solutions and achieves a low time complexity.",
"Optimizing semantic web service compositions is known to be NP-hard, so most approaches restrict the number of services and offer poor scalability. We address the scalability issue by selecting compositions which satisfy a set of constraints rather than attempting to produce an optimal composition. Firstly, we define constraints within an innovative and extensible quality model designed to balance semantic fit (or functional quality) with quality of service (QoS) metrics. The semantic fit criterion evaluates the quality of semantic links between the semantic description of Web services parameters, whilst QoS focuses on non-functional criteria of services. Coupling these criteria allows us to further constrain and select valid compositions. To allow the use of this model in the context of millions of services as foreseen by the strategic EC-funded project SOA4All, we i) formulate the selection problem as a Constraint Satisfaction Problem and ii) test the use of a stochastic search method. Finally we compare the latter with state-of-the-art approaches.",
"Web services are rapidly changing the landscape of software engineering. One of the most interesting challenges introduced by web services is represented by Quality of Service (QoS)-aware composition and late-binding. This allows to bind, at run-time, a service-oriented system with a set of services that, among those providing the required features, meet some non-functional constraints, and optimize criteria such as the overall cost or response time. In other words, QoS-aware composition can be modeled as an optimization problem. We propose to adopt Genetic Algorithms to this aim. Genetic Algorithms, while being slower than integer programming, represent a more scalable choice, and are more suitable to handle generic QoS attributes. The paper describes our approach and its applicability, advantages and weaknesses, discussing results of some numerical simulations.",
"",
"Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.",
"In today’s businesses we can see the trend that service-oriented architectures (SOA) represent the main paradigm for IT infrastructures. In this setting, software offers its functionality as an electronic service to other software in a network. In order to realise more complex tasks or business processes that are comprised of individual services, compositions of these are formed. Thus, several research efforts cover the creation of service compositions, including their modelling, development",
"We present an optimization approach for service compositions in large-scale service-oriented systems that are subject to Quality of Service (QoS) constraints. In particular, we leverage a composition model that allows a flexible specification of QoS constraints by using constraint hierarchies. We propose an extensible metaheuristic framework for optimizing such compositions. It provides coherent implementation of common metaheuristic functionalities, such as the objective function, improved mutation or neighbor generation. We implement three metaheuristic algorithms that leverage these improved operations. The experiments show the efficiency of these implementations and the improved convergence behavior compared to purely randomized metaheuristic operators.",
"The run-time binding of web services has been recently put forward in order to support rapid and dynamic web service compositions. With the growing number of alternative web services that provide the same functionality but differ in quality parameters, the service composition becomes a decision problem on which component services should be selected such that user's end-to-end QoS requirements (e.g. availability, response time) and preferences (e.g. price) are satisfied. Although very efficient, local selection strategy fails short in handling global QoS requirements. Solutions based on global optimization, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to benefit from the advantages of both worlds. The proposed solution consists of two steps: first, we use mixed integer programming (MIP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use distributed local selection to find the best web services that satisfy these local constraints. The results of experimental evaluation indicate that our approach significantly outperforms existing solutions in terms of computation time while achieving close-to-optimal results."
]
}
|
1301.4839
|
1524809281
|
Summary Service-Oriented Computing (SOC) enables the composition of loosely coupled service agents provided with varying Quality of Service (QoS) levels, effectively forming a multiagent system (MAS). Selecting a (near-)optimal set of services for a composition in terms of QoS is crucial when many functionally equivalent services are available. As the number of distributed services, especially in the cloud, is rising rapidly, the impact of the network on the QoS keeps increasing. Despite this and opposed to most MAS approaches, current service approaches depend on a centralized architecture which cannot adapt to the network. Thus, we propose a scalable distributed architecture composed of a flexible number of distributed control nodes. Our architecture requires no changes to existing services and adapts from a centralized to a completely distributed realization by adding control nodes as needed. Also, we propose an extended QoS aggregation algorithm that allows to accurately estimate network QoS. Finally, we evaluate the benefits and optimality of our architecture in a distributed environment.
|
The previously mentioned approaches all simply aggregate static QoS values defined in SLAs. Time-dependent QoS, evaluated based on the execution time, is considered in @cite_16 . As we will see, our algorithm computes when the execution of each service starts, so we can also compute time-dependent QoS. SLAs with conditionally defined QoS are given in @cite_14 , which can be considered a special case of input-dependent QoS and can thus be handled by our approach as well.
|
{
"cite_N": [
"@cite_14",
"@cite_16"
],
"mid": [
"1512026489",
"1903280123"
],
"abstract": [
"Service selection is a central challenge in the context of a Service Oriented Architecture. Once functionally sufficient services have been selected, a further selection based on non-functional properties (NFPs) becomes essential in meeting the user's requirements and preferences. However, current descriptions of NFPs and approaches to NFP-aware selection lack the ability to handle the variability of NFPs, that stems from the complex nature of real-world business scenarios. Therefore, we propose a probabilistic approach to service selection as follows: First, to address the inherent variability in the actual values of NFPs at runtime, we treat them as probability distributions. Then, on top of that, we tackle the variability needed in describing NFPs, by providing conditional contracts. Finally, from usage patterns, we compute user-specific expectations for such NFPs. Further, we depict a typical scenario, which serves both as a motivation for our approach, and as a basis for its evaluation.",
"Quality of Services (QoS) plays an essential role in realizing user tasks by service composition. Most QoS-aware service composition approaches have ignored the fact that QoS values can depend on the time of execution. Common QoS attributes such as response time may depend for instance on daytime, due to access tendency or conditional Service Level Agreements. Application-specific QoS attributes often have tight relationships with the current state of resources, such as availability of hotel rooms. In response to these problems, this paper proposes an integrated multi-objective approach to QoS-aware service composition and selection."
]
}
|
1301.4839
|
1524809281
|
Summary Service-Oriented Computing (SOC) enables the composition of loosely coupled service agents provided with varying Quality of Service (QoS) levels, effectively forming a multiagent system (MAS). Selecting a (near-)optimal set of services for a composition in terms of QoS is crucial when many functionally equivalent services are available. As the number of distributed services, especially in the cloud, is rising rapidly, the impact of the network on the QoS keeps increasing. Despite this and opposed to most MAS approaches, current service approaches depend on a centralized architecture which cannot adapt to the network. Thus, we propose a scalable distributed architecture composed of a flexible number of distributed control nodes. Our architecture requires no changes to existing services and adapts from a centralized to a completely distributed realization by adding control nodes as needed. Also, we propose an extended QoS aggregation algorithm that allows to accurately estimate network QoS. Finally, we evaluate the benefits and optimality of our architecture in a distributed environment.
|
In @cite_2 , constraints on the choice of providers are given, requiring certain services to be executed by the same provider. Introducing such constraints for critical services could also reduce network delay and transfer times to some extent. However, introducing such heuristic constraints would require a significant effort, while still not necessarily leading to a (near-)optimal solution.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2024250897"
],
"abstract": [
"Service Oriented Architectures enable a multitude of service providers to provide loosely coupled and interoperable services at different Quality of Service and cost levels. This paper considers business processes composed of activities that are supported by service providers. The structure of a business process may be expressed by languages such as BPEL and allows for constructs such as sequence, switch, while, flow, and pick. This paper considers the problem of finding the set of service providers that minimizes the total execution time of the business process subject to cost and execution time constraints. The problem is clearly NP-hard. However, the paper presents an optimized algorithm that finds the optimal solution without having to explore the entire solution space. This algorithm can be used to find the optimal solution in problems of moderate size. A heuristic solution is also presented. Thorough experimental studies, based on random business processes, demonstrate that the heuristic algorithm was able to produce service provider allocations that result in execution times that are only a few percentage points (less than 2.5 ) worse than the allocations obtained by the optimal algorithm while examining a tiny fraction of the solution space (tens of points versus millions of points)."
]
}
|
1301.4839
|
1524809281
|
Summary Service-Oriented Computing (SOC) enables the composition of loosely coupled service agents provided with varying Quality of Service (QoS) levels, effectively forming a multiagent system (MAS). Selecting a (near-)optimal set of services for a composition in terms of QoS is crucial when many functionally equivalent services are available. As the number of distributed services, especially in the cloud, is rising rapidly, the impact of the network on the QoS keeps increasing. Despite this and opposed to most MAS approaches, current service approaches depend on a centralized architecture which cannot adapt to the network. Thus, we propose a scalable distributed architecture composed of a flexible number of distributed control nodes. Our architecture requires no changes to existing services and adapts from a centralized to a completely distributed realization by adding control nodes as needed. Also, we propose an extended QoS aggregation algorithm that allows to accurately estimate network QoS. Finally, we evaluate the benefits and optimality of our architecture in a distributed environment.
|
Many approaches, such as @cite_13 @cite_10 , deal with point-to-point network QoS, but they do not consider services and compositions from SOC. One of the few examples that combines this with SOC is @cite_18 , which looks at service compositions in cloud computing. The difference is that instead of the usual composition problem, a scheduling problem is solved in which services can be deployed on virtual machines at will. Also, no QoS algorithm is given, so it is unclear whether that approach can compute input-dependent QoS and network transfer times.
|
{
"cite_N": [
"@cite_10",
"@cite_18",
"@cite_13"
],
"mid": [
"1991294872",
"1513336745",
"2165622584"
],
"abstract": [
"Motivated by the fact that most of the existing QoS service composition solutions have limited scalability, we develop a hierarchical-based solution framework to achieve scalability by means of topology abstraction and routing state aggregation. The paper presents and solves several unique challenges associated with the hierarchical-based QoS service composition solution in overlay networks, including topology formation (cluster detection and dynamic reclustering), QoS and service state aggregation and distribution, and QoS service path computation in a hierarchically structured network topology. In our framework, we (1) cluster network nodes based on their Internet distances and maintain clustering optimality at low cost by means of local reclustering operations when dealing with dynamic membership; (2) use data clustering and Bloom filter techniques to jointly reduce complexity of data representation associated with services within a cluster; and (3) investigate a top-down approach for computing QoS service paths in a hierarchical topology.",
"Services in cloud computing can be categorized into two groups: Application services and Utility Computing Services. Compositions in the application level are similar to the Web service compositions in SOC (Service-Oriented Computing). Compositions in the utility level are similar to the task matching and scheduling in grid computing. Contributions of this paper include: 1) An extensible QoS model is proposed to calculate the QoS values of services in cloud computing. 2) A genetic-algorithm-based approach is proposed to compose services in cloud computing. 3) A comparison is presented between the proposed approach and other algorithms, i.e., exhaustive search algorithms and random selection algorithms.",
"Next generation networks are envisioned to support dynamic and customizable service compositions at Internet scale. To facilitate the communication between distributed software components, on-demand and QoS-aware network service composition across large scale networks emerges as a key research challenge. This paper presents a fast QoS-aware service composition algorithm for selecting a set of interconnected domains with specific service classes. We further show how such algorithm can be used to support network adaptation and service mobility. In simulation studies performed on large scale networks, the algorithm exhibits very high probability of finding the optimal solution within short execution time. In addition, we present a distributed service composition framework utilizing this algorithm."
]
}
|
1301.4839
|
1524809281
|
Summary Service-Oriented Computing (SOC) enables the composition of loosely coupled service agents provided with varying Quality of Service (QoS) levels, effectively forming a multiagent system (MAS). Selecting a (near-)optimal set of services for a composition in terms of QoS is crucial when many functionally equivalent services are available. As the number of distributed services, especially in the cloud, is rising rapidly, the impact of the network on the QoS keeps increasing. Despite this and opposed to most MAS approaches, current service approaches depend on a centralized architecture which cannot adapt to the network. Thus, we propose a scalable distributed architecture composed of a flexible number of distributed control nodes. Our architecture requires no changes to existing services and adapts from a centralized to a completely distributed realization by adding control nodes as needed. Also, we propose an extended QoS aggregation algorithm that allows to accurately estimate network QoS. Finally, we evaluate the benefits and optimality of our architecture in a distributed environment.
|
In the related field of workflow scheduling, a workflow is mapped to heterogeneous resources (CPUs, virtual machines, etc.), and information about the network is sometimes considered as well. The goal is to achieve a (near-)optimal schedule minimizing the execution time, which is often obtained by greedy heuristic approaches, like HEFT @cite_20 . The reason such greedy algorithms seem to suffice is that only one QoS property (response time) is optimized, and that no QoS constraints have to be adhered to, greatly simplifying the problem. Thus, while the setting is similar to ours, the complexity of the problem is quite different, as we optimize multiple QoS properties under given QoS constraints.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2149294210"
],
"abstract": [
"Efficient application scheduling is critical for achieving high performance in heterogeneous computing environments. The application scheduling problem has been shown to be NP-complete in general cases as well as in several restricted cases. Because of its key importance, this problem has been extensively studied and various algorithms have been proposed in the literature which are mainly for systems with homogeneous processors. Although there are a few algorithms in the literature for heterogeneous processors, they usually require significantly high scheduling costs and they may not deliver good quality schedules with lower costs. In this paper, we present two novel scheduling algorithms for a bounded number of heterogeneous processors with an objective to simultaneously meet high performance and fast scheduling time, which are called the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm. The HEFT algorithm selects the task with the highest upward rank value at each step and assigns the selected task to the processor, which minimizes its earliest finish time with an insertion-based approach. On the other hand, the CPOP algorithm uses the summation of upward and downward rank values for prioritizing tasks. Another difference is in the processor selection phase, which schedules the critical tasks onto the processor that minimizes the total execution time of the critical tasks. In order to provide a robust and unbiased comparison with the related work, a parametric graph generator was designed to generate weighted directed acyclic graphs with various characteristics. The comparison study, based on both randomly generated graphs and the graphs of some real applications, shows that our scheduling algorithms significantly surpass previous approaches in terms of both quality and cost of schedules, which are mainly presented with schedule length ratio, speedup, frequency of best results, and average scheduling time metrics."
]
}
|
1301.4490
|
2953079396
|
Parallel programmers face the often irreconcilable goals of programmability and performance. HPC systems use distributed memory for scalability, thereby sacrificing the programmability advantages of shared memory programming models. Furthermore, the rapid adoption of heterogeneous architectures, often with non-cache-coherent memory systems, has further increased the challenge of supporting shared memory programming models. Our primary objective is to define a memory consistency model that presents the familiar thread-based shared memory programming model, but allows good application performance on non-cache-coherent systems, including distributed memory clusters and accelerator-based systems. We propose regional consistency (RegC), a new consistency model that achieves this objective. Results on up to 256 processors for representative benchmarks demonstrate the potential of RegC in the context of our prototype distributed shared memory system.
|
For a programmer to write correct concurrent applications, the results of memory operations need to be predictable. Memory consistency models describe the rules that make memory accesses predictable. Several such models have been proposed, including sequential consistency (SC) @cite_12 , weak consistency (WC) @cite_14 , processor consistency (PC) @cite_24 , release consistency (RC) @cite_17 , entry consistency (EC) @cite_13 , and scope consistency (ScC) @cite_16 .
|
{
"cite_N": [
"@cite_14",
"@cite_24",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2177772636",
"2101309537",
"2092816328",
"2117665131",
"2054739713",
"2176864362"
],
"abstract": [
"In highly-pipelined machines, instructions and data are prefetched and buffered in both the processor and the cache. This is done to reduce the average memory access latency and to take advantage of memory interleaving. Lock-up free caches are designed to avoid processor blocking on a cache miss. Write buffers are often included in a pipelined machine to avoid processor waiting on writes. In a shared memory multiprocessor, there are more advantages in buffering memory requests, since each memory access has to traverse the memory- processor interconnection and has to compete with memory requests issued by different processors. Buffering, however, can cause logical problems in multiprocessors. These problems are aggravated if each processor has a private memory in which shared writable data may be present, such as in a cache-based system or in a system with a distributed global memory. In this paper, we analyze the benefits and problems associated with the buffering of memory requests in shared memory multiprocessors. We show that the logical problem of buffering is directly related to the problem of synchronization. A simple model is presented to evaluate the performance improvement resulting from buffering.",
"",
"Systems that maintain coherence at large granularity, such as shared virtual memory systems, suffer from false sharing and extra communication. Relaxed memory consistency models have been used to alleviate these problems, but at a cost in programming complexity. Release Consistency (RC) and Lazy Release Consistency (LRC) are accepted to offer a reasonable tradeoff between performance and programming complexity. Entry Consistency (EC) offers a more relaxed consistency model, but it requires explicit association of shared data objects with synchronization variables. The programming burden of providing such associations can be substantial.",
"The authors describe the motivation, design, and performance of Midway, a programming system for a distributed shared memory multicomputer (DSM) such as an ATM-based cluster, a CM-5, or a Paragon. Midway supports a novel memory consistency model called entry consistency (EC). EC guarantees that shared data become consistent at a processor when the processor acquires a synchronization object known to guard the data. EC is weaker than other models described in the literature, such as processor consistency and release consistency, but it makes possible higher performance implementations of the underlying consistency protocols. Midway programs are written in C, and the association between synchronization objects and data must be made with explicit annotations. As a result, pure entry consistent programs can require more annotations than programs written to other models. Midway also supports the stronger release consistent and processor consistent models at the granularity of individual data items. >",
"Many large sequential computers execute operations in a different order than is specified by the program. A correct execution is achieved if the results produced are the same as would be produced by executing the program steps in order. For a multiprocessor computer, such a correct execution by each processor does not guarantee the correct execution of the entire program. Additional conditions are given which do guarantee that a computer correctly executes multiprocess programs.",
"Scalable shared-memory multiprocessors distribute memory among the processors and use scalable interconnection networks to provide high bandwidth and low latency communication. In addition, memory accesses are cached, buffered, and pipelined to bridge the gap between the slow shared memory and the fast processors. Unless carefully controlled, such architectural optimizations can cause memory accesses to be executed in an order different from what the programmer expects. The set of allowable memory access orderings forms the memory consistency model or event ordering model for an architecture. This paper introduces a new model of memory consistency, called release consistency , that allows for more buffering and pipelining than previously proposed models. A framework for classifying shared accesses and reasoning about event ordering is developed. The release consistency model is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization. Possible performance gains from the less strict constraints of the release consistency model are explored. Finally, practical implementation issues are discussed, concentrating on issues relevant to scalable architectures."
]
}
|
1301.4490
|
2953079396
|
Parallel programmers face the often irreconcilable goals of programmability and performance. HPC systems use distributed memory for scalability, thereby sacrificing the programmability advantages of shared memory programming models. Furthermore, the rapid adoption of heterogeneous architectures, often with non-cache-coherent memory systems, has further increased the challenge of supporting shared memory programming models. Our primary objective is to define a memory consistency model that presents the familiar thread-based shared memory programming model, but allows good application performance on non-cache-coherent systems, including distributed memory clusters and accelerator-based systems. We propose regional consistency (RegC), a new consistency model that achieves this objective. Results on up to 256 processors for representative benchmarks demonstrate the potential of RegC in the context of our prototype distributed shared memory system.
|
Though associating shared data with synchronization primitives reduces the overhead of data transfer among processors, EC is hindered by the increased complexity of making these associations explicit. Programming under EC is complicated and can be error prone. Scope consistency (ScC) avoids the explicit association of shared data with synchronization primitives: it detects the association dynamically at the granularity of pages, thus providing a simpler programming model. The implicit association of memory accesses with synchronization primitives is termed the consistency scope. The ScC model defines the following rules: (1) before a new session of a consistency scope is allowed to open at a processor, all previous writes performed with respect to that scope must be performed at the processor, and (2) access to memory is allowed at a processor only after all the associated consistency scopes have been successfully opened. Though ScC presents a relaxed consistency model, the programming model exposed to the user is complex compared to RC or LRC. The authors of @cite_16 mention that precautions need to be taken to ensure that a program runs correctly under ScC, the primary challenge being that all accesses to shared data must be made inside critical sections.
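The two ScC rules can be illustrated with a deliberately tiny toy model. This is not an implementation of any real DSM system; the class and method names are invented, and propagation is reduced to copying a scope's pending writes when the scope is opened:

```python
# Toy model of scope consistency (ScC): a write becomes visible at another
# processor only after that processor opens the scope guarding it.
# Illustrative sketch only; names and structure are invented for this example.

class ScopedMemory:
    def __init__(self):
        self.scope_updates = {}   # scope -> {addr: value} pending propagation
        self.proc_view = {}       # (proc, addr) -> locally visible value

    def open_scope(self, proc, scope):
        # Rule (1): all writes previously performed with respect to this scope
        # must be performed at `proc` before the new session opens.
        for addr, val in self.scope_updates.get(scope, {}).items():
            self.proc_view[(proc, addr)] = val

    def write(self, proc, scope, addr, val):
        # Rule (2), simplified: shared accesses happen inside an open scope.
        self.proc_view[(proc, addr)] = val
        self.scope_updates.setdefault(scope, {})[addr] = val

    def read(self, proc, addr, default=None):
        return self.proc_view.get((proc, addr), default)
```

With this model, a write by processor 0 inside scope "L" stays invisible to processor 1 until processor 1 opens "L", mirroring the page-granularity invalidation ScC performs at scope boundaries.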
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2092816328"
],
"abstract": [
"Systems that maintain coherence at large granularity, such as shared virtual memory systems, suffer from false sharing and extra communication. Relaxed memory consistency models have been used to alleviate these problems, but at a cost in programming complexity. Release Consistency (RC) and Lazy Release Consistency (LRC) are accepted to offer a reasonable tradeoff between performance and programming complexity. Entry Consistency (EC) offers a more relaxed consistency model, but it requires explicit association of shared data objects with synchronization variables. The programming burden of providing such associations can be substantial."
]
}
|
1301.4451
|
1515170449
|
For a finite binary string @math its logical depth @math for significance @math is the shortest running time of a program for @math of length @math . There is another definition of logical depth. We give a new proof that the two versions are close. There is an infinite sequence of strings of consecutive lengths such that for every string there is a @math such that incrementing @math by 1 makes the associated depths go from incomputable to computable. The maximal gap between depths resulting from incrementing appropriate @math 's by 1 is incomputable. The size of this gap is upper bounded by the Busy Beaver function. Both the upper and the lower bound hold for the depth with significance 0. As a consequence, the minimal computation time of the associated shortest programs rises faster than any computable function but not so fast as the Busy Beaver function.
|
The minimum time to compute a string by a @math -incompressible program was first considered in @cite_9 . This minimum time is called the logical depth at significance @math of the string concerned. Definitions, variations, discussion, and early results can be found in that reference. A more formal treatment, as well as an intuitive approach, is given in the textbook @cite_6 , Section 7.7. In @cite_8 the notion of computational depth is defined as @math (see the definitions below). This would equal the negative logarithm of the expression @math in Definition if the following were proved. Since @cite_7 proved in the so-called Coding Theorem that @math holds up to a constant additive term, it remains to prove @math up to a small additive term. The last equality is a major open problem in Kolmogorov complexity theory; see @cite_6 , Exercises 7.6.3 and 7.6.4.
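For concreteness, the definitions behind the scrubbed @math placeholders can be written out in their standard forms. This is a reconstruction following the cited literature, not the paper's exact notation:

```latex
% Logical depth at significance b (Bennett @cite_9) and basic computational
% depth (@cite_8); U is a fixed universal prefix machine, K is prefix
% complexity, \ell(p) the length of program p, and \mathbf{m} the universal
% semimeasure of the Coding Theorem @cite_7.
\mathrm{depth}_{b}(x) \;=\; \min\bigl\{\, \mathrm{time}_{U}(p) \;:\; U(p)=x,\ \ell(p) \le K(x)+b \,\bigr\}

\mathrm{cd}^{\,t}(x) \;=\; K^{t}(x) - K(x)

\text{Coding Theorem:}\quad K(x) \;=\; -\log \mathbf{m}(x) + O(1)
```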
|
{
"cite_N": [
"@cite_9",
"@cite_7",
"@cite_6",
"@cite_8"
],
"mid": [
"2281664623",
"33555989",
"1638203394",
"2072053479"
],
"abstract": [
"Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Lof random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.",
"",
"The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field. Journal of Symbolic Logic Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Loef tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Included are also numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, KolmogorovLoveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information and others.",
"We introduce Computational Depth, a measure for the amount of \"nonrandom\" or \"useful\" information in a string by considering the difference of various Kolmogorov complexity measures. We investigate three instantiations of Computational Depth: • Basic Computational Depth, a clean notion capturing the spirit of Bennett's Logical Depth. We show that a Turing machine M runs in time polynomial on average over the time-bounded universal distribution if and only if for all inputs x, M uses time exponential in the basic computational depth of x. • Sublinear-time Computational Depth and the resulting concept of Shallow Sets, a generalization of sparse and random sets based on low depth properties of their characteristic sequences. We show that every computable set that is reducible to a shallow set has polynomial-size circuits. • Distinguishing Computational Depth, measuring when strings are easier to recognize than to produce. We show that if a Boolean formula has a nonnegligible fraction of its satisfying assignments with low depth, then we can find a satisfying assignment efficiently."
]
}
|
1301.4546
|
2069739090
|
Abstract In the coming era of exascale supercomputing, in-situ visualization will be a crucial approach for reducing the output data size. A problem of in-situ visualization is that it loses interactivity if a steering method is not adopted. In this paper, we propose a new method for the interactive analysis of in-situ visualization images produced by a batch simulation job. A key idea is to apply numerous (thousands to millions) in-situ visualizations simultaneously. The viewer then analyzes the image database interactively during postprocessing. If each movie can be compressed to 100 MB, one million movies will only require 100 TB, which is smaller than the size of the raw numerical data in exascale supercomputing. We performed a feasibility study using the proposed method. Multiple movie files were produced by a simulation and they were analyzed using a specially designed movie player. The user could change the viewing angle, the visualization method, and the parameters interactively by retrieving an appropriate sequence of images from the movie dataset.
|
In-situ visualization has a long history. @cite_8 conducted a general review of simulation steering and visualization. Examples of in-situ visualization in peta-scale simulations and their technical challenges (especially those caused by massively parallel processing) are summarized in @cite_3 @cite_2 . These studies show that in-situ visualization is a promising solution for peta- and exascale simulations. A natural extension of the online dynamical control of a simulation is the end-to-end approach @cite_12 @cite_10 , in which even mesh generation, usually conducted in the preprocessing stage, is performed on supercomputers. In the latter works, a steering simulation of seismic wave propagation was developed in which the visualization images were shown in real time.
|
{
"cite_N": [
"@cite_8",
"@cite_3",
"@cite_2",
"@cite_10",
"@cite_12"
],
"mid": [
"2155758369",
"2096945381",
"2141005805",
"",
"2149025543"
],
"abstract": [
"Most researchers who perform data analysis and visualization do so only after everything else is finished, which often means that they don't discover errors invalidating the results of their simulation until post-processing. A better approach would be to improve the integration of simulation and visualization into the entire process so that they can make adjustments along the way. This approach, called computational steering, is the capacity to control all aspects of the computational science pipeline. Recently, several tools and environments for computational steering have begun to emerge. These tools range from those that modify an application's performance characteristics (either by automated means or by user interaction) to those that modify the underlying computational application. A refined problem-solving environment should facilitate everything from algorithm development to application steering. The authors discuss some tools that provide a mechanism to integrate modeling, simulation, data analysis and visualization.",
"The growing power of parallel supercomputers gives scientists the ability to simulate more complex problems at higher fidelity, leading to many high-impact scientific advances. To maximize the utilization of the vast amount of data generated by these simulations, scientists also need scalable solutions for studying their data to different extents and at different abstraction levels. As we move into peta- and exa-scale computing, simply dumping as much raw simulation data as the storage capacity allows for post-processing analysis and visualization is no longer a viable approach. A common practice is to use a separate parallel computer to prepare data for subsequent analysis and visualization. A naive realization of this strategy not only limits the amount of data that can be saved, but also turns I O into a performance bottleneck when using a large parallel system. We conjecture that the most plausible solution for the peta- and exa-scale data problem is to reduce or transform the data in-situ as it is being generated, so the amount of data that must be transferred over the network is kept to a minimum. In this paper, we discuss different approaches to in-situ processing and visualization as well as the results of our preliminary study using large-scale simulation codes on massively parallel supercomputers.",
"In situ visualization is clearly a promising solution for ultrascale simulations. We've seen some success in realizing this solution and in ongoing efforts to add support for in situ visualization to open source visualization toolkits such as ParaView and Visit. However, for others to adopt this approach, we need further research and experimental studies to derive a set of guidelines and usable visualization software components. If this research is successful, it will lead to a new visualization and data-understanding infrastructure, potentially change how scientists work, and accelerate scientific discovery. This paper discusses critical issues in realizing in situ visualization and suggest important research directions.",
"",
"Parallel supercomputing has traditionally focused on the inner kernel of scientific simulations: the solver. The front and back ends of the simulation pipeline - problem description and interpretation of the output - have taken a back seat to the solver when it comes to attention paid to scalability and performance, and are often relegated to offline, sequential computation. As the largest simulations move beyond the realm of the terascale and into the petascale, this decomposition in tasks and platforms becomes increasingly untenable. We propose an end-to-end approach in which all simulation components - meshing, partitioning, solver, and visualization - are tightly coupled and execute in parallel with shared data structures and no intermediate I O. We present our implementation of this new approach in the context of octree-based finite element simulation of earthquake ground motion. Performance evaluation on up to 2048 processors demonstrates the ability of the end-to-end approach to overcome the scalability bottlenecks of the traditional approach"
]
}
|
1301.4546
|
2069739090
|
Abstract In the coming era of exascale supercomputing, in-situ visualization will be a crucial approach for reducing the output data size. A problem of in-situ visualization is that it loses interactivity if a steering method is not adopted. In this paper, we propose a new method for the interactive analysis of in-situ visualization images produced by a batch simulation job. A key idea is to apply numerous (thousands to millions) in-situ visualizations simultaneously. The viewer then analyzes the image database interactively during postprocessing. If each movie can be compressed to 100 MB, one million movies will only require 100 TB, which is smaller than the size of the raw numerical data in exascale supercomputing. We performed a feasibility study using the proposed method. Multiple movie files were produced by a simulation and they were analyzed using a specially designed movie player. The user could change the viewing angle, the visualization method, and the parameters interactively by retrieving an appropriate sequence of images from the movie dataset.
|
developed a steering simulation and visualization framework for environmental science in which dynamical control over the Internet was implemented @cite_11 . Their user interface was constructed on web browsers. developed an in-situ visualization system on the Columbia supercomputer for running a weather forecasting model @cite_13 . (Our experimental system described in section is similar to their system because multiple MPEG files are generated by multiple in-situ visualizations. However, their movies are shown separately in each panel of a tiled display system, whereas ours are used as a database.) developed an in-situ visualization system in which the parallel visualization can run on a computer system different from the one running the simulation @cite_4 . In their development, special emphasis was placed on providing a steering environment for existing simulation codes.
|
{
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_11"
],
"mid": [
"2103416281",
"2104789871",
"2143390037"
],
"abstract": [
"We describe a concurrent visualization pipeline designed for operation in a production supercomputing environment. The facility was initially developed on the NASA Ames \"Columbia\" supercomputer for a massively parallel forecast model (GEOS4). During the 2005 Atlantic hurricane season, GEOS4 was run 4 times a day under tight time constraints so that its output could be included in an ensemble prediction that was made available to forecasters at the National Hurricane Center. Given this time-critical context, we designed a configurable concurrent pipeline to visualize multiple global fields without significantly affecting the runtime model performance or reliability. We use MPEG compression of the accruing images to facilitate live low-bandwidth distribution of multiple visualization streams to remote sites. We also describe the use of our concurrent visualization framework with a global ocean circulation model, which provides a 864-fold increase in the temporal resolution of practically achievable animations. In both the atmospheric and oceanic circulation models, the application scientists gained new insights into their model dynamics, due to the high temporal resolution animations attainable",
"In the context of scientific computing, the computational steering consists in the coupling of numerical simulations with 3D visualization systems through the network. This allows scientists to monitor online the intermediate results of their computations in a more interactive way than the batch mode, and allows them to modify the simulation parameters on-the-fly. While most of existing computational steering environments support parallel simulations, they are often limited to sequential visualization systems. This may lead to an important bottleneck and increased rendering time. To achieve the required performance for online visualization, we have designed the EPSN framework, a computational steering environment that enables to interconnect legacy parallel simulations with parallel visualization systems. For this, we have introduced a redistribution algorithm for unstructured data, that is well adapted to the context of M times N computational steering. Then, we focus on the design of our parallel viewer and present some experimental results obtained with a particle-based simulation in astrophysics",
"This paper introduces a novel visualization approach that can effectively facilitate the analysis, control, and refinement of dynamic environmental simulations on the World Wide Web. This approach overcomes drawbacks of current Internet Geographic Information System (GIS) technologies by providing an effective and efficient mechanism for two-way and sustained communication and synchronization between the visualization and the modeling processes. The critical aspect of this approach is the establishment of a virtual environment on the Internet using the applet-servlet-socket architecture that supports real-time interactive and collaborative visualization of an environmental modeling process. In this virtual environment, the model residing in the simulation application server is fed with real-time rainfall data from a remote data server through a socket connection. The computational modeling and visualization can take place simultaneously on the application server and the client sides. As modeling computations proceed on the server, modeling results and outputs stream into the client side continuously. The client interface is updated with live 3D displays (not in the sense of predefined 3D animations provided by AVI or dynamic GIF files). Meanwhile, these 3D graphics can act as a support for further user interactions. A hydrological model, TOPMODEL, is implemented using the proposed web-based visualization environment to demonstrate the applicability of the proposed approach in facilitating environmental modeling and simulation."
]
}
|
1301.4546
|
2069739090
|
Abstract In the coming era of exascale supercomputing, in-situ visualization will be a crucial approach for reducing the output data size. A problem of in-situ visualization is that it loses interactivity if a steering method is not adopted. In this paper, we propose a new method for the interactive analysis of in-situ visualization images produced by a batch simulation job. A key idea is to apply numerous (thousands to millions) in-situ visualizations simultaneously. The viewer then analyzes the image database interactively during postprocessing. If each movie can be compressed to 100 MB, one million movies will only require 100 TB, which is smaller than the size of the raw numerical data in exascale supercomputing. We performed a feasibility study using the proposed method. Multiple movie files were produced by a simulation and they were analyzed using a specially designed movie player. The user could change the viewing angle, the visualization method, and the parameters interactively by retrieving an appropriate sequence of images from the movie dataset.
|
Many in-situ visualizations, with or without steering, have been designed and developed for particular simulation problems. However, general visualization frameworks with high parallel scalability have also become available recently. @cite_7 developed a library that facilitates in-situ visualization using VisIt, which is one of the most sophisticated parallel visualization tools available today. Their paper also contains a concise review of the history and the latest status of in-situ visualization research. @cite_1 reported the development of a coprocessing library for ParaView, another sophisticated parallel visualization tool. Using that library, it is possible to utilize the various visualization functions provided by ParaView at runtime, decoupled from the simulation. The latest case studies of in-situ visualization using VisIt and ParaView can be found in @cite_5 .
|
{
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_7"
],
"mid": [
"",
"1988100325",
"2141981986"
],
"abstract": [
"",
"As high performance computing approaches exascale, CPU capability far outpaces disk write speed, and in situ visualization becomes an essential part of an analyst's workflow. In this paper, we describe the ParaView Coprocessing Library, a framework for in situ visualization and analysis coprocessing. We describe how coprocessing algorithms (building on many from VTK) can be linked and executed directly from within a scientific simulation or other applications that need visualization and analysis. We also describe how the ParaView Coprocessing Library can write out partially processed, compressed, or extracted data readable by a traditional visualization application for interactive post-processing. Finally, we will demonstrate the library's scalability in a number of real-world scenarios.",
"There is a widening gap between compute performance and the ability to store computation results. Complex scientific codes are the most affected since they must save massive files containing meshes and fields for offline analysis. Time and storage costs instead dictate that data analysis and visualization be combined with the simulations themselves, being done in situ so data are transformed to a manageable size before they are stored. Earlier approaches to in situ processing involved combining specific visualization algorithms into the simulation code, limiting flexibility. We introduce a new library which instead allows a fully-featured visualization tool, VisIt, to request data as needed from the simulation and apply visualization algorithms in situ with minimal modification to the application code."
]
}
|
1301.4728
|
2950171160
|
We propose a novel relay augmentation strategy for extending the lifetime of a certain class of wireless sensor networks. In this class sensors are located at fixed and pre-determined positions and all communication takes place via multi-hop paths in a fixed routing tree rooted at the base station. It is assumed that no accumulation of data takes place along the communication paths and that there is no restriction on where additional relays may be located. Under these assumptions the optimal extension of network lifetime is modelled as the Euclidean @math -bottleneck Steiner tree problem. Only two approximation algorithms for this NP-hard problem exist in the literature: a minimum spanning tree heuristic (MSTH) with performance ratio 2, and a probabilistic 3-regular hypergraph heuristic (3RHH) with performance ratio @math . We present a new iterative heuristic that incorporates MSTH and show via simulation that our algorithm performs better than MSTH in extending lifetime, and outperforms 3RHH in terms of efficiency.
|
Sensors are often overtaxed due to an uneven distribution of traffic flow. Many lifetime extending strategies in the literature therefore strive to dynamically adjust the routing topology in order to relieve the burden on these sensors; see @cite_8 , @cite_14 , @cite_25 . Unfortunately, these techniques are only germane if multiple available paths exist between nodes in the network. In WSNs with low connectivity or where routing protocols are limited by other application-specific or topological factors, deployment based strategies may be more valuable.
|
{
"cite_N": [
"@cite_14",
"@cite_25",
"@cite_8"
],
"mid": [
"1965271273",
"2124415567",
"2141573624"
],
"abstract": [
"Performing tasks energy efficiently in a wireless sensor network (WSN) is a critical issue for the successful deployment and operation of such networks. Gathering data from all the sensors to a base station, especially with in-network aggregation, is an important problem that has received a lot of attention recently. The Maximum Lifetime Data Gathering with Aggregation (MLDA) problem deals with maximizing the system lifetime T so that we can perform T rounds of data gathering with in-network aggregation, given the initial available energy of the sensors. A solution of value T to the MLDA problem consists of a collection of aggregation trees together with the number of rounds each such tree should be used in order to achieve lifetime T. We describe a combinatorial iterative algorithm for finding an optimal continuous solution to the MLDA problem that consists of up to n-1 aggregation trees and achieves lifetime T°, which depends on the network topology and initial energy available at the sensors. We obtain an α-approximate optimal integral solution by simply rounding down the optimal continuous solution, where α = (T° − n + 1)/T°. Since in practice T° ≫ n, α ≈ 1. We get asymptotically optimal integral solutions to the MLDA problem whenever the optimal continuous solution is ω(n). Furthermore, we demonstrate the efficiency and effectiveness of the proposed algorithm via extensive experimental results.",
"Unbalanced energy consumption is an inherent problem in wireless sensor networks characterized by multihop routing and many-to-one traffic pattern, and this uneven energy dissipation can significantly reduce network lifetime. In this paper, we study the problem of maximizing network lifetime through balancing energy consumption for uniformly deployed data-gathering sensor networks. We formulate the energy consumption balancing problem as an optimal transmitting data distribution problem by combining the ideas of corona-based network division and mixed-routing strategy together with data aggregation. We first propose a localized zone-based routing scheme that guarantees balanced energy consumption among nodes within each corona. We then design an offline centralized algorithm with time complexity O(n) (n is the number of coronas) to solve the transmitting data distribution problem aimed at balancing energy consumption among nodes in different coronas. The approach for computing the optimal number of coronas in terms of maximizing network lifetime is also presented. Based on the mathematical model, an energy-balanced data gathering (EBDG) protocol is designed and the solution for extending EBDG to large-scale data-gathering sensor networks is also presented. Simulation results demonstrate that EBDG significantly outperforms conventional multihop transmission schemes, direct transmission schemes, and cluster-head rotation schemes in terms of network lifetime.",
"Wireless sensor networks (WSNs) require protocols that make judicious use of the limited energy capacity of the sensor nodes. In this paper, the potential performance improvement gained by balancing the traffic throughout the WSN is investigated. We show that sending the traffic generated by each sensor node through multiple paths, instead of a single path, allows significant energy conservation. A new analytical model for load-balanced systems is complemented by simulation to quantitatively evaluate the benefits of the proposed load-balancing technique. Specifically, we derive the set of paths to be used by each sensor node and the associated weights (i.e., the proportion of utilization) that maximize the network lifetime."
]
}
|
1301.4728
|
2950171160
|
We propose a novel relay augmentation strategy for extending the lifetime of a certain class of wireless sensor networks. In this class sensors are located at fixed and pre-determined positions and all communication takes place via multi-hop paths in a fixed routing tree rooted at the base station. It is assumed that no accumulation of data takes place along the communication paths and that there is no restriction on where additional relays may be located. Under these assumptions the optimal extension of network lifetime is modelled as the Euclidean @math -bottleneck Steiner tree problem. Only two approximation algorithms for this NP-hard problem exist in the literature: a minimum spanning tree heuristic (MSTH) with performance ratio 2, and a probabilistic 3-regular hypergraph heuristic (3RHH) with performance ratio @math . We present a new iterative heuristic that incorporates MSTH and show via simulation that our algorithm performs better than MSTH in extending lifetime, and outperforms 3RHH in terms of efficiency.
|
Often the most highly burdened sensors are within close proximity to the base station @cite_9 , hence the well-investigated objective of "doughnut" or "energy-hole" mitigation around the base station. Deploying additional relays close to the base station is one possible solution to the energy-hole problem, and this method has an analogue in density-varying random deployment strategies @cite_0 . This approach, however, is only appropriate where there is a significant accumulation of data close to the base station. Given the assumption of a uniform data transmission rate for the networks dealt with in the current paper, these "energy-hole algorithms" are not comparable to our algorithm.
|
{
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2167360551",
"2170106967"
],
"abstract": [
"In a sensor network, usually a large number of sensors transport data messages to a limited number of sinks. Due to this multipoint-to-point communications pattern in general homogeneous sensor networks, the closer a sensor to the sink, the quicker it will deplete its battery. This unbalanced energy depletion phenomenon has become the bottleneck problem to elongate the lifetime of sensor networks. In this paper, we consider the effects of joint relay node deployment and transmission power control on network lifetime. Contrary to the intuition the relay nodes considered are even simpler devices than the sensor nodes with limited capabilities. We show that the network lifetime can be extended significantly with the addition of relay nodes to the network. In addition, for the same expected network lifetime goal, the number of relay nodes required can be reduced by employing efficient transmission power control while leaving the network connectivity level unchanged. The solution suggests that it is sufficient to deploy relay nodes only with a specific probabilistic distribution rather than the specifying the exact places. Furthermore, the solution does not require any change on the protocols (such as routing) used in the network.",
"In a typical sensor network, nodes around the sink consume more energy than those further away. It is not unusual that limited energy resources available at the nodes around the sink become the bottleneck which confines the performance of the whole network. In this letter, we firstly present our considered bottleneck zone in a general sensor network scenario. Then, the effect of the bottleneck zone on network performance is investigated by deducing performance bounds imposed by the energy resources available inside the bottleneck zone. In this letter, both the performance bound in terms of network lifetime and the performance bound in terms of information collection are explored. Finally, the ways by which network deployment variables may affect the performance bounds are analyzed."
]
}
|
1301.4728
|
2950171160
|
We propose a novel relay augmentation strategy for extending the lifetime of a certain class of wireless sensor networks. In this class sensors are located at fixed and pre-determined positions and all communication takes place via multi-hop paths in a fixed routing tree rooted at the base station. It is assumed that no accumulation of data takes place along the communication paths and that there is no restriction on where additional relays may be located. Under these assumptions the optimal extension of network lifetime is modelled as the Euclidean @math -bottleneck Steiner tree problem. Only two approximation algorithms for this NP-hard problem exist in the literature: a minimum spanning tree heuristic (MSTH) with performance ratio 2, and a probabilistic 3-regular hypergraph heuristic (3RHH) with performance ratio @math . We present a new iterative heuristic that incorporates MSTH and show via simulation that our algorithm performs better than MSTH in extending lifetime, and outperforms 3RHH in terms of efficiency.
|
Augmenting a network by deploying additional (non-sensing) relays is a powerful and relatively inexpensive method of optimising many topology dependent WSN objectives. This concept is not new to the literature, and has been considered under various network models and objectives. Relay augmentation strategies exist that supplement the primary objective of lifetime extension with objectives related to coverage @cite_7 , connectivity @cite_6 and balanced traffic flow @cite_29 , @cite_1 .
|
{
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_6",
"@cite_7"
],
"mid": [
"2005747091",
"",
"2156919687",
"2165255169"
],
"abstract": [
"Energy consumption is a crucially important issue in battery-driven wireless sensor networks (WSNs). In most sensor networks, the sensors near the data collector (i.e. the sink) become drained more quickly than those elsewhere in the network since they are required to relay all of the data collected in the network to the sink. Therefore more balanced data paths to the sink should be established in order to extend the lifetime of the sensor network. Accordingly, a novel relay deployment scheme for WSNs based on the Voronoi diagram is proposed. The proposed scheme is applicable to both two-dimensional and three-dimensional network topologies and establishes effective routing paths that balance the traffic load within the sensor network and alleviate the burden on the sensors around the sink. Simulation results indicate that the number of relays deployed in the proposed scheme is similar to that deployed in the predetermined location scheme and is significantly less than that deployed in the minimum set cover scheme. Furthermore, the lifetime of the sensor network containing relay nodes deployed using the current scheme is longer than that achieved using either the predetermined location scheme or the minimum set cover scheme.",
"",
"In a heterogeneous wireless sensor network (WSN), relay nodes (RNs) are adopted to relay data packets from sensor nodes (SNs) to the base station (BS). The deployment of the RNs can have a significant impact on connectivity and lifetime of a WSN system. This paper studies the effects of random deployment strategies. We first discuss the biased energy consumption rate problem associated with uniform random deployment. This problem leads to insufficient energy utilization and shortened network lifetime. To overcome this problem, we propose two new random deployment strategies, namely, the lifetime-oriented deployment and hybrid deployment. The former solely aims at balancing the energy consumption rates of RNs across the network, thus extending the system lifetime. However, this deployment scheme may not provide sufficient connectivity to SNs when the given number of RNs is relatively small. The latter reconciles the concerns of connectivity and lifetime extension. Both single-hop and multihop communication models are considered in this paper. With a combination of theoretical analysis and simulated evaluation, this study explores the trade-off between connectivity and lifetime extension in the problem of RN deployment. It also provides a guideline for efficient deployment of RNs in a large-scale heterogeneous WSN.",
"To achieve better performance, we adopt a Heterogeneous Sensor Network (HSN) model. In many applications, the locations of some sensor nodes are controllable. In this paper, first we propose a novel density-varying deployment scheme for high-end sensors (H-sensors) in an HSN. The scheme solves the bottleneck problem in typical many-to-one sensor networks. We then study the optimal placement of H-sensors whose locations are controllable. The goal is to use the minimum number of H-sensors for ensuring successful data delivery, coverage and connectivity in a network for a given lifetime. We present an effective H-sensor placement scheme that can simultaneously achieve coverage, connectivity and data relay requirements while uses a small number of H-sensors. Both theoretical proofs and simulation results demonstrate that the proposed H-sensor placement scheme achieves very good performance."
]
}
|
1301.4728
|
2950171160
|
We propose a novel relay augmentation strategy for extending the lifetime of a certain class of wireless sensor networks. In this class sensors are located at fixed and pre-determined positions and all communication takes place via multi-hop paths in a fixed routing tree rooted at the base station. It is assumed that no accumulation of data takes place along the communication paths and that there is no restriction on where additional relays may be located. Under these assumptions the optimal extension of network lifetime is modelled as the Euclidean @math -bottleneck Steiner tree problem. Only two approximation algorithms for this NP-hard problem exist in the literature: a minimum spanning tree heuristic (MSTH) with performance ratio 2, and a probabilistic 3-regular hypergraph heuristic (3RHH) with performance ratio @math . We present a new iterative heuristic that incorporates MSTH and show via simulation that our algorithm performs better than MSTH in extending lifetime, and outperforms 3RHH in terms of efficiency.
|
As mentioned above, here we model optimal lifetime extension as a @math -bottleneck Steiner problem. This problem was introduced by Sarrafzadeh and Wong @cite_11 , and various authors have considered it in the context of facility location science @cite_26 , @cite_23 . The problem is NP-hard; in fact, unless P=NP, no polynomial-time algorithm exists for the problem in the Euclidean plane with a performance ratio less than @math , where this ratio is defined as the theoretically largest possible value attained when dividing the length of the bottleneck edge produced by a given polynomial-time algorithm by the length of the bottleneck edge in an optimal solution. Wang and Du, the authors who demonstrated this complexity bound, present in @cite_24 the first deterministic approximation algorithm for the @math -bottleneck problem in both the rectilinear and Euclidean planes. They also prove that the performance ratio of their algorithm, the aforementioned minimum spanning tree heuristic, is bounded above by @math for each of these metrics. The authors of @cite_2 describe a probabilistic @math -regular hypergraph heuristic ( @math RHH) with performance ratio @math which runs in @math time, where @math is the number of given terminals. Recently, @cite_22 and @cite_5 developed exact algorithms with exponential complexity.
|
{
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_11"
],
"mid": [
"1990630682",
"2144383545",
"2171603211",
"",
"2975802624",
"2073061830",
"2161071733"
],
"abstract": [
"This paper presents a new method of locating n new facilities among m destinations in accordance with the minimax criterion; that is, the facilities are located to minimize the maximum weighted distance in the system. Distances may be rectangular, Euclidean, or general (lp). The method involves the numerical integration of ordinary differential equations and is computationally superior to methods using nonlinear programming.",
"Given n points, called terminals, in the plane ℝ2 and a positive integer k, the bottleneck Steiner tree problem is to find k Steiner points from ℝ2 and a spanning tree on the n+k points that minimizes its longest edge length. Edge length is measured by an underlying distance function on ℝ2, usually, the Euclidean or the L1 metric. This problem is known to be NP-hard. In this paper, we study this problem in the Lp metric for any 1≤p≤∞, and aim to find an exact algorithm which is efficient for small fixed k. We present the first fixed-parameter tractable algorithm running in f(k)·n log² n time for the L1 and the L∞ metrics, and the first exact algorithm for the Lp metric for any fixed rational p with 1<p<∞ whose time complexity is f(k)·(n^k + n log n), where f(k) is a function dependent only on k. Note that prior to this paper there was no known exact algorithm even for the L2 metric.",
"In the design of wireless communication networks, due to a budget limit, suppose we could put totally n+k stations in the plane. However, n of them must be located at given points. Of course, one would like to have the distance between stations as small as possible. The problem is how to choose locations for other k stations to minimize the longest distance between stations. This problem is NP-hard. We show that if NP ≠ P, no polynomial-time approximation for the problem in the rectilinear plane has a performance ratio less than 2 and no polynomial-time approximation for the problem in the Euclidean plane has a performance ratio less than √2 and that there exists a polynomial-time approximation with performance ratio 2 for the problem in both the rectilinear plane and the Euclidean plane.",
"",
"We study variations of the Steiner tree problem. Let P = p1, p2, ..., pn be a set of n terminals in the Euclidean plane. For a positive integer k, the bottleneck Steiner tree problem (BSTP for short) is to find a Steiner tree with at most k Steiner points such that the length of the longest edge in the tree is minimized. For a positive constant R, the Steiner tree problem with minimum number of Steiner points (STP-MSP for short) asks for a Steiner tree such that each edge in the tree has length at most R and the number of Steiner points is minimized. In this paper, we give (1) a ratio-(√3 + ε) approximation algorithm for BSTP, where ε is an arbitrary positive number; (2) a ratio-3 approximation algorithm for STP-MSP with running time O(n³); (3) a ratio-5/2 approximation algorithm for STP-MSP.",
"We study the Euclidean bottleneck Steiner tree problem: given a set P of n points in the Euclidean plane and a positive integer k, find a Steiner tree with at most k Steiner points such that the length of the longest edge in the tree is minimized. This problem is known to be NP-hard even to approximate within ratio 2 and there was no known exact algorithm even for k=1 prior to this work. In this paper, we focus on finding exact solutions to the problem for a small constant k. Based on geometric properties of optimal location of Steiner points, we present an optimal Θ(n log n)-time exact algorithm for k=1 and an O(n^2)-time algorithm for k=2. Also, we present an optimal Θ(n log n)-time exact algorithm for any constant k for a special case where there is no edge between Steiner points.",
"A Steiner tree with maximum-weight edge minimized is called a bottleneck Steiner tree (BST). The authors propose a Θ(|ρ| log |ρ|) time algorithm for constructing a BST on a point set ρ, with points labeled as Steiner or demand; a lower bound, in the linear decision tree model, is also established. It is shown that if it is desired to minimize further the number of used Steiner points, then the problem becomes NP-complete. It is shown that when locations of Steiner points are not fixed the problem remains NP-complete; however, if the topology of the final tree is given, then the problem can be solved in Θ(|ρ| log |ρ|) time. The BST problem can be used, for example, in VLSI layout, communication network design, and (facility) location problems."
]
}
|
1301.3791
|
2949925098
|
Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability. This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance. We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2x on the repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.
|
Optimizing code designs for efficient repair is a topic that has recently attracted significant attention due to its relevance to distributed systems. There is a substantial volume of work and we only try to give a high-level overview here. The interested reader can refer to @cite_13 and references therein. The first important distinction in the literature is between functional and exact repair. Functional repair means that when a block is lost, a different block is created that maintains the @math fault tolerance of the code. The main problem with functional repair is that when a systematic block is lost, it will be replaced with a parity block. While global fault tolerance to @math erasures remains, reading a single block would now require access to @math blocks. While this could be useful for archival systems with rare reads, it is not practical for our workloads. Therefore, we are interested only in codes with exact repair so that we can maintain the code systematic.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2058863419"
],
"abstract": [
"Distributed storage systems often introduce redundancy to increase reliability. When coding is used, the repair problem arises: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. This amounts to a partial recovery of the code, whereas conventional erasure coding focuses on the complete recovery of the information from a subset of encoded packets. The consideration of the repair network traffic gives rise to new design challenges. Recently, network coding techniques have been instrumental in addressing these challenges, establishing that maintenance bandwidth can be reduced by orders of magnitude compared to standard erasure codes. This paper provides an overview of the research results on this topic."
]
}
|
1301.3791
|
2949925098
|
Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability. This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance. We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2x on the repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.
|
Dimakis @cite_0 showed that it is possible to repair codes with network traffic smaller than the naive scheme that reads and transfers @math blocks. The first regenerating codes @cite_0 provided only functional repair and the existence of exact regenerating codes matching the information theoretic bounds remained open.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2105185344"
],
"abstract": [
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff."
]
}
|
1301.3791
|
2949925098
|
Distributed storage systems for large clusters typically use replication to provide reliability. Recently, erasure codes have been used to reduce the large storage overhead of three-replicated systems. Reed-Solomon codes are the standard design choice and their high repair cost is often considered an unavoidable price to pay for high storage efficiency and high reliability. This paper shows how to overcome this limitation. We present a novel family of erasure codes that are efficiently repairable and offer higher reliability compared to Reed-Solomon codes. We show analytically that our codes are optimal on a recently identified tradeoff between locality and minimum distance. We implement our new codes in Hadoop HDFS and compare to a currently deployed HDFS module that uses Reed-Solomon codes. Our modified HDFS implementation shows a reduction of approximately 2x on the repair disk I/O and repair network traffic. The disadvantage of the new coding scheme is that it requires 14% more storage compared to Reed-Solomon codes, an overhead shown to be information theoretically optimal to obtain locality. Because the new codes repair failures faster, this provides higher reliability, which is orders of magnitude higher compared to replication.
|
A substantial volume of work (e.g. @cite_13 @cite_3 @cite_2 and references therein) subsequently showed that exact repair is possible, matching the information theoretic bound of @cite_0 . The code constructions are separated into exact codes for low rates @math and high rates @math . For rates below @math (storage overheads above @math ) beautiful combinatorial constructions of exact regenerating codes were recently discovered @cite_15 @cite_14 . Since replication has a storage overhead of three, for our applications storage overheads around @math are of most interest, which ruled out the use of low rate exact regenerating codes.
|
{
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_13"
],
"mid": [
"",
"",
"2105185344",
"2141222886",
"2150777202",
"2058863419"
],
"abstract": [
"",
"",
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.",
"MDS array codes are widely used in storage systems to protect data against erasures. We address the rebuilding ratio problem, namely, in the case of erasures, what is the fraction of the remaining information that needs to be accessed in order to rebuild exactly the lost information? It is clear that when the number of erasures equals the maximum number of erasures that an MDS code can correct then the rebuilding ratio is 1 (access all the remaining information). However, the interesting (and more practical) case is when the number of erasures is smaller than the erasure correcting capability of the code. For example, consider an MDS code that can correct two erasures: What is the smallest amount of information that one needs to access in order to correct a single erasure? Previous work showed that the rebuilding ratio is bounded between 1/2 and 3/4; however, the exact value was left as an open problem. In this paper, we solve this open problem and prove that for the case of a single erasure with a 2-erasure correcting code, the rebuilding ratio is 1/2. In general, we construct a new family of r-erasure correcting MDS array codes that has optimal rebuilding ratio of 1/r in the case of a single erasure. Our array codes have efficient encoding and decoding algorithms (for the case r = 2 they use a finite field of size 3) and an optimal update property.",
"Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n=d+1 . In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k-2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network, to be chosen independent of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n=d+1, k, d ≥ 2k-1].",
"Distributed storage systems often introduce redundancy to increase reliability. When coding is used, the repair problem arises: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. This amounts to a partial recovery of the code, whereas conventional erasure coding focuses on the complete recovery of the information from a subset of encoded packets. The consideration of the repair network traffic gives rise to new design challenges. Recently, network coding techniques have been instrumental in addressing these challenges, establishing that maintenance bandwidth can be reduced by orders of magnitude compared to standard erasure codes. This paper provides an overview of the research results on this topic."
]
}
|
1301.3527
|
1594793942
|
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L @math norm, however its optimization is NP-hard. Mixed norms, such as L @math L @math measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L @math norm. However, present algorithms designed for optimizing the mixed norm L @math L @math are slow and other formulations for sparse NMF have been proposed such as those based on L @math and L @math norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets.
|
Other SNMF formulations have been considered by Hoyer2002 @cite_4 , Morupl0 @cite_5 , Kim2007 @cite_21 , Pascual06 @cite_6 (nsNMF) and Peharz2011 @cite_17 . SNMF formulations using similar sparsity measures as used in this paper have been considered for applications in speech and audio recordings @cite_15 @cite_22 .
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_21",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"",
"2150415460",
"2165685007",
"2146913572",
"2170293972",
"2147437692",
""
],
"abstract": [
"",
"An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an input signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain. Each sound source, in turn, is modeled as a sum of one or more components. The parameters of the components are estimated by minimizing the reconstruction error between the input spectrogram and the model, while restricting the component spectrograms to be nonnegative and favoring components whose gains are slowly varying and sparse. Temporal continuity is favored by using a cost term which is the sum of squared differences between the gains in adjacent frames, and sparseness is favored by penalizing nonzero gains. The proposed iterative estimation algorithm is initialized with random values, and the gains and the spectra are then alternatively updated using multiplicative update rules until the values converge. Simulation experiments were carried out using generated mixtures of pitched musical instrument samples and drum sounds. The performance of the proposed method was compared with independent subspace analysis and basic nonnegative matrix factorization, which are based on the same linear model. According to these simulations, the proposed method enables a better separation quality than the previous algorithms. Especially, the temporal continuity criterion improved the detection of pitched musical sounds. The sparseness criterion did not produce significant improvements",
"Motivation: Many practical pattern recognition problems require non-negativity constraints. For example, pixels in digital images and chemical concentrations in bioinformatics are non-negative. Sparse non-negative matrix factorizations (NMFs) are useful when the degree of sparseness in the non-negative basis matrix or the non-negative coefficient matrix in an NMF needs to be controlled in approximating high-dimensional data in a lower dimensional space. Results: In this article, we introduce a novel formulation of sparse NMF and show how the new formulation leads to a convergent sparse NMF algorithm via alternating non-negativity-constrained least squares. We apply our sparse NMF algorithm to cancer-class discovery and gene expression data analysis and offer biological analysis of the results obtained. Our experimental results illustrate that the proposed sparse NMF algorithm often achieves better clustering performance with shorter computing time compared to other existing NMF algorithms. Availability: The software is available as supplementary material. Contact:hskim@cc.gatech.edu, hpark@acc.gatech.edu Supplementary information: Supplementary data are available at Bioinformatics online.",
"We propose a novel nonnegative matrix factorization model that aims at finding localized, part-based, representations of nonnegative multivariate data items. Unlike the classical nonnegative matrix factorization (NMF) technique, this new model, denoted \"nonsmooth nonnegative matrix factorization\" (nsNMF), corresponds to the optimization of an unambiguous cost function designed to explicitly represent sparseness, in the form of nonsmoothness, which is controlled by a single parameter. In general, this method produces a set of basis and encoding vectors that are not only capable of representing the original data, but they also extract highly focalized patterns, which generally lend themselves to improved interpretability. The properties of this new method are illustrated with several data sets. Comparisons to previously published methods show that the new nsNMF method has some advantages in keeping faithfulness to the data in the achieving a high degree of sparseness for both the estimated basis and the encoding vectors and in better interpretability of the factors.",
"Non-negative matrix factorization (NMF), i.e. V ap WH where both V, W and H are non-negative has become a widely used blind source separation technique due to its part based representation. The NMF decomposition is not in general unique and a part based representation not guaranteed. However, imposing sparseness both improves the uniqueness of the decomposition and favors part based representation. Sparseness in the form of attaining as many zero elements in the solution as possible is appealing from a conceptional point of view and corresponds to minimizing reconstruction error with an L0 norm constraint. In general, solving for a given L0 norm is an NP hard problem thus convex relaxation to regularization by the L1 norm is often considered, i.e., minimizing (1 2||V - WH||F 2 + lambda||H||1).An open problem is to control the degree of sparsity lambda imposed. We here demonstrate that a full regularization path for the L1 norm regularized least squares NMF for fixed W can be calculated at the cost of an ordinary least squares solution based on a modification of the least angle regression and selection (LARS) algorithm forming a non-negativity constrained LARS (NLARS). With the full regularization path, the L1 regularization strength lambda that best approximates a given L0 can be directly accessed and in effect used to control the sparsity of H. The MATLAB code for the NLARS algorithm is available for download.",
"In this paper, we propose a semi-supervised algorithm based on sparse non-negative matrix factorization (NMF) to improve separation of speech from background music in monaural signals. In our approach, fixed speech basis vectors are obtained from training data whereas music bases are estimated on-the-fly to cope with spectral variability while preserving small NMF dimensionality for decreased computation effort. In a large-scale experimental evaluation with 168 speakers from the TIMIT database, we compare the semi-supervised method to supervised NMF with an explicit background music model. Our results reveal that the semi-supervised method outperforms supervised NMF at low speech-to-music ratios, and that sparsity constraints on the music spectra to enforce harmonicity can improve separation performance.",
""
]
}
|
1301.3527
|
1594793942
|
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L @math norm, however its optimization is NP-hard. Mixed norms, such as L @math L @math measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L @math norm. However, present algorithms designed for optimizing the mixed norm L @math L @math are slow and other formulations for sparse NMF have been proposed such as those based on L @math and L @math norms. Our proposed algorithm allows us to solve the mixed norm sparsity constraints while not sacrificing computation time. We present experimental evidence on real-world datasets that shows our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets.
|
We note that our sparsity measure has all the desirable properties, extensively discussed by Hurley2009 @cite_27 , except for one (``cloning''). The cloning property is satisfied when two vectors of the same sparsity, when concatenated, maintain their sparsity value. Dimensions in our optimization problem are fixed, and thus violating the cloning property is not an issue. Compare this with the L @math norm, which satisfies only one of these properties (namely ``rising tide''). Rising tide is the property whereby adding a constant to the elements of a vector decreases the sparsity of the vector. Nevertheless, the measure used in Kim2007 is based on the L @math norm. The properties satisfied by the measure in Pascual06 are unclear because of the implicit nature of the sparsity formulation.
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2111854888"
],
"abstract": [
"Sparsity of representations of signals has been shown to be a key concept of fundamental importance in fields such as blind source separation, compression, sampling and signal analysis. The aim of this paper is to compare several commonly-used sparsity measures based on intuitive attributes. Intuitively, a sparse representation is one in which a small number of coefficients contain a large proportion of the energy. In this paper, six properties are discussed: (Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates, and Babies), each of which a sparsity measure should have. The main contributions of this paper are the proofs and the associated summary table which classify commonly-used sparsity measures based on whether or not they satisfy these six propositions. Only two of these measures satisfy all six: the pq-mean with p les 1, q > 1 and the Gini index."
]
}
|
1301.3666
|
2950276680
|
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
|
The work most similar to ours is that by @cite_0 . They map fMRI scans of people thinking about certain words into a space of manually designed features and then classify using these features. They are able to predict semantic features even for words for which they have not seen scans and experiment with differentiating between several zero-shot classes. However, they do not classify new test instances into both seen and unseen classes. We extend their approach to allow for this setup using outlier detection.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2150295085"
],
"abstract": [
"We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words."
]
}
|
1301.3666
|
2950276680
|
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
|
@cite_6 describe the unseen zero-shot classes by a ``canonical'' example or use ground truth human labeling of attributes.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"27961112"
],
"abstract": [
"We introduce the problem of zero-data learning, where a model must generalize to classes or tasks for which no training data are available and only a description of the classes or tasks are provided. Zero-data learning is useful for problems where the set of classes to distinguish or tasks to solve is very large and is not entirely covered by the training data. The main contributions of this work lie in the presentation of a general formalization of zero-data learning, in an experimental analysis of its properties and in empirical evidence showing that generalization is possible and significant in this context. The experimental work of this paper addresses two classification problems of character recognition and a multitask ranking problem in the context of drug discovery. Finally, we conclude by discussing how this new framework could lead to a novel perspective on how to extend machine learning towards AI, where an agent can be given a specification for a learning problem before attempting to solve it (with very few or even zero examples)."
]
}
|
1301.3666
|
2950276680
|
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
|
and @cite_12 @cite_23 were two of the first to use well-designed visual attributes of unseen classes to classify them. This is different to our setting since we only have distributional features of words learned from unsupervised, non-parallel corpora and can classify between categories that have thousands or zero training images. @cite_7 learn when to transfer knowledge from one category to another for each instance.
|
{
"cite_N": [
"@cite_23",
"@cite_7",
"@cite_12"
],
"mid": [
"2098411764",
"2163847983",
""
],
"abstract": [
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"In recent years, knowledge transfer algorithms have become one of most the active research areas in learning visual concepts. Most of the existing learning algorithms focuses on leveraging the knowledge transfer process which is specific to a given category. However, in many cases, such a process may not be very effective when a particular target category has very few samples. In such cases, it is interesting to examine, whether it is feasible to use cross-category knowledge for improving the learning process by exploring the knowledge in correlated categories. Such a task can be quite challenging due to variations in semantic similarities and differences between categories, which could either help or hinder the cross-category learning process. In order to address this challenge, we develop a cross-category label propagation algorithm, which can directly propagate the inter-category knowledge at instance level between the source and the target categories. Furthermore, this algorithm can automatically detect conditions under which the transfer process can be detrimental to the learning process. This provides us a way to know when the transfer of cross-category knowledge is both useful and desirable. We present experimental results on real image and video data sets in order to demonstrate the effectiveness of our approach.",
""
]
}
|
1301.3666
|
2950276680
|
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
|
Domain adaptation is useful in situations in which there is a lot of training data in one domain but little to none in another. For instance, in sentiment analysis one could train a classifier for movie reviews and then adapt from that domain to book reviews @cite_10 @cite_14 . While related, this line of work is different since there is data for each domain, but the features may differ between domains.
|
{
"cite_N": [
"@cite_14",
"@cite_10"
],
"mid": [
"22861983",
"2163302275"
],
"abstract": [
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
"Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30 over the original SCL algorithm and 46 over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains."
]
}
|
1301.3666
|
2950276680
|
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
|
Multimodal embeddings relate information from multiple sources such as sound and video @cite_17 or images and text. @cite_24 project words and image regions into a common space using kernelized canonical correlation analysis to obtain state of the art performance in annotation and segmentation. Similar to our work, they use unsupervised large text corpora to learn semantic word representations. However, their model does require a small amount of training data for each class. Among other recent work is that by Srivastava and Salakhutdinov @cite_4 , who developed multimodal Deep Boltzmann Machines. Similar to their work, we use techniques from the broad field of deep learning to represent images and words.
|
{
"cite_N": [
"@cite_24",
"@cite_4",
"@cite_17"
],
"mid": [
"2062955551",
"154472438",
"2184188583"
],
"abstract": [
"We propose a semi-supervised model which segments and annotates images using very few labeled images and a large unaligned text corpus to relate image regions to text labels. Given photos of a sports event, all that is necessary to provide a pixel-level labeling of objects and background is a set of newspaper articles about this sport and one to five labeled images. Our model is motivated by the observation that words in text corpora share certain context and feature similarities with visual objects. We describe images using visual words, a new region-based representation. The proposed model is based on kernelized canonical correlation analysis which finds a mapping between visual and textual words by projecting them into a latent meaning space. Kernels are derived from context and adjective features inside the respective visual and textual domains. We apply our method to a challenging dataset and rely on articles of the New York Times for textual features. Our model outperforms the state-of-the-art in annotation. In segmentation it compares favorably with other methods that use significantly more labeled training data.",
"Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time.",
"Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning."
]
}
|
1301.3666
|
2950276680
|
This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
|
Some work has been done on multimodal distributional methods @cite_9 @cite_2 . Most recently, @cite_3 worked on perceptually grounding word meaning and showed that joint models are better able to predict the color of concrete objects.
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_2"
],
"mid": [
"155596317",
"2137735870",
"2126274282"
],
"abstract": [
"The question of how meaning might be acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing semantic representation models are primarily amodal based on information provided by the linguistic input despite ample evidence indicating that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web and develop a model of multimodal meaning representation which is based on the linguistic and visual context. Experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account.",
"Our research aims at building computational models of word meaning that are perceptually grounded. Using computer vision techniques, we build visual and multimodal distributional models and compare them to standard textual models. Our results show that, while visual models with state-of-the-art computer vision techniques perform worse than textual models in general tasks (accounting for semantic relatedness), they are as good or better models of the meaning of words with visual correlates such as color terms, even in a nontrivial task that involves nonliteral uses of such words. Moreover, we show that visual and textual information are tapping on different aspects of meaning, and indeed combining them in multimodal models often improves performance.",
"Traditional approaches to semantic relatedness are often restricted to text-based methods, which typically disregard other multimodal knowledge sources. In this paper, we propose a novel image-based metric to estimate the relatedness of words, and demonstrate the promise of this method through comparative evaluations on three standard datasets. We also show that a hybrid image-text approach can lead to improvements in word relatedness, confirming the applicability of visual cues as a possible orthogonal information source."
]
}
|
1301.3600
|
2016507978
|
We consider a system governed by the wave equation with index of refraction @math , taken to be variable within a bounded region @math and constant in @math . The solution of the time-dependent wave equation with initial data, which is localized in @math , spreads and decays with advancing time. This rate of decay can be measured (for @math , and more generally, @math odd) in terms of the eigenvalues of the scattering resonance problem, a non--self-adjoint eigenvalue problem governing the time-harmonic solutions of the wave (Helmholtz) equation which are outgoing at @math . Specifically, the rate of energy escape from @math is governed by the complex scattering eigenfrequency, which is closest to the real axis. We study the structural design problem: Find a refractive index profile @math within an admissible class which has a scattering frequency with minimal imaginary part. The admissible class is defined in terms of the compact suppo...
|
Results on the existence of optimal scattering resonances and general bounds on the imaginary parts of scattering resonances for Schrödinger operators can be found in @cite_21 @cite_5 @cite_37. Very recently, optimal designs have been considered in @cite_26. Our results for the Helmholtz equation make use of some of the arguments introduced in these papers.
|
{
"cite_N": [
"@cite_5",
"@cite_37",
"@cite_21",
"@cite_26"
],
"mid": [
"2161984222",
"2052358828",
"1996291660",
"1853774868"
],
"abstract": [
"We consider quantum mechanical potentials consisting of a fixed background plus an additional piece constrained to have finite height and to be supported in a given finite region in dimension d≤3. We characterize the potentials of this class that produce the narrowest resonances",
"",
"Lower bounds are derived for the magnitude of the imaginary parts of the resonance eigenvalues of a Schrodinger operator",
"The paper is devoted to optimization of resonances associated with 1-D wave equations in inhomogeneous media. The medium's structure is represented by a nonnegative function B. The problem is to design for a given @math a medium that generates a resonance on the line @math with a minimal possible modulus of the imaginary part. We consider an admissible family of mediums that arises in a problem of optimal design for photonic crystals. This admissible family is defined by the constraints @math with certain constants @math . The paper gives an accurate definition of optimal structures that ensures their existence. We prove that optimal structures are piecewise constant functions taking only two extreme possible values @math and @math . This result explains an effect recently observed in numerical experiments. Then we show that intervals of constancy of an optimal structure are tied to the phase of the corresponding resonant mode and write this connection as a nonlinear eigenvalue problem."
]
}
|
1301.3600
|
2016507978
|
We consider a system governed by the wave equation with index of refraction @math , taken to be variable within a bounded region @math and constant in @math . The solution of the time-dependent wave equation with initial data, which is localized in @math , spreads and decays with advancing time. This rate of decay can be measured (for @math , and more generally, @math odd) in terms of the eigenvalues of the scattering resonance problem, a non--self-adjoint eigenvalue problem governing the time-harmonic solutions of the wave (Helmholtz) equation which are outgoing at @math . Specifically, the rate of energy escape from @math is governed by the complex scattering eigenfrequency, which is closest to the real axis. We study the structural design problem: Find a refractive index profile @math within an admissible class which has a scattering frequency with minimal imaginary part. The admissible class is defined in terms of the compact suppo...
|
The problem of maximizing the lifetime of a state trapped within a leaky cavity can be framed in several ways. The figure of merit can be taken to be the minimization of energy flux through the boundary @cite_15 or a measure of mode localization @cite_13 @cite_30. In @cite_27, the problem of minimizing @math for a chosen resonance was investigated computationally in both one and two dimensions. The 1-d problem considered here was also studied computationally in @cite_12. That work focuses on optimizing @math so as to minimize @math, where @math satisfies the outgoing solutions of the equation @math. In particular, the variations @math and @math are formally computed. In @cite_45, transfer matrix methods were used to design low-loss 2D resonators with radial symmetry. In each of these papers, gradient-based optimization methods were used to solve the optimization problem.
|
{
"cite_N": [
"@cite_30",
"@cite_27",
"@cite_45",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"2007031422",
"2084295936",
"1971261253",
"2003171465",
"2041221658",
"2104763307"
],
"abstract": [
"We formulate the problem of designing the low-loss cavity for the International Linear Collider (ILC) as an electromagnetic shape optimization problem involving a Maxwell eigenvalue problem. The objective is to maximize the stored energy of a trapped mode in the end cell while maintaining a specified frequency corresponding to the accelerating mode. A continuous adjoint method is presented for computation of the design gradient of the objective and constraint. The gradients are used within a nonlinear optimization scheme to compute the optimal shape for a simplified model of the ILC in a small multiple of the cost of solving the Maxwell eigenvalue problem.",
"Abstract We consider resonance phenomena for the scalar wave equation in an inhomogeneous medium. Resonance is a solution to the wave equation which is spatially localized while its time dependence is harmonic except for decay due to radiation. The decay rate, which is inversely proportional to the quality factor, depends on the material properties of the medium. In this work, the problem of designing a resonator which has high quality factor (low loss) is considered. The design variable is the index of refraction of the medium. High quality resonators are desirable in a variety of applications, including photonic band gap devices. Finding resonance in a linear wave equation with radiation boundary condition involves solving a nonlinear eigenvalue problem. The magnitude of the ratio between real and imaginary part of the eigenvalue is proportional to the quality factor Q . The optimization we perform is finding a structure which possesses an eigenvalue with largest possible Q . We present a numerical approach for solving this problem. The method consists of first finding a resonance eigenvalue and eigenfunction for a non-optimal structure. The gradient of Q with respect to index of refraction at that state is calculated. Ascent steps are taken in order to increase the quality factor Q . We demonstrate how this approach can be implemented and present numerical examples of high Q structures.",
"Circular resonators are fundamentally interesting elements that are essential for research involving highly confined fields and strong photon-atom interactions such as cavity QED, as well as for practical applications in optical communication systems and biochemical sensing. The important characteristics of a ring resonator are the Q-factor, the free spectral range (FSR) and the modal volume, where the last two are primarily determined by the resonator dimensions. The Total-Internal-Reflection (TIR) mechanism employed in \"conventional\" resonators couples between these characteristics and limits the ability to realize compact devices with large FSR, small modal volume and high Q. Recently, we proposed and analyzed a new class of a resonator in an annular geometry that is based on a single defect surrounded by radial Bragg reflectors on both sides. The radial Bragg confinement breaks the link between the characteristics of the mode and paves a new way for the realization of compact and low loss resonators. Such properties as well as the unique mode profile of the ABRs make this class of devices an excellent tool for ultra-sensitive biochemical detection as well as for studies in nonlinear optics and cavity QED.",
"Variational methods are applied to the design of a two-dimensional lossless photonic crystal slab to optimize resonant scattering phenomena. The method is based on varying properties of the transmission coefficient that are connected to resonant behavior. Numerical studies are based on boundary-integral methods for crystals consisting of multiple scatterers. We present an example in which we modify a photonic crystal consisting of an array of dielectric rods in air so that a weak transmission anomaly is transformed into a sharp resonance.",
"The problem of creating eigenfunctions which are localized arises in the study of photonic bandgap structures. A model problem, that of finding material inhomogeneity in a domain so that one of its Dirichlet eigenfunctions is localized, is considered in this work. The most difficult aspect, that of formulating the problem, is described, and well-posed variational problems are given. A computational approach, based on gradient descent with projection and trajectory continuation, is devised to solve the optimization problem. Numerical examples are provided which demonstrate the capability of the computational method.",
"The increasing use of micro- and nano-scale components in optical, electrical, and mechanical systems makes the understanding of loss mechanisms and their quantification issues of fundamental importance. In many situations, performance-limiting loss is due to scattering and radiation of waves into the surrounding structure. In this paper, we study the problem of systematically improving a structure by altering its design so as to decrease the loss. We use sensitivity analysis and local gradient optimization, applied to the scattering resonance problem, to reduce the loss within the class of piecewise constant structures. For a class of optimization problems where the material parameters are constrained by upper and lower bounds, it is observed that an optimal structure is piecewise constant with values achieving the bounds."
]
}
|
1301.3600
|
2016507978
|
We consider a system governed by the wave equation with index of refraction @math , taken to be variable within a bounded region @math and constant in @math . The solution of the time-dependent wave equation with initial data, which is localized in @math , spreads and decays with advancing time. This rate of decay can be measured (for @math , and more generally, @math odd) in terms of the eigenvalues of the scattering resonance problem, a non--self-adjoint eigenvalue problem governing the time-harmonic solutions of the wave (Helmholtz) equation which are outgoing at @math . Specifically, the rate of energy escape from @math is governed by the complex scattering eigenfrequency, which is closest to the real axis. We study the structural design problem: Find a refractive index profile @math within an admissible class which has a scattering frequency with minimal imaginary part. The admissible class is defined in terms of the compact suppo...
|
Genetic algorithms have also been employed to minimize energy flux through the boundary @cite_3 @cite_24. In @cite_1 @cite_32 @cite_43 the "inverse method" is employed, where a desired mode shape is chosen and the material properties that produce that mode are then found algebraically. In @cite_4, the time-dependent problem is solved to steady state using a finite-difference method with perfectly matched layers to approximate the outgoing boundary conditions. The design problem is solved using a Nelder-Mead method.
|
{
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_43"
],
"mid": [
"2066805669",
"2025497633",
"1992599567",
"2070321614",
"2063873318",
"2079928172"
],
"abstract": [
"Optimization of a truncated, dielectric photonic crystal cavity leads to configurations that are far from truncated crystal cavities, and which have significantly better radiation confinement. Starting from a two-dimensional truncated photonic crystal cavity with optimal Q-factor, moving the rods from the lattice positions can increase the Q-factor by orders of magnitude, e.g., from 130 to 11 000 for a cavity constructed from 18 rods. In the process, parity symmetry breaking occurs. Achieving the same Q-factor with a regular lattice requires 60 rods. Therefore, using optimized irregular structures for photonic cavities can greatly reduce material requirements and device size.",
"We describe a general recipe for designing high-quality factor (Q) photonic crystal cavities with small mode volumes. We first derive a simple expression for out-of-plane losses in terms of the k-space distribution of the cavity mode. Using this, we select a field that will result in a high Q. We then derive an analytical relation between the cavity field and the dielectric constant along a high symmetry direction, and use it to confine our desired mode. By employing this inverse problem approach, we are able to design photonic crystal cavities with Q > 4 ∙ 106 and mode volumes V (λ n)3. Our approach completely eliminates parameter space searches in photonic crystal cavity design, and allows rapid optimization of a large range of photonic crystal cavities. Finally, we study the limit of the out-of-plane cavity Q and mode volume ratio.",
"Photonic band gap (PBG) materials are attractive for cavity QED experiments because they provide extremely small mode volumes and are monolithic, integratable structures. As such, PBG cavities are a promising alternative to Fabry-Perot resonators. However, the cavity requirements imposed by QED experiments, such as the need for high Q (low cavity damping) and small mode volumes, present significant design challenges for photonic band gap materials. Here, we pose the PBG design problem as a mathematical inversion and provide an analytical solution for a two-dimensional (2D) crystal. We then address a planar (2D crystal with finite thickness) structure using numerical techniques.",
"We simulate an evolutionary process in the lab for designing a novel high confinement photonic structure, starting with a set of completely random patterns, with no insight on the initial geometrical pattern. We show a spontaneous emergence of periodical patterns as well as previously unseen high confinement subwavelength bowtie regions. The evolved structure has a Q of 300 and an ultrasmall modal volume of 0.112(λ/2n)³. The emergence of the periodic patterns in the structure indicates that periodicity is a principal condition for effective control of the distribution of light. Photonic structures consisting of periodic patterns of high and low index materials can alter the distribution of an electromagnetic field in space and frequency [1,2]. Applications include light emitters, modulators, switches, etc. [3‐7]. Periodic photonic structures have traditionally been hand designed with some insight from the extensive research of crystalline atomic lattice structures, where an analogy between electronic functions in crystalline structures and waves in periodic media with different dielectric functions is drawn. Based on these designs photonic structures that confine light, enhance and inhibit its propagation in specific directions have been demonstrated [8‐11]. It is not clear, however, if the periodicity of photonic structures is a necessary condition for controlling the distribution of light. This question is especially relevant since on one hand localization has been demonstrated in random media [12,13] while on the other hand, recent discoveries of periodic photonic structure in biology [14,15] indicate that viable patterns can emerge through blind natural selection, suggesting that the periodicity of the structures is a principal condition for effective light manipulation. 
In order to address this question, we simulate an evolutionary process in the lab for designing novel photonic structures, starting with a set of completely random patterns, with no information on the initial geometrical pattern.",
"We propose a novel geometry in a silicon planar resonator with an ultra-small modal volume of 0.01(λ/2n)³. The geometry induces strong electric field discontinuities to decrease the modal volume of the cavity below 1(λ/2n)³. The proposed structure and other common resonators such as 1D and 2D photonic crystal resonators are compared for tradeoffs in confinement and quality factors.",
"Most current methods for the engineering of photonic crystal (PhC) cavities rely on cumbersome, computationally demanding trial-and-error procedures. In the present work, we take a different approach to the problem of cavity design, by seeking to establish a direct, semianalytic relationship between the target electromagnetic field distribution and the dielectric constant of the PhC structure supporting it. We find that such a relationship can be derived by expanding the modes of L-N-type cavities as a linear combination of the one-dimensional (1D) Bloch eigenmodes of a PhC W1 waveguide. Thanks to this expansion, we can also ascertain the presence of a well-defined 1D character in the modes of relatively short cavities (e.g., L9-15), thus confirming recent theoretical predictions and experimental findings. Finally, we test our method through the successful design of a cavity supporting a mode with Gaussian envelope function and ultralow radiative losses (quality factor of 17.5 x 10(6))."
]
}
|
1301.3600
|
2016507978
|
We consider a system governed by the wave equation with index of refraction @math , taken to be variable within a bounded region @math and constant in @math . The solution of the time-dependent wave equation with initial data, which is localized in @math , spreads and decays with advancing time. This rate of decay can be measured (for @math , and more generally, @math odd) in terms of the eigenvalues of the scattering resonance problem, a non--self-adjoint eigenvalue problem governing the time-harmonic solutions of the wave (Helmholtz) equation which are outgoing at @math . Specifically, the rate of energy escape from @math is governed by the complex scattering eigenfrequency, which is closest to the real axis. We study the structural design problem: Find a refractive index profile @math within an admissible class which has a scattering frequency with minimal imaginary part. The admissible class is defined in terms of the compact suppo...
|
An important, related class of problems is to find photonic structures with large spectral band gaps. For the one-dimensional case, see the further discussion in Appendix . Structures with optimally large band gaps have been proven to exist @cite_28 @cite_41 and numerical methods have been applied to finding them @cite_46 @cite_7 @cite_17 . In @cite_34 , topology optimization was used to find photonic crystals with optimally large bandgaps and also which optimally damp or guide waves. In @cite_35 , properties of photonic crystals with optimally large bandgaps are investigated.
|
{
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_28",
"@cite_41",
"@cite_46",
"@cite_34",
"@cite_17"
],
"mid": [
"1987408906",
"2600205728",
"2043362754",
"2031249720",
"2067494951",
"2007371871",
"2156224799"
],
"abstract": [
"Photonic crystals can be designed to control and confine light. Since the introduction of the concept by Yablonovitch and John two decades ago, there has been a quest for the optimal structure, i.e., the periodic arrangement of dielectric and air that maximizes the photonic band gap. Based on numerical optimization studies, we have discovered some surprisingly simple geometric properties of optimal planar band gap structures. We conjecture that optimal structures for gaps between bands n and n + 1 correspond to n elliptic rods with centers defined by the generators of an optimal centroidal Voronoi tessellation (transverse magnetic polarization) and to the walls of this tessellation (transverse electric polarization).",
"This paper provides a review on the optimal design of photonic bandgap structures by inverse problem techniques. An overview of inverse problems techniques is given, with a special focus on topology design methods. A review of first applications of inverse problems techniques to photonic bandgap structures and waveguides is given, as well as some model problems, which provide a deeper insight into the structure of the optimal design problems.",
"Photonic crystals are periodic structures composed of dielectric materials and designed to exhibit band gaps, i.e., ranges of frequencies in which electromagnetic waves cannot propagate, or other interesting spectral behavior. Structures with large band gaps are of great interest for many important applications. In this paper, the problem of designing structures that exhibit maximal band gaps is considered. Admissible structures are constrained to be composed of \"mixtures\" of two given dielectric materials. The optimal design problem is formulated, existence of a solution is proved, a simple optimization algorithm is described, and several numerical examples are presented.",
"Abstract Periodic media are routinely used in optical devices and, in particular, photonic crystals to create spectral gaps, prohibiting the propagation of waves with certain temporal frequencies. In one dimension, Bragg structures, also called quarter-wave stacks, are frequently used because they are relatively easy to manufacture and the spectrum exhibits large spectral gaps at explicitly computable frequencies. In this short work, we use variational methods to demonstrate that within an admissible class of pointwise-bounded, periodic media, the Bragg structure uniquely maximizes the first spectral gap-to-midgap ratio.",
"In the study of photonic crystals, the question arises naturally: Which crystals produce the largest band gaps? This question is investigated by means of an optimization-based evolution algorithm which, given two dielectric materials, seeks to produce a material distribution within the fundamental cell which produces a maximal band gap at a given point in the spectrum. The case of H-polarization in two dimensions is considered. Several numerical examples are presented.",
"Phononic band–gap materials prevent elastic waves in certain frequency ranges from propagating, and they may therefore be used to generate frequency filters, as beam splitters, as sound or vibration protection devices, or as waveguides. In this work we show how topology optimization can be used to design and optimize periodic materials and structures exhibiting phononic band gaps. Firstly, we optimize infinitely periodic band–gap materials by maximizing the relative size of the band gaps. Then, finite structures subjected to periodic loading are optimized in order to either minimize the structural response along boundaries (wave damping) or maximize the response at certain boundary locations (waveguiding).",
"The optimal design of photonic band gaps for two-dimensional square lattices is considered. We use the level set method to represent the interface between two materials with two different dielectric constants. The interface is moved by a generalized gradient ascent method. The biggest gap of GaAs in air that we found is 0.4418 for TM (transverse magnetic field) and 0.2104 for TE (transverse electric field)."
]
}
|
1301.3600
|
2016507978
|
We consider a system governed by the wave equation with index of refraction @math , taken to be variable within a bounded region @math and constant in @math . The solution of the time-dependent wave equation with initial data, which is localized in @math , spreads and decays with advancing time. This rate of decay can be measured (for @math , and more generally, @math odd) in terms of the eigenvalues of the scattering resonance problem, a non--self-adjoint eigenvalue problem governing the time-harmonic solutions of the wave (Helmholtz) equation which are outgoing at @math . Specifically, the rate of energy escape from @math is governed by the complex scattering eigenfrequency, which is closest to the real axis. We study the structural design problem: Find a refractive index profile @math within an admissible class which has a scattering frequency with minimal imaginary part. The admissible class is defined in terms of the compact suppo...
|
One property of optimal structures is that they are piecewise constant and achieve the material bounds, i.e., they are bang-bang controls. This property is also realized in a number of optimization problems for eigenvalues of self-adjoint operators @cite_25 @cite_16 @cite_36 @cite_22 @cite_41 as well as for Schrödinger resonances @cite_5. In @cite_10 @cite_8 the authors consider the problem of maximizing the lifetime of a state coupled to radiation by an ionizing perturbation. For this class of problems, optimizers are interior points of the constraint set.
|
{
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_36",
"@cite_41",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_25"
],
"mid": [
"2035724604",
"2265065675",
"2078293634",
"2031249720",
"2161984222",
"",
"2048738839",
""
],
"abstract": [
"We consider the following eigenvalue optimization problem: Given a bounded domain Ω⊂ℝ and numbers α > 0, A∈[ 0, |Ω|], find a subset D⊂Ω of area A for which the first Dirichlet eigenvalue of the operator −Δ+αχ D is as small as possible.",
"",
"Given an open bounded connected set Ω ⊂R N and a prescribed amount of two homogeneous materials of different density, for smallk we characterize the distribution of the two materials in Ω that extremizes thekth eigenvalue of the resulting clamped membrane. We show that these extremizers vary continuously with the proportion of the two constituents. The characterization of the extremizers in terms of level sets of associated eigenfunctions provides geometric information on their respective interfaces. Each of these results generalizes toN dimensions the now classical one-dimensional work of M. G. Krein.",
"Abstract Periodic media are routinely used in optical devices and, in particular, photonic crystals to create spectral gaps, prohibiting the propagation of waves with certain temporal frequencies. In one dimension, Bragg structures, also called quarter-wave stacks, are frequently used because they are relatively easy to manufacture and the spectrum exhibits large spectral gaps at explicitly computable frequencies. In this short work, we use variational methods to demonstrate that within an admissible class of pointwise-bounded, periodic media, the Bragg structure uniquely maximizes the first spectral gap-to-midgap ratio.",
"We consider quantum mechanical potentials consisting of a fixed background plus an additional piece constrained to have finite height and to be supported in a given finite region in dimension d≤3. We characterize the potentials of this class that produce the narrowest resonances",
"",
"Consider a system governed by the time-dependent Schrodinger equation initialized in its ground state. When subjected to weak (size e) parametric forcing by an “ionizing field” (time-varying), the state decays with advancing time due to coupling of the bound state to radiation modes. The decay rate of this metastable state is governed by Fermi’s golden rule, Γ[V], which depends on the potential V and the details of the forcing. We pose the following potential design problem: find Vopt which minimizes Γ[V] (maximizes the lifetime of the state) over an admissible class of potentials with fixed spatial support. We formulate this problem as a constrained optimization problem and prove that an admissible optimal solution exists. Then, using quasi-Newton methods, we compute locally optimal potentials. These have the structure of a truncated periodic potential with a localized defect. In contrast to optimal structures for other spectral optimization problems, our optimizing potentials appear to be interior point...",
""
]
}
|
1301.3758
|
2949516965
|
Concurrently estimating the 6-DOF pose of multiple cameras or robots---cooperative localization---is a core problem in contemporary robotics. Current works focus on a set of mutually observable world landmarks and often require inbuilt egomotion estimates; situations in which both assumptions are violated often arise, for example, robots with erroneous low quality odometry and IMU exploring an unknown environment. In contrast to these existing works in cooperative localization, we propose a cooperative localization method, which we call mutual localization, that uses reciprocal observations of camera-fiducials to obviate the need for egomotion estimates and mutually observable world landmarks. We formulate and solve an algebraic formulation for the pose of the two camera mutual localization setup under these assumptions. Our experiments demonstrate the capabilities of our proposal egomotion-free cooperative localization method: for example, the method achieves 2cm range and 0.7 degree accuracy at 2m sensing for 6-DOF pose. To demonstrate the applicability of the proposed work, we deploy our method on Turtlebots and we compare our results with ARToolKit and Bundler, over which our method achieves a 10 fold improvement in translation estimation accuracy.
|
@cite_22 coined the term CLAM (Cooperative Localization and Mapping), concluding that as an observer robot observes the explorer robot, the new observer-to-explorer distance constraints improve the localization of both robots. Recognizing that odometry errors accumulate over time, they suggest using constraints based on cooperative localization to refine the position estimates. Their approach, however, does not exploit the merits of mutual observation, as they propose that one robot explores the world while the other watches. We show in our experiments, by comparison to ARToolKit @cite_6 and Bundler @cite_37, that mutual observations of robots can be up to 10 times more accurate than observations by a single robot.
|
{
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_6"
],
"mid": [
"2156598602",
"53838231",
"2127972053"
],
"abstract": [
"We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites.",
"",
"We describe an augmented reality conferencing system which uses the overlay of virtual images on the real world. Remote collaborators are represented on virtual monitors which can be freely positioned about a user in space. Users can collaboratively view and interact with virtual objects using a shared virtual whiteboard. This is possible through precise virtual image registration using fast and accurate computer vision techniques and head mounted display (HMD) calibration. We propose a method for tracking fiducial markers and a calibration method for optical see-through HMD based on the marker tracking."
]
}
|
1301.3618
|
1771625187
|
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledgebase. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8%.
|
There is a vast amount of work on extending knowledge bases using external corpora @cite_12 @cite_4 @cite_0, among many others. In contrast, little work has been done on extensions based purely on the knowledge base itself. The work closest to ours is that by @cite_14. We implement their approach and compare to it directly. Our model outperforms it by a significant margin in terms of both accuracy and ranking. Both models can benefit from initialization with unsupervised word vectors.
|
{
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_4",
"@cite_12"
],
"mid": [
"2022166150",
"2156954687",
"2167187514",
"2142086811"
],
"abstract": [
"We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as HASWONPRIZE). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.",
"",
"Open Information Extraction (IE) is the task of extracting assertions from massive corpora without requiring a pre-specified vocabulary. This paper shows that the output of state-of-the-art Open IE systems is rife with uninformative and incoherent extractions. To overcome these problems, we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs. We implemented the constraints in the ReVerb Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TextRunner and WOE^pos. More than 30% of ReVerb's extractions are at precision 0.8 or higher---compared to virtually none for earlier systems. The paper concludes with a detailed analysis of ReVerb's errors, suggesting directions for future work.",
"Semantic taxonomies such as WordNet provide a rich source of knowledge for natural language processing applications, but are expensive to build, maintain, and extend. Motivated by the problem of automatically constructing and extending such taxonomies, in this paper we present a new algorithm for automatically learning hypernym (is-a) relations from text. Our method generalizes earlier work that had relied on using small numbers of hand-crafted regular expression patterns to identify hypernym pairs. Using \"dependency path\" features extracted from parse trees, we introduce a general-purpose formalization and generalization of these patterns. Given a training set of text containing known hypernym pairs, our algorithm automatically extracts useful dependency paths and applies them to new corpora to identify novel pairs. On our evaluation task (determining whether two nouns in a news article participate in a hypernym relationship), our automatically extracted database of hypernyms attains both higher precision and higher recall than WordNet."
]
}
|
1301.3618
|
1771625187
|
Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpora. In contrast, here we mainly aim to complete a knowledge base by predicting additional true relationships between entities, based on generalizations that can be discerned in the given knowledge base. We introduce a neural tensor network (NTN) model which predicts new relationship entries that can be added to the database. This model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text, and when doing this, existing relations can even be queried for entities that were not present in the database. Our model generalizes and outperforms existing models for this problem, and can classify unseen relationships in WordNet with an accuracy of 75.8%.
|
Another related approach is that by @cite_11 who use tensor factorization and Bayesian clustering for learning relational structures. Instead of clustering the entities in a nonparametric Bayesian framework we rely purely on learned entity vectors. Their computation of the truth of a relation can be seen as a special case of our proposed model. Instead of using MCMC for inference, we use standard backpropagation which is modified for the Neural Tensor Network. Lastly, we do not require multiple embeddings for each entity. Instead, we consider the subunits (space separated words) of entity names. This allows more statistical strength to be shared among entities.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2123228027"
],
"abstract": [
"We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us \"understand\" a dataset of relational facts in at least two ways, by finding interpretable structure in the data, and by supporting predictions, or inferences about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data."
]
}
|
1301.3601
|
2950087049
|
In this work, we assess the viability of heterogeneous networks composed of legacy macrocells which are underlaid with self-organizing picocells. Aiming to improve coverage, cell-edge throughput and overall system capacity, self-organizing solutions, such as range expansion bias, almost blank subframe and distributed antenna systems are considered. Herein, stochastic geometry is used to model network deployments, while higher-order statistics through the cumulants concept is utilized to characterize the probability distribution of the received power and aggregate interference at the user of interest. A comprehensive analytical framework is introduced to evaluate the performance of such self-organizing networks in terms of outage probability and average channel capacity with respect to the tagged receiver. To conduct our studies, we consider a shadowed fading channel model incorporating log-normal shadowing and Nakagami-m fading. Results show that the analytical framework matches well with numerical results obtained from Monte Carlo simulations. We also observed that by simply using almost blank subframes the aggregate interference at the tagged receiver is reduced by about 12 dB. More elaborate interference control techniques, such as downlink bitmaps and distributed antenna systems, become needed when the density of picocells in the underlaid tier gets high.
|
The design and implementation of self-organizing functionalities in HN is a topic of significant interest, as evidenced by the number of recent publications @cite_4 @cite_24 @cite_25 @cite_15 @cite_6 @cite_22 @cite_7 . For instance, the self-organization concept is used to devise cognitive radio resource management schemes that mitigate cross-tier interference and guarantee users' QoS in distinct heterogeneous deployment scenarios @cite_43 . More recently, the REB concept has been discussed within 3GPP as a baseline solution to boost the offloading potential of heterogeneous deployments. In that regard, the authors in @cite_42 investigate cell range expansion and interference mitigation in heterogeneous networks. Along the same lines, Güvenç investigates the capacity and fairness of heterogeneous networks with range expansion and interference coordination @cite_14 . In @cite_0 , Jo et al. use the SG framework to assess how the biased cell association procedure performs in heterogeneous networks by means of the outage probability. In multi-tier heterogeneous networks where the locations of BSs are modeled as independent PPPs, the joint distribution of the downlink SINR at the tagged receiver is derived when the serving BS is selected as either the nearest or the strongest with respect to the user of interest @cite_21 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_42",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_43",
"@cite_0",
"@cite_15",
"@cite_25"
],
"mid": [
"2030019404",
"2083839608",
"",
"",
"",
"1995900307",
"",
"",
"2162417209",
"1978640720",
"",
""
],
"abstract": [
"Range expansion and inter-cell interference coordination (ICIC) can improve the capacity and fairness of heterogeneous network (HetNet) deployments by off-loading macrocell users to low-power nodes. Due to difficulties in analytical treatment, current studies for range expansion and ICIC in HetNets rely mostly on simulations. In this letter, first, off-loading benefits of range expansion in HetNets are captured through cumulative distribution functions (CDFs) of the downlink signal to interference plus noise ratio (SINR) difference between the macrocell and strongest picocell signals. Then, these CDFs are used to investigate the system capacity and fairness as a continuous function of the range expansion bias, and benefits of using ICIC with range expansion are demonstrated through numerical results.",
"The trend toward ubiquitous wireless communication demands for a higher level of self-organization in networks. This article gives an introduction and overview on this topic and investigates the fundamental question: What are design paradigms for developing a self-organized network function? We propose four paradigms and show how they are reflected in current protocols: design local interactions that achieve global properties, exploit implicit coordination, minimize the maintained state, and design protocols that adapt to changes. Finally, we suggest a general design process for self-organized network functions.",
"",
"",
"",
"For an arbitrarily-located user terminal (UE) in a multi-tier heterogeneous cellular wireless network, the joint distribution of the downlink SINR at the UE from the candidate serving base stations (BSs) in the accessible tiers of the network has been derived in closed form for the cases where the candidate serving BS from each accessible tier is chosen as either the one nearest to the UE [1] or the one that is received strongest (equivalently, with maximum SINR) at the UE [2], when the locations of the BSs in the tiers are modeled by independent Poisson Point Processes, and the fading on all links is assumed independent identically distributed (iid) and Rayleigh. The actual serving BS for the UE is chosen as the nearest/strongest/max-SINR candidate serving BS after imposing selection bias across the tiers. The above joint distributions can be used to yield the distribution of the actual SINR at the UE (i.e., when receiving from this serving BS) when no selection bias exists across tiers. However, for the practically important case of selection bias, analytical calculation of the distribution of the actual SINR presents significant challenges. This work derives and summarizes the distribution of actual downlink SINR for all the above criteria for selection of the serving BS accounting for selection bias. We then explore some implications of these results for design and operation of a heterogeneous network.",
"",
"",
"To successfully deploy femtocells overlaying the macrocell as a two-tier network, which has been shown to greatly benefit communication quality in various manners, it is required to mitigate cross-tier interference between the macrocell and femtocells and intra-tier interference among femtocells, as well as to provide Quality-of-Service (QoS) guarantees. Existing solutions therefore assign orthogonal radio resources in frequency and spatial domains to each network, which is, however, infeasible for dense femtocell deployments. It is also difficult to apply centralized resource management, facing challenges of scalability, to the two-tier network. Considering the infeasibility of imposing any modification on existing infrastructures, we leverage cognitive radio technology to propose a cognitive radio resource management scheme for femtocells to mitigate cross-tier interference. Under such a cognitive framework, a strategic game is further developed for intra-tier interference mitigation. Through the concept of effective capacity, the proposed radio resource management schemes are appropriately controlled to achieve required statistical delay guarantees while yielding an efficient radio resource utilization in femtocells. Performance evaluation results show that a considerable performance improvement can be generally achieved by our solution, as compared with that of state-of-the-art techniques, to facilitate the deployment of femtocells.",
"In this paper we develop a tractable framework for SINR analysis in downlink heterogeneous cellular networks (HCNs) with flexible cell association. The HCN is modeled as a multi-tier cellular network where each tier's base stations (BSs) are randomly located and have a unique transmit power, path loss exponent, spatial density, and bias towards admitting users. We implicitly assume every BS has full queues. From this model, we derive the outage probability of a typical user in the network, which can be viewed as a spatial average of SINR over all users in the network. We observe that deploying more or less BSs does not change the outage probability in interference-limited HCN with unbiased cell association, and observe how biasing affects the metric.",
"",
""
]
}
|
1301.3342
|
1490600648
|
The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data objects, and it uses a variant of the Barnes-Hut algorithm - an algorithm used by astronomers to perform N-body simulations - to approximate the forces between the corresponding points in the embedding. Our experiments show that the new algorithm, called Barnes-Hut-SNE, leads to substantial computational advantages over standard t-SNE, and that it makes it possible to learn embeddings of data sets with millions of objects.
|
A large body of previous work has focused on decreasing the computational complexity of algorithms that scale quadratically in the amount of data when implemented naively. Most of these studies focus on speeding up nearest-neighbor searches using space-partitioning (metric) trees (e.g., B-trees @cite_21 , cover trees @cite_29 , and vantage-point trees @cite_3 ) or using locality-sensitive hashing approaches (e.g., @cite_15 @cite_13 ). Motivated by their strong performance reported in earlier work @cite_6 , we opt to use metric trees to approximate the similarities of the input objects in our algorithm.
|
{
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_15",
"@cite_13"
],
"mid": [
"2133296809",
"1539221976",
"2097921974",
"2170605888",
"2147717514",
""
],
"abstract": [
"We present a tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points). The data structure requires O(n) space regardless of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant c, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in O(c^6 n log n) time. Furthermore, nearest neighbor queries require time only logarithmic in n, in particular O(c^12 log n) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"Organization and maintenance of an index for a dynamic random access file is considered. It is assumed that the index must be kept on some pseudo random access backup store like a disc or a drum. The index organization described allows retrieval, insertion, and deletion of keys in time proportional to log_k I, where I is the size of the index and k is a device dependent natural number such that the performance of the scheme becomes near optimal. Storage utilization is at least 50% but generally much higher. The pages of the index are organized in a special data-structure, so-called B-trees. The scheme is analyzed, performance bounds are obtained, and a near optimal k is computed. Experiments have been performed with indexes up to 100000 keys. An index of size 15000 (100000) can be maintained with an average of 9 (at least 4) transactions per second on an IBM 360 44 with a 2311 disc.",
"We consider the computational problem of finding nearest neighbors in general metric spaces. Of particular interest are spaces that may not be conveniently embedded or approximated in Euclidian space, or where the dimensionality of a Euclidian representation is very high. Also relevant are high-dimensional Euclidian settings in which the distribution of data is in some sense of lower dimension and embedded in the space. The vp-tree (vantage point tree) is introduced in several forms, together with associated algorithms, as an improved method for these difficult search problems. Tree construction executes in O(n log(n)) time, and search is under certain circumstances and in the limit, O(log(n)) expected time. The theoretical basis for this approach is developed and the results of several experiments are reported. In Euclidian cases, kd-tree performance is compared.",
"This paper concerns approximate nearest neighbor searching algorithms, which have become increasingly important, especially in high dimensional perception areas such as computer vision, with dozens of publications in recent years. Much of this enthusiasm is due to a successful new approximate nearest neighbor approach called Locality Sensitive Hashing (LSH). In this paper we ask the question: can earlier spatial data structure approaches to exact nearest neighbor, such as metric trees, be altered to provide approximate answers to proximity queries and if so, how? We introduce a new kind of metric tree that allows overlap: certain datapoints may appear in both the children of a parent. We also introduce new approximate k-NN search algorithms on this structure. We show why these structures should be able to exploit the same random-projection-based approximations that LSH enjoys, but with a simpler algorithm and perhaps with greater efficiency. We then provide a detailed empirical evaluation on five large, high dimensional datasets which show up to 31-fold accelerations over LSH. This result holds true throughout the spectrum of approximation levels.",
"We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R d , the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.",
""
]
}
|
1301.2857
|
2114838678
|
Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named-entity extraction systems. However, progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than the Stanford NLP pipeline. The pipeline consists of a high-performance Penn Treebank-compliant tokenizer, a close-to-state-of-the-art part-of-speech (POS) tagger and a knowledge-based named entity recognizer.
|
There are many natural language processing packages available to researchers under open-source or non-commercial licenses. However, this section is not meant to review the literature of named entity recognition research, as that is already available in @cite_1 . Instead, we discuss the most popular solutions and the ones we think are most interesting to present.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2020278455"
],
"abstract": [
"This survey covers fifteen years of research in the Named Entity Recognition and Classification (NERC) field, from 1991 to 2006. We report observations about languages, named entity types, domains and textual genres studied in the literature. From the start, NERC systems have been developed using hand-made rules, but now machine learning techniques are widely used. These techniques are surveyed along with other critical aspects of NERC such as features and evaluation methods. Features are word-level, dictionary-level and corpus-level representations of words in a document. Evaluation techniques, ranging from intuitive exact match to very complex matching techniques with adjustable cost of errors, are an indisputable key to progress."
]
}
|
1301.2857
|
2114838678
|
Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named-entity extraction systems. However, progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than the Stanford NLP pipeline. The pipeline consists of a high-performance Penn Treebank-compliant tokenizer, a close-to-state-of-the-art part-of-speech (POS) tagger and a knowledge-based named entity recognizer.
|
The Stanford NLP pipeline @cite_2 @cite_18 @cite_6 @cite_10 @cite_7 is one of the most popular and widely used NLP packages. The pipeline is rich in features, flexible to tweak, and supports many natural languages. Despite being written in Java, there are many bindings for other programming languages that are maintained by the community. The pipeline offers tokenization, POS tagging, named entity recognition, parsing and coreference resolution. Its memory and computation requirements are non-trivial. To accommodate various computational resources, the pipeline offers several models for each task that vary in speed, memory consumption and accuracy. In general, to achieve good performance in terms of speed, the user has to increase the memory available to the pipeline to 1-3 GiB and choose the faster but less accurate models.
|
{
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_6",
"@cite_2",
"@cite_10"
],
"mid": [
"1996430422",
"2129657639",
"2041614298",
"2003458432",
"2096765155"
],
"abstract": [
"We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"This paper details the coreference resolution system submitted by Stanford at the CoNLL-2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.",
"We discuss two named-entity recognition models which use characters and character n-grams either exclusively or as an important part of their data representation. The first model is a character-level HMM with minimal context information, and the second model is a maximum-entropy conditional markov model with substantially richer context features. Our best model achieves an overall F1 of 86.07 on the English test data (92.31 on the development data). This number represents a 25% error reduction over the same model without word-internal (substring) features.",
"This paper presents results for a maximum-entropy-based part of speech tagger, which achieves superior performance principally by enriching the information sources used for tagging. In particular, we get improved results by incorporating these features: (i) more extensive treatment of capitalization for unknown words; (ii) features for the disambiguation of the tense forms of verbs; (iii) features for disambiguating particles from prepositions and adverbs. The best resulting accuracy for the tagger on the Penn Treebank is 96.86% overall, and 86.91% on previously unseen words.",
"Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks."
]
}
|
1301.2857
|
2114838678
|
Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named-entity extraction systems. However, progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than the Stanford NLP pipeline. The pipeline consists of a high-performance Penn Treebank-compliant tokenizer, a close-to-state-of-the-art part-of-speech (POS) tagger and a knowledge-based named entity recognizer.
|
More recent efforts include the SENNA pipeline. Even though it lacks a proper tokenizer, it offers POS tagging, named entity recognition, chunking, semantic role labeling @cite_16 and parsing @cite_8 . The pipeline has a simple interface, high speed and a small memory footprint (less than 190 MiB).
|
{
"cite_N": [
"@cite_16",
"@cite_8"
],
"mid": [
"2117130368",
"98255950"
],
"abstract": [
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.",
"We propose a new fast purely discriminative algorithm for natural language parsing, based on a “deep” recurrent convolutional graph transformer network (GTN). Assuming a decomposition of a parse tree into a stack of “levels”, the network predicts a level of the tree taking into account predictions of previous levels. Using only few basic text features which leverage word representations from Collobert and Weston (2008), we show similar performance (in F1 score) to existing pure discriminative parsers and existing “benchmark” parsers (like Collins parser, probabilistic context-free grammars based), with a huge speed advantage."
]
}
|
1301.2857
|
2114838678
|
Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named-entity extraction systems. However, progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than the Stanford NLP pipeline. The pipeline consists of a high-performance Penn Treebank-compliant tokenizer, a close-to-state-of-the-art part-of-speech (POS) tagger and a knowledge-based named entity recognizer.
|
NLTK @cite_11 is a set of tools and interfaces to other NLP packages. Its simple APIs and good documentation make it a favorable option for students and researchers. Written in Python, NLTK does not offer great speed or close-to-state-of-the-art accuracy with its tools. On the other hand, it is well maintained and has great community support.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1521626219"
],
"abstract": [
"This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic summarization and translation. With it, you'll learn how to write Python programs that work with large collections of unstructured text. You'll access richly annotated datasets using a comprehensive range of linguistic data structures, and you'll understand the main algorithms for analyzing the content and structure of written communication. Packed with examples and exercises, Natural Language Processing with Python will help you: Extract information from unstructured text, either to guess the topic or identify \"named entities\" Analyze linguistic structure in text, including parsing and semantic analysis Access popular linguistic databases, including WordNet and treebanks Integrate techniques drawn from fields as diverse as linguistics and artificial intelligence This book will help you gain practical skills in natural language processing using the Python programming language and the Natural Language Toolkit (NLTK) open source library. If you're interested in developing web applications, analyzing multilingual news sources, or documenting endangered languages -- or if you're simply curious to have a programmer's perspective on how human language works -- you'll find Natural Language Processing with Python both fascinating and immensely useful."
]
}
|
1301.2857
|
2114838678
|
Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named-entity extraction systems. However, progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than the Stanford NLP pipeline. The pipeline consists of a high-performance Penn Treebank-compliant tokenizer, a close-to-state-of-the-art part-of-speech (POS) tagger and a knowledge-based named entity recognizer.
|
WikipediaMiner @cite_17 detects conceptual words and named entities; it also disambiguates the word senses. This approach can be modified to detect only the words that represent entities; then, using the disambiguated sense, it can decide which class the entity belongs to. Its use of the Wikipedia interlinking information is a good example of the power of using knowledge-based systems. Our basic investigation shows that the current system needs large chunks of memory to load the entire interlinking graph of Wikipedia, and it would be hard to optimize for speed. TAGME @cite_13 extends the work of WikipediaMiner to annotate short snippets of text. It presents a new disambiguation system that is faster and more accurate. The system is much simpler and takes into account the sparseness of the senses and the possible lack of unambiguous senses in short texts.
|
{
"cite_N": [
"@cite_13",
"@cite_17"
],
"mid": [
"2123142779",
"1960027552"
],
"abstract": [
"We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain-text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc.. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.",
"This paper describes a new technique for obtaining measures of semantic relatedness. Like other recent approaches, it uses Wikipedia to provide structured world knowledge about the terms of interest. Our approach is unique in that it does so using the hyperlink structure of Wikipedia rather than its category hierarchy or textual content. Evaluation with manually defined measures of semantic relatedness reveals this to be an effective compromise between the ease of computation of the former approach and the accuracy of the latter."
]
}
|
1301.2729
|
2953271103
|
We study the class of languages, denoted by @math , which have @math -prover games where each prover just sends a bit, with completeness @math and soundness error @math . For the case that @math (i.e., for the case of interactive proofs), Goldreich, Vadhan and Wigderson ( Computational Complexity'02 ) demonstrate that @math exactly characterizes languages having 1-bit proof systems with "non-trivial" soundness (i.e., @math ). We demonstrate that for the case that @math , 1-bit @math -prover games exhibit a significantly richer structure: + (Folklore) When @math , @math ; + When @math , @math ; + When @math , @math ; + For @math and sufficiently large @math , @math ; + For @math , @math . As such, 1-bit @math -prover games yield a natural "quantitative" approach to relating complexity classes such as @math , @math , @math , @math , and @math . We leave open the question of whether a more fine-grained hierarchy (between @math and @math ) can be established for the case when @math .
|
The work most closely related to ours is the work by Goldreich, Vadhan and Wigderson @cite_3 mentioned above, which in turn builds on a work by Goldreich and Håstad @cite_14 ; just as we do, both these works investigate the complexity of interactive proofs with ``laconic'' provers. We have taken the question to an extreme in one direction (namely we focus only on provers that send a single bit); on the other hand, we have generalized the question by considering multi-prover interactive proofs, rather than just a single prover (as is the main focus in the above-mentioned works).
|
{
"cite_N": [
"@cite_14",
"@cite_3"
],
"mid": [
"2040438296",
"2038202352"
],
"abstract": [
"Abstract We investigate the computational complexity of languages which have interactive proof systems of bounded message complexity. In particular, denoting the length of the input by n , we show that • If L has an interactive proof in which the total communication is bounded by c ( n ) bits then L can be recognized by a probabilistic machine in time exponential in rmO ( c ( n ) + log ( n )). • If L has a public-coin interactive proof in which the prover sends c ( n ) bits then L can be recognized by a probabilistic machine in time exponential in rmO ( c ( n ) · log ( c ( n )) + log ( n )). • If L has an interactive proof in which the prover sends c ( n ) bits then L can be recognized by a probabilistic machine with an NP-oracle in time exponential in rmO ( c ( n ) · log ( c ( n )) + log ( n )).",
"We continue the investigation of interactive proofs with bounded communication, as initiated by Goldreich & Hastad (1998). Let L be a language that has an interactive proof in which the prover sends few (say b) bits to the verifier. We prove that the complement L has a constant-round interactive proof of complexity that depends only exponentially on b. This provides the first evidence that for NP- complete languages, we cannot expect interactive provers to be much more \"laconic\" than the standard NP proof. When the proof system is further restricted (e.g., when b = 1, or when we have perfect completeness), we get significantly better upper bounds on the complexity of L."
]
}
|
1301.2729
|
2953271103
|
We study the class of languages, denoted by @math , which have @math -prover games where each prover just sends a bit, with completeness @math and soundness error @math . For the case that @math (i.e., for the case of interactive proofs), Goldreich, Vadhan and Wigderson ( Computational Complexity'02 ) demonstrate that @math exactly characterizes languages having 1-bit proof systems with "non-trivial" soundness (i.e., @math ). We demonstrate that for the case that @math , 1-bit @math -prover games exhibit a significantly richer structure: + (Folklore) When @math , @math ; + When @math , @math ; + When @math , @math ; + For @math and sufficiently large @math , @math ; + For @math , @math . As such, 1-bit @math -prover games yield a natural "quantitative" approach to relating complexity classes such as @math , @math , @math , @math , and @math . We leave open the question of whether a more fine-grained hierarchy (between @math and @math ) can be established for the case when @math .
|
We also mention the recent work by Drucker @cite_15 that provides a @math -type characterization of @math ; his result is incomparable to our main theorem as he focuses on polynomial-length PCP proofs.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2950705004"
],
"abstract": [
"We introduce a 2-round stochastic constraint-satisfaction problem, and show that its approximation version is complete for (the promise version of) the complexity class AM. This gives a PCP characterization' of AM analogous to the PCP Theorem for NP. Similar characterizations have been given for higher levels of the Polynomial Hierarchy, and for PSPACE; however, we suggest that the result for AM might be of particular significance for attempts to derandomize this class. To test this notion, we pose some Randomized Optimization Hypotheses' related to our stochastic CSPs that (in light of our result) would imply collapse results for AM. Unfortunately, the hypotheses appear over-strong, and we present evidence against them. In the process we show that, if some language in NP is hard-on-average against circuits of size 2^ Omega(n) , then there exist hard-on-average optimization problems of a particularly elegant form. All our proofs use a powerful form of PCPs known as Probabilistically Checkable Proofs of Proximity, and demonstrate their versatility. We also use known results on randomness-efficient soundness- and hardness-amplification. In particular, we make essential use of the Impagliazzo-Wigderson generator; our analysis relies on a recent Chernoff-type theorem for expander walks."
]
}
|
1301.2218
|
33509416
|
We analyze a distributed algorithm for estimation of scalar parameters belonging to nodes in a mobile network from noisy relative measurements. The motivation comes from the problem of clock skew and offset estimation for the purpose of time synchronization. The time variation of the network was modeled as a Markov chain. The estimates are shown to be mean square convergent under fairly weak assumptions on the Markov chain, as long as the union of the graphs is connected. Expressions for the asymptotic mean and correlation are also provided. The Markovian switching topology model of mobile networks is justified for certain node mobility models through empirically estimated conditional entropy measures.
|
Recently, a number of fully distributed global synchronization algorithms have been proposed that do not need spanning tree computation. Distributed protocols are therefore more readily applicable to mobile networks than tree-based protocols. Among the distributed synchronization protocols proposed, some are based on estimation of the skew and/or offset of each clock with respect to a reference clock (called ). The algorithms proposed in @cite_2 @cite_27 @cite_29 @cite_26 @cite_32 belong to this category. Another class of protocols estimates a common global time that may not be related to the time of any clock in the network. The algorithms proposed in @cite_33 @cite_15 @cite_14 belong to this category, which we call .
|
{
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_33",
"@cite_29",
"@cite_32",
"@cite_27",
"@cite_2",
"@cite_15"
],
"mid": [
"2147335306",
"",
"2146108350",
"2154451686",
"2118537895",
"2591090203",
"2033002530",
"2022645219"
],
"abstract": [
"In a network of clocks, we consider a given reference node to determine the time evolution t. We introduce and analyze a stochastic model for clocks, in which the relative speedup of a clock, called the skew, is characterized by some given stochastic process. We study the problem of synchronizing clocks in a network, which amounts to estimating the instantaneous relative skews and relative offsets by exchange of time-stamped packets across the links of the network. We present a scheme for obtaining measurements in a communication link. We develop an algorithm for optimal filtering of measurements across a link (i; j) in order to estimate the logarithm of the relative speedup of node j with respect to node i, and we further study some implementation issues. We also present a scheme for pairwise offset estimation based on skew estimates. We study the properties of our algorithms and provide theoretical guarantees on their performance. We also develop an online centralized model-based asynchronous algorithm for optimal filtering of the time-stamps in the entire network, and an efficient distributed suboptimal scheme.",
"",
"In this paper a distributed algorithm for clock synchronization is proposed. This algorithm is based on an extension of the consensus algorithm able to synchronize a family of double integrators. Since the various clocks may have different drifts, the algorithm needs to be designed so that it can work also in case of heterogeneous double integrators. Through a robust control analysis it is possible to determine the maximum admissible level of heterogeneity yielding synchronization. The first part of the paper is devoted to the analysis of an unrealistic synchronous implementation of the algorithm. However, in the last part of the paper we propose a realistic pseudo-synchronous implementation which is proved to be a perturbation of the synchronous one. From arguments related the center manifold theorem, the stability of the pseudo-synchronous is finally proved.",
"A distributed algorithm to achieve accurate time synchronization in large multihop wireless networks is presented. The central idea is to exploit the large number of global constraints that have to be satisfied by a common notion of time in a multihop network. If, at a certain instant, Oij is the clock offset between two neighboring nodes i and j, then for any loop i1, i2, i3 , ..., in, in + 1 - i1 in the multihop network, these offsets must satisfy the global constraint Sigma k = 1 nOik, ik + 1 = 0. Noisy estimates Ocirc ij of Oij are usually arrived at by bilateral exchanges of timestamped messages or local broadcasts. By imposing the large number of global constraints for all the loops in the multihop network, these estimates can be smoothed and made more accurate. A fully distributed and asynchronous algorithm which functions by simple local broadcasts is designed. Changing the time reference node for synchronization is also easy, consisting simply of one node switching on adaptation, and another switching it off. Implementation results on a forty node network, and comparative evaluation against a leading algorithm, are presented",
"In this paper, we study the global clock synchronization problem for wireless sensor networks. Based on belief propagation, we propose a fully distributed algorithm which has low overhead and can achieve scalable synchronization. It is also shown analytically that the proposed algorithm always converges for strongly connected networks. Simulation results show that the proposed algorithm achieves better accuracy than consensus algorithms. Furthermore, the belief obtained at each sensor provides an accurate prediction on the algorithm's performance in terms of MSE.",
"We consider the problem of estimating vector-valued variables from noisy relative measurements. The measurement model can be expressed in terms of a graph, whose nodes correspond to the variables being estimated and the edges to noisy measurements of the difference between the two variables. This type of measurement model appears in several sensor network problems, such as sensor localization and time synchronization. We consider the optimal estimate for the unknown variables obtained by applying the classical Best Linear Unbiased Estimator, which achieves the minimum variance among all linear unbiased estimators. We propose a new algorithm to compute the optimal estimate in an iterative manner, the Overlapping Subgraph Estimator algorithm. The algorithm is distributed, asynchronous, robust to temporary communication failures, and is guaranteed to converges to the optimal estimate even with temporary communication failures. Simulations for a realistic example show that the algorithm can reduce energy consumption by a factor of two compared to previous algorithms, while achieving the same accuracy.",
"We consider the problem of estimating vector-valued variables from noisy \"relative\" measurements. The measurement model can be expressed in terms of a graph, whose nodes correspond to the variables being estimated and the edges to noisy measurements of the difference between the two variables. We take the value of one particular variable as a reference and consider the optimal estimator for the differences between the remaining variables and the reference. This type of measurement model appears in several sensor network problems, such as sensor localization and time synchronization. Two algorithms are proposed to compute the optimal estimate in a distributed, iterative manner. The first algorithm implements the Jacobi method to iteratively compute the optimal estimate, assuming all communication is perfect. The second algorithm is robust to temporary communication failures, and converges to the optimal estimate when certain mild conditions on the failure rate are satisfied. It also employs an initialization scheme to improve accuracy in spite of the slow convergence of the Jacobi method",
"In this paper a distributed clock synchronization algorithm is proposed. The algorithm requires asymmetric gossip communications between the nodes of the network, and is based on an PI-like consensus protocol where the proportional part compensates the different clock speeds while the integral part eliminates the different clock offsets. Convergence of the algorithm is proved and analyzed with respect to the controller parameter, when the underlying graph is the complete graph. Simulations results show the effectiveness of the proposed strategy also for more general communication topologies."
]
}
|
1301.1590
|
2950513089
|
It has been shown that minimum free energy structure for RNAs and RNA-RNA interaction is often incorrect due to inaccuracies in the energy parameters and inherent limitations of the energy model. In contrast, ensemble based quantities such as melting temperature and equilibrium concentrations can be more reliably predicted. Even structure prediction by sampling from the ensemble and clustering those structures by Sfold [7] has proven to be more reliable than minimum free energy structure prediction. The main obstacle for ensemble based approaches is the computational complexity of the partition function and base pairing probabilities. For instance, the space complexity of the partition function for RNA-RNA interaction is @math and the time complexity is @math which are prohibitively large [4,12]. Our goal in this paper is to give a fast algorithm, based on sparse folding, to calculate an upper bound on the partition function. Our work is based on the recent algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is the same as that of sparse folding algorithms, and the time complexity of our algorithm is @math for single RNA and @math for RNA-RNA interaction in practice, in which @math is the running time of sparse folding and @math ( @math ) is a sequence dependent parameter.
|
Methods to approximate the partition function for interacting RNAs have not been proposed in the literature. Instead, methods for exact computation of the partition function have been developed, with high time and space complexity. Most notably, @cite_5 developed an @math -time and @math -space dynamic programming algorithm that computes the partition function of RNA–RNA interaction complexes, thereby providing detailed insights into their thermodynamic properties. @cite_4 has developed a stochastic sampling algorithm that produces a Boltzmann weighted ensemble of RNA–RNA interaction structures for the calculation of interaction probabilities (and not the partition function) for any given interval on the target RNAs.
|
{
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2166268906",
"2099306789"
],
"abstract": [
"Recent interests, such as RNA interference and antisense RNA regulation, strongly motivate the problem of predicting whether two nucleic acid strands interact. Motivation: Regulatory non-coding RNAs (ncRNAs) such as microRNAs play an important role in gene regulation. Studies on both prokaryotic and eukaryotic cells show that such ncRNAs usually bind to their target mRNA to regulate the translation of corresponding genes. The specificity of these interactions depends on the stability of intermolecular and intramolecular base pairing. While methods like deep sequencing allow to discover an ever increasing set of ncRNAs, there are no high-throughput methods available to detect their associated targets. Hence, there is an increasing need for precise computational target prediction. In order to predict base-pairing probability of any two bases in interacting nucleic acids, it is necessary to compute the interaction partition function over the whole ensemble. The partition function is a scalar value from which various thermodynamic quantities can be derived. For example, the equilibrium concentration of each complex nucleic acid species and also the melting temperature of interacting nucleic acids can be calculated based on the partition function of the complex. Results: We present a model for analyzing the thermodynamics of two interacting nucleic acid strands considering the most general type of interactions studied in the literature. We also present a corresponding dynamic programming algorithm that computes the partition function over (almost) all physically possible joint secondary structures formed by two interacting nucleic acids in O(n6) time. We verify the predictive power of our algorithm by computing (i) the melting temperature for interacting RNA pairs studied in the literature and (ii) the equilibrium concentration for several variants of the OxyS–fhlA complex. In both experiments, our algorithm shows high accuracy and outperforms competitors. 
Availability: Software and web server is available at http: compbio.cs.sfu.ca taverna pirna Contact:cenk@cs.sfu.ca; backofen@informatik.uni-freiburg.de Supplementary information:Supplementary data are avaliable at Bioinformatics online.",
"Motivation: It has been proven that the accessibility of the target sites has a critical influence on RNA–RNA binding, in general and the specificity and efficiency of miRNAs and siRNAs, in particular. Recently, O(N6) time and O(N4) space dynamic programming (DP) algorithms have become available that compute the partition function of RNA–RNA interaction complexes, thereby providing detailed insights into their thermodynamic properties. Results: Modifications to the grammars underlying earlier approaches enables the calculation of interaction probabilities for any given interval on the target RNA. The computation of the ‘hybrid probabilities’ is complemented by a stochastic sampling algorithm that produces a Boltzmann weighted ensemble of RNA–RNA interaction structures. The sampling of k structures requires only negligible additional memory resources and runs in O(k·N3). Availability: The algorithms described here are implemented in C as part of the rip package. The source code of rip2 can be downloaded from http: www.combinatorics.cn cbpc rip.html and http: www.bioinf.uni-leipzig.de Software rip.html. Contact: duck@santafe.edu Supplementary information:Supplementary data are available at Bioinformatics online."
]
}
|
1301.1590
|
2950513089
|
It has been shown that minimum free energy structure for RNAs and RNA-RNA interaction is often incorrect due to inaccuracies in the energy parameters and inherent limitations of the energy model. In contrast, ensemble based quantities such as melting temperature and equilibrium concentrations can be more reliably predicted. Even structure prediction by sampling from the ensemble and clustering those structures by Sfold [7] has proven to be more reliable than minimum free energy structure prediction. The main obstacle for ensemble based approaches is the computational complexity of the partition function and base pairing probabilities. For instance, the space complexity of the partition function for RNA-RNA interaction is @math and the time complexity is @math which are prohibitively large [4,12]. Our goal in this paper is to give a fast algorithm, based on sparse folding, to calculate an upper bound on the partition function. Our work is based on the recent algorithm of Hazan and Jaakkola [10]. The space complexity of our algorithm is the same as that of sparse folding algorithms, and the time complexity of our algorithm is @math for single RNA and @math for RNA-RNA interaction in practice, in which @math is the running time of sparse folding and @math ( @math ) is a sequence dependent parameter.
|
In the context of RNA secondary structure prediction, @cite_16 devised a Metropolis Monte Carlo algorithm, called the ``Wang and Landau'' algorithm @cite_8 , to approximate the partition function as well as the density of states. Although the computation of the partition function over all secondary structures and over all pseudoknot-free hybridizations can be done by efficient dynamic programming algorithms, the real advantage of @cite_16 is in approximating the partition function where pseudoknotted structures are allowed; a context known to be NP-complete @cite_21 .
|
{
"cite_N": [
"@cite_16",
"@cite_21",
"@cite_8"
],
"mid": [
"2149512902",
"2076756368",
""
],
"abstract": [
"Motivation: Thermodynamics-based dynamic programming RNA secondary structure algorithms have been of immense importance in molecular biology, where applications range from the detection of novel selenoproteins using expressed sequence tag (EST) data, to the determination of microRNA genes and their targets. Dynamic programming algorithms have been developed to compute the minimum free energy secondary structure and partition function of a given RNA sequence, the minimum free-energy and partition function for the hybridization of two RNA molecules, etc. However, the applicability of dynamic programming methods depends on disallowing certain types of interactions (pseudoknots, zig-zags, etc.), as their inclusion renders structure prediction an nondeterministic polynomial time (NP)-complete problem. Nevertheless, such interactions have been observed in X-ray structures. Results: A non-Boltzmannian Monte Carlo algorithm was designed by Wang and Landau to estimate the density of states for complex systems, such as the Ising model, that exhibit a phase transition. In this article, we apply the Wang-Landau (WL) method to compute the density of states for secondary structures of a given RNA sequence, and for hybridizations of two RNA sequences. Our method is shown to be much faster than existent software, such as RNAsubopt. From density of states, we compute the partition function over all secondary structures and over all pseudoknot-free hybridizations. The advantage of the WL method is that by adding a function to evaluate the free energy of arbitary pseudoknotted structures and of arbitrary hybridizations, we can estimate thermodynamic parameters for situations known to be NP-complete. This extension to pseudoknots will be made in the sequel to this article; in contrast, the current article describes the WL algorithm applied to pseudoknot-free secondary structures and hybridizations. 
Availability: The WL RNA hybridization web server is under construction at http://bioinformatics.bc.edu/clotelab . Contact: clote@bc.edu",
"RNA molecules are sequences of nucleotides that serve as more than mere intermediaries between DNA and proteins, e.g., as catalytic molecules. Computational prediction of RNA secondary structure is among the few structure prediction problems that can be solved satisfactorily in polynomial time. Most work has been done to predict structures that do not contain pseudoknots. Allowing pseudoknots introduces modeling and computational problems. In this paper we consider the problem of predicting RNA secondary structures with pseudoknots based on free energy minimization. We first give a brief comparison of energy-based methods for predicting RNA secondary structures with pseudoknots. We then prove that the general problem of predicting RNA secondary structures containing pseudoknots is NP complete for a large class of reasonable models of pseudoknots.",
""
]
}
|
1301.1332
|
1890314517
|
The discovery, representation and reconstruction of (technical) integration networks from Network Mining (NM) raw data is a difficult problem for enterprises. This is due to large and complex IT landscapes within and across enterprise boundaries, heterogeneous technology stacks, and fragmented data. To remain competitive, visibility into the enterprise and partner IT networks on different, interrelated abstraction levels is desirable. We present an approach to represent and reconstruct the integration networks from NM raw data using logic programming based on first-order logic. The raw data expressed as integration network model is represented as facts, on which rules are applied to reconstruct the network. We have built a system that is used to apply this approach to real-world enterprise landscapes and we report on our experience with this system.
|
In terms of the meta-model for integration networks, @cite_7 represents the closest known related work, in which a path algebra is defined that is used to traverse arbitrary graphs. Similarly, we define nodes and edges with inbound and outbound connectors, although they differ in meaning and usage.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2148031245"
],
"abstract": [
"A multi-relational graph maintains two or more relations over a vertex set. This article defines an algebra for traversing such graphs that is based on an n-ary relational algebra, a concatenative single-relational path algebra, and a tensor-based multi-relational algebra. The presented algebra provides a monoid, automata, and formal language theoretic foundation for the construction of a multi-relational graph traversal engine."
]
}
|
1301.1003
|
2951974970
|
An uncertain database is defined as a relational database in which primary keys need not be satisfied. A repair (or possible world) of such a database is obtained by selecting a maximal number of tuples without ever selecting two distinct tuples with the same primary key value. For a Boolean query q, the decision problem CERTAINTY(q) takes as input an uncertain database db and asks whether q is satisfied by every repair of db. Our main focus is on acyclic Boolean conjunctive queries without self-join. Previous work has introduced the notion of (directed) attack graph of such queries, and has proved that CERTAINTY(q) is first-order expressible if and only if the attack graph of q is acyclic. The current paper investigates the boundary between tractability and intractability of CERTAINTY(q). We first classify cycles in attack graphs as either weak or strong, and then prove, among others, the following. If the attack graph of a query q contains a strong cycle, then CERTAINTY(q) is coNP-complete. If the attack graph of q contains no strong cycle and every weak cycle of it is terminal (i.e., no edge leads from a vertex in the cycle to a vertex outside the cycle), then CERTAINTY(q) is in P. We then partially address the only remaining open case, i.e., when the attack graph contains some nonterminal cycle and no strong cycle. Finally, we establish a relationship between the complexities of CERTAINTY(q) and evaluating q on probabilistic databases.
|
Kolaitis and Pema @cite_3 recently showed that for every query @math with exactly two atoms, @math is either in or -complete, and it is decidable which of the two is the case. If @math is in and not first-order expressible, then it can be reduced in polynomial time to the problem of finding maximal (with respect to cardinality) independent sets of vertices in claw-free graphs. The latter problem can be solved in polynomial time by an ingenious algorithm of Minty @cite_6 . Unfortunately, the proposed reduction is not applicable on queries with more than two atoms.
|
{
"cite_N": [
"@cite_6",
"@cite_3"
],
"mid": [
"1963707128",
"1994353670"
],
"abstract": [
"Abstract A graph is claw-free if: whenever three (distinct) vertices are joined to a single vertex, those three vertices are a nonindependent (nonstable) set. Given a finite claw-free graph with real numbers (weights) assigned to the vertices, we exhibit an algorithm for producing an independent set of vertices of maximum total weight. This algorithm is “efficient” in the sense of J. Edmonds, that is to say, the number of computational steps required is of polynomial (not exponential or factorial) order in n , the number of vertices of the graph. This problem was solved earlier by Edmonds for the special case of “edge-graphs”; our solution is by reducing the more general problem to the earlier-solved special case. Separate attention is given to the case in which all weights are (+1) and thus an independent set is sought which is maximal in the sense of its cardinality.",
"We establish a dichotomy in the complexity of computing the consistent answers of a Boolean conjunctive query with exactly two atoms and without self-joins. Specifically, we show that the problem of computing the consistent answers of such a query either is in P or it is coNP-complete. Moreover, we give an efficiently checkable criterion for determining which of these two possibilities holds for a given query."
]
}
|
1301.1003
|
2951974970
|
An uncertain database is defined as a relational database in which primary keys need not be satisfied. A repair (or possible world) of such a database is obtained by selecting a maximal number of tuples without ever selecting two distinct tuples with the same primary key value. For a Boolean query q, the decision problem CERTAINTY(q) takes as input an uncertain database db and asks whether q is satisfied by every repair of db. Our main focus is on acyclic Boolean conjunctive queries without self-join. Previous work has introduced the notion of (directed) attack graph of such queries, and has proved that CERTAINTY(q) is first-order expressible if and only if the attack graph of q is acyclic. The current paper investigates the boundary between tractability and intractability of CERTAINTY(q). We first classify cycles in attack graphs as either weak or strong, and then prove, among others, the following. If the attack graph of a query q contains a strong cycle, then CERTAINTY(q) is coNP-complete. If the attack graph of q contains no strong cycle and every weak cycle of it is terminal (i.e., no edge leads from a vertex in the cycle to a vertex outside the cycle), then CERTAINTY(q) is in P. We then partially address the only remaining open case, i.e., when the attack graph contains some nonterminal cycle and no strong cycle. Finally, we establish a relationship between the complexities of CERTAINTY(q) and evaluating q on probabilistic databases.
|
As observed in , uncertain databases are a restricted case of block-independent-disjoint (BID) probabilistic databases @cite_8 @cite_19 . This observation will be elaborated in .
|
{
"cite_N": [
"@cite_19",
"@cite_8"
],
"mid": [
"2136658073",
"2093149131"
],
"abstract": [
"We review in this paper some recent yet fundamental results on evaluating queries over probabilistic databases. While one can see this problem as a special instance of general purpose probabilistic inference, we describe in this paper two key database specific techniques that significantly reduce the complexity of query evaluation on probabilistic databases. The first is the separation of the query and the data: we show here that by doing so, one can identify queries whose data complexity is #P-hard, and queries whose data complexity is in PTIME. The second is the aggressive use of previously computed query results (materialized views): in particular, by rewriting a query in terms of views, one can reduce its complexity from #P-complete to PTIME. We describe a notion of a partial representation for views, and show that, once computed and stored, this partial representation can be used to answer subsequent queries on the probabilistic databases.",
"A wide range of applications have recently emerged that need to manage large, imprecise data sets. The reasons for imprecision in data are as diverse as the applications themselves: in sensor and RFID data, imprecision is due to measurement errors [15, 34]; in information extraction, imprecision comes from the inherent ambiguity in natural-language text [20, 26]; and in business intelligence, imprecision is tolerated because of the high cost of data cleaning [5]. In some applications, such as privacy, it is a requirement that the data be less precise. For example, imprecision is purposely inserted to hide sensitive attributes of individuals so that the data may be published [30]. Imprecise data has no place in traditional, precise database applications like payroll and inventory, and so, current database management systems are not prepared to deal with it. In contrast, the newly emerging applications offer value precisely because they query, search, and aggregate large volumes of imprecise data to find the “diamonds in the dirt”. This wide-variety of new applications points to the need for generic tools to manage imprecise data. In this paper, we survey the state of the art of techniques that handle imprecise data, by modeling it as probabilistic data [2–4,7,12,15,23,27,36]. A probabilistic database management system, or ProbDMS, is a system that stores large volumes of probabilistic data and supports complex queries. A ProbDMS may also need to perform some additional tasks, such as updates or recovery, but these do not differ from those in conventional database management systems and will not be discussed here. The major challenge in a ProbDMS is that it needs both to scale to large data volumes, a core competence of database management systems, and to do probabilistic inference, which is a problem studied in AI. 
While many scalable data management systems exist, probabilistic inference is a hard problem [35], and current systems do not scale to the same extent as data management systems do. To address this challenge, researchers have focused on the specific"
]
}
|
1301.1003
|
2951974970
|
An uncertain database is defined as a relational database in which primary keys need not be satisfied. A repair (or possible world) of such a database is obtained by selecting a maximal number of tuples without ever selecting two distinct tuples with the same primary key value. For a Boolean query q, the decision problem CERTAINTY(q) takes as input an uncertain database db and asks whether q is satisfied by every repair of db. Our main focus is on acyclic Boolean conjunctive queries without self-join. Previous work has introduced the notion of (directed) attack graph of such queries, and has proved that CERTAINTY(q) is first-order expressible if and only if the attack graph of q is acyclic. The current paper investigates the boundary between tractability and intractability of CERTAINTY(q). We first classify cycles in attack graphs as either weak or strong, and then prove, among other things, the following. If the attack graph of a query q contains a strong cycle, then CERTAINTY(q) is coNP-complete. If the attack graph of q contains no strong cycle and every weak cycle of it is terminal (i.e., no edge leads from a vertex in the cycle to a vertex outside the cycle), then CERTAINTY(q) is in P. We then partially address the only remaining open case, i.e., when the attack graph contains some nonterminal cycle and no strong cycle. Finally, we establish a relationship between the complexities of CERTAINTY(q) and evaluating q on probabilistic databases.
|
All aforementioned results assume queries without self-join. For queries @math with self-joins, only fragmentary results about the complexity of @math are known @cite_16 @cite_5 . The extension to unions of conjunctive queries has been studied in @cite_4 .
|
{
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_4"
],
"mid": [
"2062180302",
"",
"2019599098"
],
"abstract": [
"This article deals with the computation of consistent answers to queries on relational databases that violate primary key constraints. A repair of such an inconsistent database is obtained by selecting a maximal number of tuples from each relation without ever selecting two distinct tuples that agree on the primary key. We are interested in the following problem: Given a Boolean conjunctive query q, compute a Boolean first-order (FO) query φ such that for every database db, φ evaluates to true on db if and only if q evaluates to true on every repair of db. Such φ is called a consistent FO rewriting of q. We use novel techniques to characterize classes of queries that have a consistent FO rewriting. In this way, we are able to extend previously known classes and discover new ones. Finally, we use an Ehrenfeucht-Fraisse game to show the non-existence of a consistent FO rewriting for ∃x∃y(R(x, y) ∧ R(y, c)), where c is a constant and the first coordinate of R is the primary key.",
"",
"Research in consistent query answering studies the definition and computation of \"meaningful\" answers to queries posed to inconsistent databases, i.e., databases whose data do not satisfy the integrity constraints (ICs) declared on their schema. Computing consistent answers to conjunctive queries is generally coNP-hard in data complexity, even in the presence of very restricted forms of ICs (single, unary keys). Recent studies on consistent query answering for database schemas containing only key dependencies have analyzed the possibility of identifying classes of queries whose consistent answers can be obtained by a first-order rewriting of the query, which in turn can be easily formulated in SQL and directly evaluated through any relational DBMS. In this paper we study consistent query answering in the presence of key dependencies and exclusion dependencies. We first prove that even in the presence of only exclusion dependencies the problem is coNP-hard in data complexity, and define a general method for consistent answering of conjunctive queries under key and exclusion dependencies, based on the rewriting of the query in Datalog with negation. Then, we identify a subclass of conjunctive queries that can be first-order rewritten in the presence of key and exclusion dependencies, and define an algorithm for computing the first-order rewriting of a query belonging to such a class of queries. Finally, we compare the relative efficiency of the two methods for processing queries in the subclass above mentioned. Experimental results, conducted on a real and large database of the computer science engineering degrees of the University of Rome \"La Sapienza\", clearly show the computational advantage of the first-order based technique."
]
}
|
1301.0775
|
1494753761
|
This research aims at using vehicular ad-hoc networks as infrastructure for an urban cyber-physical system in order to gather data about a city. In this scenario, all nodes are data sources and there is a gateway as the ultimate destination for all packets. Because of the volatility of the network connections and uncertainty of actual node placement, we argue that a broadcast-based protocol is the most adequate solution, despite the high overhead. The Urban Data Collector (UDC) protocol has been proposed which uses a distributed election of the forwarding node among the nodes receiving the packet: nodes that are nearer to the gateway have shorter timers and higher forwarding probabilities. The performance of the UDC protocol has been evaluated with different suppression levels in terms of the amount of collected data from each road segment using NS-3, and our results show that UDC can achieve significantly higher sensing accuracy than other broadcast-based protocols.
|
Existing solutions for VANET sensing either apply on-demand querying for local dissemination @cite_7 , sometimes keeping the data at the location @cite_15 , or rely on delay-tolerant networking and open Wi-Fi access points for sending the data to the Internet backbone @cite_16 . The former are inefficient for real-time monitoring due to the query overhead and the need to access the data globally, and the latter cannot guarantee up-to-date data.
|
{
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_7"
],
"mid": [
"2148063475",
"2170918595",
"2125486908"
],
"abstract": [
"Recent advances in wireless inter-vehicle communication systems enable the establishment of vehicular ad-hoc networks (VANET) and create significant opportunities for the deployment of a wide variety of applications and services to vehicles. In this work, we investigate the problem of developing services that can provide car drivers with time-sensitive information about traffic conditions and roadside facilities. We introduce the vehicular information transfer protocol (VITP), a location- aware, application-layer, communication protocol designed to support a distributed service infrastructure over vehicular ad- hoc networks. We describe the key design concepts of the VITP protocol and infrastructure. We provide an extensive simulation study of VITP performance on large-scale vehicular networks under realistic highway and city traffic conditions. Our results demonstrate the viability and effectiveness of VITP in providing location-aware services over VANETs.",
"CarTel is a mobile sensor computing system designed to collect, process, deliver, and visualize data from sensors located on mobile units such as automobiles. A CarTel node is a mobile embedded computer coupled to a set of sensors. Each node gathers and processes sensor readings locally before delivering them to a central portal, where the data is stored in a database for further analysis and visualization. In the automotive context, a variety of on-board and external sensors collect data as users drive. CarTel provides a simple query-oriented programming interface, handles large amounts of heterogeneous data from sensors, and handles intermittent and variable network connectivity. CarTel nodes rely primarily on opportunistic wireless (e.g., Wi-Fi, Bluetooth) connectivity to the Internet, or to \"data mules\" such as other CarTel nodes, mobile phone flash memories, or USB keys, to communicate with the portal. CarTel applications run on the portal, using a delay-tolerant continuous query processor, ICEDB, to specify how the mobile nodes should summarize, filter, and dynamically prioritize data. The portal and the mobile nodes use a delay-tolerant network stack, CafNet, to communicate. CarTel has been deployed on six cars, running on a small scale in Boston and Seattle for over a year. It has been used to analyze commute times, analyze metropolitan Wi-Fi deployments, and for automotive diagnostics.",
"Recent advances in vehicular communications make it possible to realize vehicular sensor networks, i.e., collaborative environments where mobile vehicles that are equipped with sensors of different nature (from toxic detectors to still video cameras) interwork to implement monitoring applications. In particular, there is an increasing interest in proactive urban monitoring, where vehicles continuously sense events from urban streets, autonomously process sensed data (e.g., recognizing license plates), and, possibly, route messages to vehicles in their vicinity to achieve a common goal (e.g., to allow police agents to track the movements of specified cars). This challenging environment requires novel solutions with respect to those of more-traditional wireless sensor nodes. In fact, unlike conventional sensor nodes, vehicles exhibit constrained mobility, have no strict limits on processing power and storage capabilities, and host sensors that may generate sheer amounts of data, thus making already-known solutions for sensor network data reporting inapplicable. This paper describes MobEyes, which is an effective middleware that was specifically designed for proactive urban monitoring and exploits node mobility to opportunistically diffuse sensed data summaries among neighbor vehicles and to create a low-cost index to query monitoring data. We have thoroughly validated the original MobEyes protocols and demonstrated their effectiveness in terms of indexing completeness, harvesting time, and overhead. 
In particular, this paper includes (1) analytic models for MobEyes protocol performance and their consistency with simulation-based results, (2) evaluation of performance as a function of vehicle mobility, (3) effects of concurrent exploitation of multiple harvesting agents with single multihop communications, (4) evaluation of network overhead and overall system stability, and (5) performance validation of MobEyes in a challenging urban tracking application where the police reconstruct the movements of a suspicious driver, e.g., by specifying the license number of a car."
]
}
|
1301.0775
|
1494753761
|
This research aims at using vehicular ad-hoc networks as infrastructure for an urban cyber-physical system in order to gather data about a city. In this scenario, all nodes are data sources and there is a gateway as the ultimate destination for all packets. Because of the volatility of the network connections and uncertainty of actual node placement, we argue that a broadcast-based protocol is the most adequate solution, despite the high overhead. The Urban Data Collector (UDC) protocol has been proposed which uses a distributed election of the forwarding node among the nodes receiving the packet: nodes that are nearer to the gateway have shorter timers and higher forwarding probabilities. The performance of the UDC protocol has been evaluated with different suppression levels in terms of the amount of collected data from each road segment using NS-3, and our results show that UDC can achieve significantly higher sensing accuracy than other broadcast-based protocols.
|
On the network level, a wide range of broadcast-based vehicle-to-vehicle unicast routing protocols has been proposed; we categorize them into two classes: sender-oriented and receiver-oriented. Sender-oriented protocols locally exchange beacons to obtain information about their neighbors, which enables the current forwarder to select the next forwarder node among them. This is the approach taken by protocols like Greedy Perimeter Stateless Routing (GPSR) @cite_17 , the Position-based Multi-hop Broadcast Protocol (PMBP) @cite_4 , Emergency Message Dissemination for Vehicular environments (EMDV) @cite_5 , and the Cross Layer Broadcast Protocol (CLBP) @cite_2 . Sender-oriented protocols are inefficient for data gathering in urban areas due to the high overhead of continuously exchanging beacons, and because additional mechanisms must be added to verify at each hop whether the chosen forwarder actually received and forwarded the packet. Furthermore, explicitly choosing a single forwarder is not adequate for urban data collection, as we discussed in .
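To make the sender-oriented idea concrete, a minimal sketch of GPSR-style greedy next-hop selection is given below. The function name `greedy_next_hop`, the planar (x, y) coordinates, and the recovery behavior are illustrative assumptions, not the exact mechanics of the cited protocols.

```python
import math

def greedy_next_hop(current, neighbors, dest):
    """GPSR-style greedy mode: forward to the neighbor geographically
    closest to the destination; return None in a local minimum, i.e.,
    when no known neighbor is closer to the destination than the
    current node (full GPSR would then switch to perimeter mode)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is None or dist(best, dest) >= dist(current, dest):
        return None
    return best
```

The sketch also shows why such protocols need up-to-date beacons: `neighbors` must reflect the actual current positions, which is exactly the overhead criticized above.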
|
{
"cite_N": [
"@cite_2",
"@cite_5",
"@cite_4",
"@cite_17"
],
"mid": [
"2161605375",
"2110505033",
"2111643957",
""
],
"abstract": [
"In order to achieve cooperative driving in vehicular ad hoc networks (VANET), broadcast transmission is usually used for disseminating safety-related information among vehicles. Nevertheless, broadcast over multihop wireless networks poses many challenges due to link unreliability, hidden terminal, message redundancy, and broadcast storm, etc., which greatly degrade the network performance. In this paper, we propose a cross layer broadcast protocol (CLBP) for multihop emergency message dissemination in inter-vehicle communication systems. We first design a novel composite relaying metric for relaying node selection, by jointly considering the geographical locations, physical layer channel conditions, moving velocities of vehicles. Based on the designed metric, we then propose a distributed relay selection scheme to guarantee that a unique relay is selected to reliably forward the emergency message in the desired propagation direction.We further apply IEEE802.11e EDCA to guarantee QoS performance of safety related services. Finally, simulation results are given to demonstrate that CLBP can not only minimize the broadcast message redundancy, but also quickly and reliably disseminate emergency messages in a VANET.",
"Direct radio-based vehicle-to-vehicle communication can help prevent accidents by providing accurate and up-to-date local status and hazard information to the driver. In this paper, we assume that two types of messages are used for traffic safety-related communication: 1) Periodic messages (“beacons”) that are sent by all vehicles to inform their neighbors about their current status (i.e., position) and 2) event-driven messages that are sent whenever a hazard has been detected. In IEEE 802.11 distributed-coordination-function-based vehicular networks, interferences and packet collisions can lead to the failure of the reception of safety-critical information, in particular when the beaconing load leads to an almost-saturated channel, as it could easily happen in many critical vehicular traffic conditions. In this paper, we demonstrate the importance of transmit power control to avoid saturated channel conditions and ensure the best use of the channel for safety-related purposes. We propose a distributed transmit power control method based on a strict fairness criterion, i.e., distributed fair power adjustment for vehicular environments (D-FPAV), to control the load of periodic messages on the channel. The benefits are twofold: 1) The bandwidth is made available for higher priority data like dissemination of warnings, and 2) beacons from different vehicles are treated with “equal rights,” and therefore, the best possible reception under the available bandwidth constraints is ensured. We formally prove the fairness of the proposed approach. Then, we make use of the ns-2 simulator that was significantly enhanced by realistic highway mobility patterns, improved radio propagation, receiver models, and the IEEE 802.11p specifications to show the beneficial impact of D-FPAV for safety-related communications. 
We finally put forward a method, i.e., emergency message dissemination for vehicular environments (EMDV), for fast and effective multihop information dissemination of event-driven messages and show that EMDV benefits of the beaconing load control provided by D-FPAV with respect to both probability of reception and latency.",
"Broadcast is an effective approach for safety-related information exchange to achieve cooperative driving in vehicular ad hoc network (VANET). However, it suffers from several fundamental challenges such as message redundancy, link unreliability, hidden terminal and broadcast storm, etc., which degrade the efficiency of the network greatly. To address these issues, this paper proposes a position based multi-hop broadcast protocol (PMBP) for emergency message dissemination in inter-vehicle communications. By adopting a cross-layer approach considering both the MAC and Network layers in the proposed scheme, the candidate vehicle for forwarding an emergency message is selected according to its distance from the source vehicle in the message propagation direction. Analysis and simulation results show that PMBP can not only quickly deliver emergency messages, but also reduce broadcast message redundancy significantly.",
""
]
}
|
1301.0775
|
1494753761
|
This research aims at using vehicular ad-hoc networks as infrastructure for an urban cyber-physical system in order to gather data about a city. In this scenario, all nodes are data sources and there is a gateway as the ultimate destination for all packets. Because of the volatility of the network connections and uncertainty of actual node placement, we argue that a broadcast-based protocol is the most adequate solution, despite the high overhead. The Urban Data Collector (UDC) protocol has been proposed which uses a distributed election of the forwarding node among the nodes receiving the packet: nodes that are nearer to the gateway have shorter timers and higher forwarding probabilities. The performance of the UDC protocol has been evaluated with different suppression levels in terms of the amount of collected data from each road segment using NS-3, and our results show that UDC can achieve significantly higher sensing accuracy than other broadcast-based protocols.
|
Receiver-oriented protocols forward data packets without prior message exchange and employ mechanisms to reduce the network overhead. Protocols like Contention-Based Forwarding (CBF) @cite_14 and Efficient Directional Broadcast (EDB) @cite_6 use contention-based forwarding without handshaking: upon receiving a packet, every node starts a timer whose length decreases with the node's geographic progress towards the gateway. Both, however, still require further messages per packet and hop: CBF uses request-to-forward (RTF) and clear-to-forward (CTF) messages to suppress duplicate packets, and EDB sends an ACK message before forwarding.
|
{
"cite_N": [
"@cite_14",
"@cite_6"
],
"mid": [
"2141661023",
"1986905072"
],
"abstract": [
"Existing position-based unicast routing algorithms which forward packets in the geographic direction of the destination require that the forwarding node knows the positions of all neighbors in its transmission range. This information on direct neighbors is gained by observing beacon messages each node sends out periodically. Due to mobility, the information that a node receives about its neighbors becomes outdated, leading either to a significant decrease in the packet delivery rate or to a steep increase in load on the wireless channel as node mobility increases. In this paper, we propose a mechanism to perform position-based unicast forwarding without the help of beacons. In our contention-based forwarding scheme (CBF) the next hop is selected through a distributed contention process based on the actual positions of all current neighbors. For the contention process, CBF makes use of biased timers. To avoid packet duplication, the first node that is selected suppresses the selection of further nodes. We propose three suppression strategies which vary with respect to forwarding efficiency and suppression characteristics. We analyze the behavior of CBF with all three suppression strategies and compare it to an existing greedy position-based routing approach by means of simulation with ns-2. Our results show that CBF significantly reduces the load on the wireless channel required to achieve a specific delivery rate compared to the load a beacon-based greedy forwarding strategy generates.",
"The topology of a vehicular ad hoc network (VANET) changes rapidly due to the high-speed movement of vehicles, so traditional mobile ad hoc network (MANET) broadcast protocols may not work efficiently in VANETs. This paper proposes a distance-based broadcast protocol called Efficient Directional Broadcast (EDB) for VANETs using directional antennas. In EDB, only the furthest receiver is responsible for forwarding the packet in the opposite direction from where the packet arrived. Besides, a directional repeater located at an intersection helps disseminate the packets to the vehicles on other road segments in different directions. We evaluate the performance of EDB based on a real mobility model generated by live GPS data of taxis in the city of Shanghai. The results show that EDB is effective and favorable for VANETs."
]
}
|
1301.0775
|
1494753761
|
This research aims at using vehicular ad-hoc networks as infra-structure for an urban cyber-physical system in order to gather data about a city. In this scenario, all nodes are data sources and there is a gateway as ultimate destination for all packets. Because of the volatility of the network connections and uncertainty of actual node placement, we argue that a broadcast-based protocol is the most adequate solution, despite the high overhead. The Urban Data Collector (UDC) protocol has been proposed which uses a distributed election of the forwarding node among the nodes receiving the packet: nodes that are nearer to the gateway have shorter timers and a higher forwarding probabilities. The performance of the UDC protocol has been evaluated with different suppression levels in terms of the amount of collected data from each road segment using NS-3, and our results show that UDC can achieve significantly higher sensing accuracy than to other broadcast-based protocols.
|
Finally, @cite_10 proposed three broadcast-based protocols that use basic per-hop forwarding and suppression techniques for data dissemination: Weighted p-Persistence, Slotted 1-Persistence and Slotted p-Persistence broadcasting. We evaluated the performance of these protocols for data collection over VANETs @cite_19 , and Slotted 1-Persistence showed better performance than the other two. Hence, we used it for comparison and introduce it here. Slotted 1-Persistence uses slotted forwarding with a rebroadcast probability of 1: each node @math , upon receiving a packet from node @math , re-broadcasts the packet at timeslot @math if it receives the packet for the first time and does not receive any duplicates before the assigned timeslot; otherwise, it discards the packet. @math is calculated by @math , where @math is the estimated one-hop delay and @math is the assigned slot number, which is computed from @math , the predetermined number of slots, @math , the distance between nodes @math and @math , and @math , the average communication range.
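The slot assignment can be sketched as follows. Since the exact formulas are elided above, the linear distance-to-slot mapping in `assign_slot` is an assumption, chosen so that nodes farther from the sender (i.e., with more progress per hop) wait for fewer slots, matching the description.

```python
import math

def assign_slot(d_ij: float, R: float, n_slots: int) -> int:
    """Map the sender-receiver distance d_ij to a slot index in
    [0, n_slots - 1]: receivers farther from the sender rebroadcast
    earlier. The linear mapping below is an illustrative assumption."""
    frac = 1.0 - min(d_ij, R) / R          # 0 at the range edge, 1 at the sender
    return min(n_slots - 1, math.floor(n_slots * frac))

def waiting_time(d_ij: float, R: float, n_slots: int, tau: float) -> float:
    """Waiting time = assigned slot * tau (estimated one-hop delay).
    The node rebroadcasts at this time unless a duplicate is heard first."""
    return assign_slot(d_ij, R, n_slots) * tau
```

A node at the edge of the communication range thus rebroadcasts immediately (slot 0), while nodes close to the sender wait several slots and are likely to be suppressed by an earlier duplicate.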
|
{
"cite_N": [
"@cite_19",
"@cite_10"
],
"mid": [
"96948785",
"2115009661"
],
"abstract": [
"We propose to use Vehicular ad hoc networks (VANET) as the infrastructure for an urban cyber-physical system for gathering up-to-date data about a city, like traffic conditions or environmental parameters. In this context, it is critical to design a data collection protocol that enables retrieving the data from the vehicles in almost real-time in an efficient way for urban scenarios. We propose Backoff-based Per-hop Forwarding (BPF), a broadcast-based receiver-oriented protocol that uses the destination location information to select the forwarding order among the nodes receiving the packet. BPF does not require nodes to exchange periodic messages with their neighbors communicating their locations, keeping the management message overhead low. It uses geographic information about the final destination node in the header of each data packet to route it on a hop-by-hop basis. It takes advantage of redundant forwarding to increase packet delivery to a destination, which is more critical in an urban scenario than in a highway, where the road topology does not represent a challenge for forwarding. We evaluate the performance of the BPF protocol using ns-3 and a Manhattan grid topology and compare it with well-known broadcast suppression techniques. Our results show that BPF achieves significantly higher packet delivery rates at a reduced redundancy cost.",
"Several multihop applications developed for vehicular ad hoc networks use broadcast as a means to either discover nearby neighbors or propagate useful traffic information to other vehicles located within a certain geographical area. However, the conventional broadcast mechanism may lead to the so-called broadcast storm problem, a scenario in which there is a high level of contention and collisions at the link layer due to an excessive number of broadcast packets. While this is a well-known problem in mobile ad hoc wireless networks, only a few studies have addressed this issue in the VANET context, where mobile hosts move along the roads in a certain limited set of directions as opposed to randomly moving in arbitrary directions within a bounded area. Unlike other existing works, we quantify the impact of broadcast storms in VANETs in terms of message delay and packet loss rate in addition to conventional metrics such as message reachability and overhead. Given that VANET applications are currently confined to using the DSRC protocol at the data link layer, we propose three probabilistic and timer-based broadcast suppression techniques: weighted p-persistence, slotted 1-persistence, and slotted p-persistence schemes, to be used at the network layer. Our simulation results show that the proposed schemes can significantly reduce contention at the MAC layer by achieving up to 70 percent reduction in packet loss rate while keeping end-to-end delay at acceptable levels for most VANET applications."
]
}
|
1301.1294
|
2950698614
|
Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76 , 80 , and 85 reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.
|
FEC in connection with multiple paths and/or multiple servers is a well-investigated topic in the literature @cite_14 @cite_2 @cite_10 @cite_3 . However, very little attention has been devoted to the queueing delays. FEC in the context of network coding or coded scheduling has also been a popular topic from the perspectives of throughput (or network utility) maximization and throughput vs. service delay trade-offs @cite_1 @cite_18 @cite_12 @cite_4 . Although some of these works incorporate queuing delay analysis, the treatment is largely for broadcast wireless channels with quite different system characteristics and constraints. FEC has also been extensively studied in the context of distributed storage from the standpoint of achieving high durability and availability while attaining high storage efficiency @cite_19 @cite_0 @cite_6 @cite_20 .
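The latency benefit of combining FEC with parallel connections, which the abstract above quantifies on Amazon S3, can be illustrated with a small Monte Carlo sketch: a request completes when the fastest k of n parallel chunk reads finish, i.e., its latency is the k-th order statistic of the per-chunk service times. The exponential service-time model and the (n, k) = (4, 2) code are illustrative assumptions, not measured S3 behavior.

```python
import random

def fetch_latency(n: int, k: int, chunk_mean: float, rng: random.Random) -> float:
    """One request: issue n parallel chunk reads with i.i.d. exponential
    service times; the request finishes when the fastest k complete."""
    times = sorted(rng.expovariate(1.0 / chunk_mean) for _ in range(n))
    return times[k - 1]

def percentile(samples, p):
    s = sorted(samples)
    return s[int(p * (len(s) - 1))]

rng = random.Random(42)
# Baseline: the whole object fetched in a single read (mean service time 1.0).
plain = [fetch_latency(1, 1, 1.0, rng) for _ in range(20000)]
# (n, k) = (4, 2) erasure code: chunks are half the object (mean 0.5),
# any 2 of the 4 coded chunks reconstruct it, at 2x storage overhead.
coded = [fetch_latency(4, 2, 0.5, rng) for _ in range(20000)]
```

Under these assumptions the coded reads have both a lower mean and a much lower 99th percentile, at the cost of 2x storage and 4x the per-object request load, which is exactly the queueing trade-off the paper analyzes.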
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_20",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"2164817605",
"",
"2097651478",
"2065185800",
"2110555585",
"133843592",
"2105185344",
"",
"",
"2056386972",
""
],
"abstract": [
"",
In this paper, we propose a novel transport protocol that effectively utilizes available bandwidth and diversity gains provided by heterogeneous, highly lossy paths. Our Multi-Path LOss-Tolerant (MPLOT) protocol can be used to provide significant gains in the goodput of wireless mesh networks, subject to bursty, correlated losses with average loss-rates as high as 50%, and random outage events. MPLOT makes intelligent use of erasure codes to guard against packets losses, and a Hybrid-ARQ FEC scheme to reduce packet recovery latency, where the redundancy is adaptively provisioned into both proactive and reactive FECs. MPLOT uses dynamic packet mapping based on current path characteristics, and does not require packets to be delivered in sequence to ensure reliability. We present a theoretical analysis of the different design choices of MPLOT and show that MPLOT makes an optimal trade-off between goodput and delay constraints. We test MPLOT, through simulations, under a variety of test scenarios and show that it effectively exploits path diversity in addition to aggregating path bandwidths. We also show that MPLOT is fair to single-path protocols like TCP-SACK.
"",
"In an unreliable packet network setting, we study the performance gains of optimal transmission strategies in the presence and absence of coding capability at the transmitter, where performance is measured in delay and throughput. Although our results apply to a large class of coding strategies including maximum-distance separable (MDS) and Digital Fountain codes, we use random network codes in our discussions because these codes have a greater applicability for complex network topologies. To that end, after introducing a key setting in which performance analysis and comparison can be carried out, we provide closed-form as well as asymptotic expressions for the delay performance with and without network coding. We show that the network coding capability can lead to arbitrarily better delay performance as the system parameters scale when compared to traditional transmission strategies without coding. We further develop a joint scheduling and random-access scheme to extend our results to general wireless network topologies.",
"BitTorrent is probably the most famous file-sharing protocol used in the Internet currently. It represents more than half of the P2P traffic. Various applications are using BitTorrent-like protocols to deliver the resource and implement techniques to perform a reliable data transmission. Forward Error Correction (FEC) is an efficient mechanism used for this goal. This paper proposes a performance evaluation of FEC implemented on BitTorrent protocol. A simulation framework has been developed to evaluate the improvement depending on many factors like the leeches seeds number and capacities, the network nature (homogeneous or heterogeneous), the resource size, and the FEC redundancy ratio. The completion time metric shows that FEC is a method that accelerates the data access in some specific network configurations. On the contrary, this technique can also disrupt the system in some cases since it introduces an overhead.",
"Distributed storage systems provide large-scale reliable data storage by storing a certain degree of redundancy in a decentralized fashion on a group of storage nodes. To recover from data losses due to the instability of these nodes, whenever a node leaves the system, additional redundancy should be regenerated to compensate such losses. In this context, the general objective is to minimize the volume of actual network traffic caused by such regenerations. A class of codes, called regenerating codes, has been proposed to achieve an optimal trade-off curve between the amount of storage space required for storing redundancy and the network traffic during the regeneration. In this paper, we jointly consider the choices of regenerating codes and network topologies. We propose a new design, referred to as RCTREE, that combines the advantage of regenerating codes with a tree-structured regeneration topology. Our focus is the efficient utilization of network links, in addition to the reduction of the regeneration traffic. With the extensive analysis and quantitative evaluations, we show that RCTREE is able to achieve a both fast and stable regeneration, even with departures of storage nodes during the regeneration.",
"",
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.",
"",
"",
"Mirror sites enable client requests to be serviced by any of a number of servers, reducing load at individual servers and dispersing network load. Typically, a client requests service from a single mirror site. We consider enabling a client to access a file from multiple mirror sites in parallel to speed up the download. To eliminate complex client-server negotiations that a straightforward implementation of this approach would require, we develop a feedback-free protocol based on erasure codes. We demonstrate that a protocol using fast Tornado codes can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network. This scalable solution extends naturally to allow multiple clients to access data from multiple mirror sites simultaneously. The approach applies naturally to wireless networks and satellite networks as well.",
""
]
}
|
1301.1294
|
2950698614
|
Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.
|
Two papers @cite_16 @cite_17 concurrent to ours conducted a theoretical study of cloud storage systems that use FEC in a fashion similar to this paper. Both papers rely on the assumption of exponentially distributed task delays, which hardly captures reality. As a result, some of their theoretical conclusions are overly optimistic and cannot be applied in practice. For example, the authors of @cite_17 prove that using larger code lengths always improves delay without reducing system capacity, contradicting the simulation results based on real-world measurements presented in this paper.
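Under the exponential-delay assumption these papers adopt, the time to collect the fastest k of n parallel coded reads is the k-th order statistic of n i.i.d. Exp(μ) variables, whose mean has the closed form E[T] = (1/μ) Σ_{j=n−k+1}^{n} 1/j; this is what makes longer codes look uniformly beneficial on paper. A quick sketch with hypothetical parameters, ignoring the extra queueing load that chunking induces:

```python
def expected_k_of_n_delay(n: int, k: int, mu: float = 1.0) -> float:
    """Mean time until k of n i.i.d. Exp(mu) tasks complete
    (k-th order statistic of n exponentials)."""
    return sum(1.0 / (j * mu) for j in range(n - k + 1, n + 1))

plain = expected_k_of_n_delay(1, 1)   # one uncoded read: 1/mu = 1.0
coded = expected_k_of_n_delay(4, 2)   # wait for 2 of 4 chunks: 1/4 + 1/3
```

Here coded ≈ 0.583 < plain = 1.0, i.e., under exponential delays coding can only help — precisely the conclusion that breaks down for measured, non-exponential service times.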
|
{
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"2166114772",
"2051263915"
],
"abstract": [
In this paper, we quantify how much codes can reduce the data retrieval latency in storage systems. By combining a simple linear code with a novel request scheduling algorithm, which we call Blocking-one Scheduling (BoS), we show analytically that it is possible to use codes to reduce data retrieval delay by up to 17% over currently popular replication-based strategies. Although in this work we focus on a simplified setting where the storage system stores a single content, the methodology developed can be applied to more general settings with multiple contents. The results also offer insightful guidance to the design of storage systems in data centers and content distribution networks.
"In order to scale economically, data centers are increasingly evolving their data storage methods from the use of simple data replication to the use of more powerful erasure codes, which provide the same level of reliability as replication but at a significantly lower storage cost. In particular, it is well known that Maximum-Distance-Separable (MDS) codes, such as Reed-Solomon codes, provide the maximum storage efficiency. While the use of codes for providing improved reliability in archival storage systems, where the data is less frequently accessed (or so-called \"cold data\"), is well understood, the role of codes in the storage of more frequently accessed and active \"hot data\", where latency is the key metric, is less clear. In this paper, we study data storage systems based on MDS codes through the lens of queueing theory, and term this the \"MDS queue.\" We analytically characterize the (average) latency performance of MDS queues, for which we present insightful scheduling policies that form upper and lower bounds to performance, and are observed to be quite tight. Extensive simulations are also provided and used to validate our theoretical analysis. We also employ the framework of the MDS queue to analyse different methods of performing so-called degraded reads (reading of partial data) in distributed data storage."
]
}
|
1301.1294
|
2950698614
|
Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2 Mbyte files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper we focus on analyzing the delay performance when chunking, FEC, and parallel connections are used together. Based on this analysis, we develop load adaptive algorithms that can pick the best code rate on a per request basis by using off-line computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog based solutions achieve better delay performance at higher percentile values than the greedy solution.
|
Another set of works closely related to ours looks directly into the delay performance of storage clouds @cite_9 @cite_7 . The measurement results and interim conclusions on Amazon S3 in @cite_9 motivated our work. That paper presents the throughput-delay tradeoffs in service times as object sizes vary, establishes the skewness and long tails of the delay distribution, and recommends cancelling long-pending jobs and issuing fresh requests instead. Although this suggestion works well against long tails, it would not lead to much delay improvement below the 99th percentile. @cite_8 , on the other hand, focuses more closely on the throughput vs. service delay tradeoff and devises a data batching scheme: based on the observed congestion, the authors increase or reduce the batch size. Thus, at high congestion a larger batch size is used to improve throughput, while at low congestion a smaller batch size is adopted to reduce delay. The chunk size in our work is similar to the batch size considered in @cite_8 , and how to combine these complementary ideas remains future work.
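The congestion-driven batching idea can be caricatured in a few lines (an illustrative AIMD-style toy, not Stout's actual algorithm from @cite_8 ):

```python
class BatchSizer:
    """Toy congestion-driven batch sizing (illustrative only): grow the
    batch when requests queue up, shrink it when the store keeps up."""
    def __init__(self, lo=1, hi=256):
        self.size, self.lo, self.hi = lo, lo, hi

    def update(self, queued_requests: int) -> int:
        if queued_requests > self.size:       # congestion: favor throughput
            self.size = min(self.hi, self.size * 2)
        else:                                 # light load: favor latency
            self.size = max(self.lo, self.size - 1)
        return self.size
```

Starting from a batch of 1, persistent backlog doubles the batch size (throughput mode), while an empty queue shrinks it back toward 1 (latency mode).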
|
{
"cite_N": [
"@cite_9",
"@cite_7",
"@cite_8"
],
"mid": [
"2289411471",
"",
"1517941185"
],
"abstract": [
Amazon.com’s Elastic Compute Cloud (EC2), Simple Storage Service (S3) and Simple Queue Service (SQS) offer enterprise-class computing, storage and coordination facilities to any organization or individual in the world with a valid credit card. This paper details our experience working with these commodity grid computing services between November 2006 and May 2007, including an analysis of the overall system’s API and ease-of-use; an analysis of EC2’s management and security facilities; an end-to-end performance analysis of S3’s throughput and latency as observed from Amazon’s EC2 cluster and other locations on the Internet; and an analysis of the SQS operation and performance. We conclude with a report of our experience moving a large-scale research application from dedicated hardware to the Amazon offering. We find that this collection of Amazon Web Services (AWS) has great promise but are hobbled by service consistency problems, the lack of a Service Level Agreement (SLA), and a problematic Web Services Licensing Agreement (WSLA).
"",
"Many of today's applications are delivered as scalable, multi-tier services deployed in large data centers. These services frequently leverage shared, scale-out, key-value storage layers that can deliver low latency under light workloads, but may exhibit significant queuing delay and even dropped requests under high load. Stout is a system that helps these applications adapt to variation in storage-layer performance by treating scalable key-value storage as a shared resource requiring congestion control. Under light workloads, applications using Stout send requests to the store immediately, minimizing delay. Under heavy workloads, Stout automatically batches the application's requests together before sending them to the store, resulting in higher throughput and preventing queuing delay. We show experimentally that Stout's adaptation algorithm converges to an appropriate batch size for workloads that require the batch size to vary by over two orders of magnitude. Compared to a non-adaptive strategy optimized for throughput, Stout delivers over 34× lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, Stout can scale to over 3× as many requests."
]
}
|
1301.1085
|
2952178629
|
Internet of Things (IoT) will create a cyberphysical world where all the things around us are connected to the Internet, sense and produce "big data" that has to be stored, processed and communicated with minimum human intervention. With the ever increasing emergence of new sensors, interfaces and mobile devices, the grand challenge is to keep up with this race in developing software drivers and wrappers for IoT things. In this paper, we examine the approaches that automate the process of developing middleware drivers wrappers for the IoT things. We propose ASCM4GSN architecture to address this challenge efficiently and effectively. We demonstrate the proposed approach using Global Sensor Network (GSN) middleware which exemplifies a cluster of data streaming engines. The ASCM4GSN architecture significantly speeds up the wrapper development and sensor configuration process as demonstrated for Android mobile phone based sensors as well as for Sun SPOT sensors.
|
The Californium (Cf) CoAP framework @cite_10 proposes an architecture to solve the problem of connecting heterogeneous devices from different manufacturers, with diverse functionalities, to the Internet of Things. Cf @cite_10 puts a thin server in front of the device to act as a proxy. The thin server provides only a low-level API to the elementary functionality of a device. Client applications or an IoT middleware system can communicate with the device via the thin server using a RESTful API, in which all functionality is encoded as REST resources.
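A thin server in this design exposes only elementary device functions as REST resources, leaving all application logic to the client. A minimal sketch of such a resource table (the endpoint paths and device functions are hypothetical, not part of the Cf API):

```python
# Hypothetical thin-server resource table: each REST (method, path) pair
# maps to exactly one elementary device function; no application logic here.
def make_thin_server(read_temp, toggle_led):
    resources = {
        ("GET", "/sensors/temperature"): lambda: {"value": read_temp()},
        ("POST", "/actuators/led"):      lambda: {"ok": toggle_led()},
    }
    def handle(method, path):
        handler = resources.get((method, path))
        return handler() if handler else {"error": 404}
    return handle
```

A client (or an IoT middleware) then drives the device purely through these resources, e.g. `handle("GET", "/sensors/temperature")`.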
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2065553841"
],
"abstract": [
"Unlike traditional networked embedded systems, the Internet of Things interconnects heterogeneous devices from various manufacturers with diverse functionalities. To foster the emergence of novel applications, this vast infrastructure requires a common application layer. As a single global standard for all device types and application domains is impracticable, we propose an architecture where the infrastructure is agnostic of applications and application development is fully decoupled from the embedded domain. In our design, the application logic of devices is running on application servers, while thin servers embedded into devices export only their elementary functionality using REST resources. In this paper, we present our design goals and preliminary results of this approach, featuring the Californium (Cf) CoAP framework."
]
}
|
1301.1085
|
2952178629
|
Internet of Things (IoT) will create a cyberphysical world where all the things around us are connected to the Internet, sense and produce "big data" that has to be stored, processed and communicated with minimum human intervention. With the ever increasing emergence of new sensors, interfaces and mobile devices, the grand challenge is to keep up with this race in developing software drivers and wrappers for IoT things. In this paper, we examine the approaches that automate the process of developing middleware drivers wrappers for the IoT things. We propose ASCM4GSN architecture to address this challenge efficiently and effectively. We demonstrate the proposed approach using Global Sensor Network (GSN) middleware which exemplifies a cluster of data streaming engines. The ASCM4GSN architecture significantly speeds up the wrapper development and sensor configuration process as demonstrated for Android mobile phone based sensors as well as for Sun SPOT sensors.
|
Web Services Gateways @cite_27 is an approach based on Model Driven Architecture (MDA) and the Devices Profile for Web Services (DPWS). The focus is on connecting industrial devices, which often have lifetimes of more than 40 years, to client applications in the IoT paradigm. The authors have developed gateways comprising web services that provide interfaces for accessing the devices. The web services are generated automatically from predefined models.
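The model-driven generation step can be pictured as rendering a service stub from a device model (a toy stand-in for the paper's DPWS code generation; all names and the model schema are hypothetical):

```python
def generate_wrapper(model: dict) -> str:
    """Render a minimal service-class stub from a device model dict
    (illustrative substitute for template-based web-service generation)."""
    ops = "\n".join(
        f"    def {op}(self): ...  # forwards to device endpoint '{ep}'"
        for op, ep in model["operations"].items())
    return f"class {model['name']}Service:\n{ops}"
```

Calling it on `{"name": "Drill", "operations": {"start": "/cmd/start"}}` yields a `DrillService` stub with a `start` method, mirroring how one gateway service is emitted per modeled device operation.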
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2001784314"
],
"abstract": [
"Wireless Sensing and Radio Identification systems have undergone many innovations during the past years. This has led to short product lifetimes for both software and hardware compared to classical industries. However, especially industries dealing with long-term support of products, e.g. of industrial machinery, and product lifetime of 40+ years may especially profit from an Internet of Things. Motivated by a practical industrial servicing use case this paper shows how we hope to make equally sustainable IoT solutions by employing a model driven software development approach based on code generation for multi-protocol web service gateways."
]
}
|
1301.1085
|
2952178629
|
Internet of Things (IoT) will create a cyberphysical world where all the things around us are connected to the Internet, sense and produce "big data" that has to be stored, processed and communicated with minimum human intervention. With the ever increasing emergence of new sensors, interfaces and mobile devices, the grand challenge is to keep up with this race in developing software drivers and wrappers for IoT things. In this paper, we examine the approaches that automate the process of developing middleware drivers wrappers for the IoT things. We propose ASCM4GSN architecture to address this challenge efficiently and effectively. We demonstrate the proposed approach using Global Sensor Network (GSN) middleware which exemplifies a cluster of data streaming engines. The ASCM4GSN architecture significantly speeds up the wrapper development and sensor configuration process as demonstrated for Android mobile phone based sensors as well as for Sun SPOT sensors.
|
InterX @cite_1 is a smartphone-based service interoperability gateway for heterogeneous smart objects. It employs a mediator gateway that can transform one protocol into another. For example, InterX enables communication between a Bluetooth-based smart object and a UPnP-based smart object via a gateway at runtime.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2100758080"
],
"abstract": [
"Due to the advances in wireless networking and microelectronics, more and more smart objects are equipped with wireless networking as well as high computing capability. Heterogeneity of service protocols inhibits the interoperation among smart objects using different service protocols. Existing works overcome the heterogeneity by assuming that service protocols used are known in advance so that components necessary for interoperations can be deployed before runtime. This assumption prevents existing works from being applied to situations where a user wants to spontaneously configure her smart objects to interoperate with smart objects found nearby without (or with minimal) user intervention. In this paper, we propose a service interoperability gateway that discovers service protocols used in a given environment, finds smart objects using these protocols, and instantiates proxies for interoperability between smart objects."
]
}
|
1301.1085
|
2952178629
|
Internet of Things (IoT) will create a cyberphysical world where all the things around us are connected to the Internet, sense and produce "big data" that has to be stored, processed and communicated with minimum human intervention. With the ever increasing emergence of new sensors, interfaces and mobile devices, the grand challenge is to keep up with this race in developing software drivers and wrappers for IoT things. In this paper, we examine the approaches that automate the process of developing middleware drivers wrappers for the IoT things. We propose ASCM4GSN architecture to address this challenge efficiently and effectively. We demonstrate the proposed approach using Global Sensor Network (GSN) middleware which exemplifies a cluster of data streaming engines. The ASCM4GSN architecture significantly speeds up the wrapper development and sensor configuration process as demonstrated for Android mobile phone based sensors as well as for Sun SPOT sensors.
|
Hydra @cite_7 is an IoT middleware that allows developers to incorporate heterogeneous physical devices into their applications. Interaction between devices and the middleware is enabled through web services. Hydra is based on a semantic Model Driven Architecture for easy programming, similar to the Web Services Gateways @cite_27 approach presented earlier. Even though no performance evaluation of the Hydra middleware is available from a device-connection perspective, @cite_27 raises similar issues related to employing web services as gateways for smart devices.
|
{
"cite_N": [
"@cite_27",
"@cite_7"
],
"mid": [
"2001784314",
"2103912076"
],
"abstract": [
"Wireless Sensing and Radio Identification systems have undergone many innovations during the past years. This has led to short product lifetimes for both software and hardware compared to classical industries. However, especially industries dealing with long-term support of products, e.g. of industrial machinery, and product lifetime of 40+ years may especially profit from an Internet of Things. Motivated by a practical industrial servicing use case this paper shows how we hope to make equally sustainable IoT solutions by employing a model driven software development approach based on code generation for multi-protocol web service gateways.",
"The HYDRA project develops middleware for networked embedded systems that allow developers to create ambient intelligence (AmI) applications based on wireless devices and sensors. Through its unique combination of Service-oriented Architecture (SoA) and a semantic-based Model-Driven Architecture, HYDRA will enable the development of generic services based on open standards."
]
}
|
1301.1085
|
2952178629
|
Internet of Things (IoT) will create a cyberphysical world where all the things around us are connected to the Internet, sense and produce "big data" that has to be stored, processed and communicated with minimum human intervention. With the ever increasing emergence of new sensors, interfaces and mobile devices, the grand challenge is to keep up with this race in developing software drivers and wrappers for IoT things. In this paper, we examine the approaches that automate the process of developing middleware drivers wrappers for the IoT things. We propose ASCM4GSN architecture to address this challenge efficiently and effectively. We demonstrate the proposed approach using Global Sensor Network (GSN) middleware which exemplifies a cluster of data streaming engines. The ASCM4GSN architecture significantly speeds up the wrapper development and sensor configuration process as demonstrated for Android mobile phone based sensors as well as for Sun SPOT sensors.
|
uMiddle @cite_24 is a bridging framework that enables seamless device interaction over diverse middleware platforms; the approach is similar to InterX @cite_1 . uMiddle transforms one protocol into another at runtime and focuses on interoperability between popular protocols such as Bluetooth and UPnP. In contrast, we are more concerned with connecting low-level sensors to IoT middleware solutions. uMiddle identifies three essential requirements for an interoperability middleware platform: transport-level bridging, service-level bridging, and device-level bridging. We have also considered these requirements in our approach.
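The runtime protocol translation that uMiddle and InterX perform can be sketched as a registry of message translators (a hypothetical toy; the protocol and field names are illustrative, not from either system):

```python
class Bridge:
    """Toy service-level bridge: registered translators convert a message
    from one protocol's representation into another's at runtime."""
    def __init__(self):
        self._translators = {}

    def register(self, src: str, dst: str, fn):
        self._translators[(src, dst)] = fn

    def forward(self, src: str, dst: str, message: dict) -> dict:
        return self._translators[(src, dst)](message)

bridge = Bridge()
# e.g. map a Bluetooth-style attribute read onto a UPnP-style action call
bridge.register("bluetooth", "upnp",
                lambda m: {"action": "Get" + m["attr"].capitalize()})
```

Transport- and device-level bridging would sit below and above this layer respectively; the registry only captures the service-level translation step.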
|
{
"cite_N": [
"@cite_24",
"@cite_1"
],
"mid": [
"2142752591",
"2100758080"
],
"abstract": [
"We explore the design patterns and architectural tradeoffs for achieving interoperability across communication middleware platforms, and describe uMiddle, a bridging framework for universal interoperability that enables seamless device interaction over diverse platforms. The proliferation of middleware platforms that cater to specific devices has created isolated islands of devices with no uniform protocol for interoperability across these islands. This void makes it difficult to rapidly prototype pervasive computing applications spanning a wide variety of devices. We discuss the design space of architectural solutions that can address this void, and detail the trade-offs that must be faced when trying to achieve cross-platform interoperability. uMiddle is a framework for achieving such interoperability, and serves as a powerful platform for creating applications that are independent of specific underlying communication platforms.",
"Due to the advances in wireless networking and microelectronics, more and more smart objects are equipped with wireless networking as well as high computing capability. Heterogeneity of service protocols inhibits the interoperation among smart objects using different service protocols. Existing works overcome the heterogeneity by assuming that service protocols used are known in advance so that components necessary for interoperations can be deployed before runtime. This assumption prevents existing works from being applied to situations where a user wants to spontaneously configure her smart objects to interoperate with smart objects found nearby without (or with minimal) user intervention. In this paper, we propose a service interoperability gateway that discovers service protocols used in a given environment, finds smart objects using these protocols, and instantiates proxies for interoperability between smart objects."
]
}
|
1301.0128
|
1944647127
|
We describe arithmetic computations in terms of operations on some well known free algebras (S1S, S2S and ordered rooted binary trees) while emphasizing the common structure present in all them when seen as isomorphic with the set of natural numbers. Constructors and deconstructors seen through an initial algebra semantics are generalized to recursively defined functions obeying similar laws. Implementation using Scala's apply and unapply are discussed together with an application to a realistic arbitrary size arithmetic package written in Scala, based on the free algebra of rooted ordered binary trees, which also supports rational number operations through an extension to signed rationals of the Calkin-Wilf bijection.
|
A very nice functional pearl @cite_3 explored algorithms related to the Calkin-Wilf bijection @cite_13 using Haskell code. While relying on the same underlying mathematics, our Scala-based package works on terms of the AlgT free algebra rather than conventional numbers, and provides a complete package of arbitrary-size rational arithmetic operations that takes advantage of our generalized constructors.
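The Calkin-Wilf bijection underlying both the pearl and our package can be stated in one line: starting from 1, the successor of a positive rational q is 1/(2⌊q⌋ − q + 1) (Newman's formula), and iterating it visits every positive rational exactly once. A quick Python rendering for reference (the Scala package itself operates on AlgT tree terms instead):

```python
from fractions import Fraction
from math import floor

def calkin_wilf(n: int):
    """First n terms of the Calkin-Wilf enumeration of positive rationals."""
    q, out = Fraction(1), []
    for _ in range(n):
        out.append(q)
        q = 1 / (2 * floor(q) - q + 1)   # Newman's successor formula
    return out
```

The first seven terms are 1, 1/2, 2, 1/3, 3/2, 2/3, 3 — each rational appearing in lowest terms, exactly once.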
|
{
"cite_N": [
"@cite_13",
"@cite_3"
],
"mid": [
"1982442868",
"2171866795"
],
"abstract": [
"2. The function values f(n) actually count something nice. In fact, f(n) is the number of ways of writing the integer n as a sum of powers of 2, each power being used at most twice (i.e., once more than the legal limit for binary expansions). For instance, we can write 5 = 4 + 1 = 2 + 2 + 1, so there are two such ways to write 5, and therefore f(5) = 2. Let’s say that f(n) is the number of hyperbinary representations of the integer n.",
"Every lazy functional programmer knows about the following approach to enumerating the positive rationals: generate a two-dimensional matrix (an infinite list of infinite lists), then traverse its finite diagonals (an infinite list of finite lists)."
]
}
|
1212.5959
|
2951497545
|
The continuous growth of electronic commerce has stimulated great interest in studying online consumer behavior. Given the significant growth in online shopping, a better understanding of customers allows better marketing strategies to be designed. While studies of online shopping attitude are widespread in the literature, studies of differences in browsing habits in relation to online shopping are scarce. This research performs a large-scale study of the relationship between the Internet browsing habits of users and their online shopping behavior. Towards this end, we analyze data from 88,637 users who have bought in total more than half a million products from the retailer sites Amazon and Walmart. Our results indicate that even coarse-grained Internet browsing behavior has predictive power in terms of what users will buy online. Furthermore, we discover both surprising (e.g., "expensive products do not come with more effort in terms of purchase") and expected (e.g., "the more loyal a user is to an online shop, the less effort they spend shopping") facts. Given the lack of large-scale studies linking online browsing and online shopping behavior, we believe that this work is of general interest to people working in related areas.
|
One of the early related works to this research was done by (1999). They studied the predictors of online buying behavior of 10,180 people who completed their survey, which included 62 questions about online behavior and attitudes toward the Internet. They reported a "wired lifestyle" for buyers, whose main characteristics are searching for product information on the Internet, receiving a large number of email messages every day, and having Internet access in their offices @cite_21 .
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2069722994"
],
"abstract": [
"Consumers worldwide can shop online 24 hours a day, seven days a week, 365 days a year. Some market sectors, including insurance, financial services, computer hardware and software, travel, books, music, video, flowers, and automobiles, are experiencing rapid growth in online sales. For example, in Jan. 1999, Dell Computer Corp. was selling an average of @math per day. With online sales projected to reach @math 294 billion by 2002, online retailing raises many questions about how to market on the Net."
]
}
|
1212.6225
|
2016379596
|
In this paper, we propose a novel class of Nash problems for Cognitive Radio (CR) networks composed of multiple primary users (PUs) and secondary users (SUs) wherein each SU (player) competes against the others to maximize his own opportunistic throughput by choosing jointly the sensing duration, the detection thresholds, and the vector power allocation over a multichannel link. In addition to power budget constraints, several (deterministic or probabilistic) interference constraints can be accommodated in the proposed general formulation, such as constraints on the maximum individual aggregate (probabilistic) interference tolerable from the PUs. To keep the optimization as decentralized as possible, global interference constraints, when present, are imposed via pricing; the prices are thus additional variables to be optimized. The resulting players' optimization problems are nonconvex and there are price clearance conditions associated with the nonconvex global interference constraints to be satisfied by the equilibria of the game, which make the analysis of the proposed game a challenging task; none of the classical results in the game theory literature can be successfully applied. To deal with the nonconvexity of the game, we introduce a relaxed equilibrium concept - the Quasi-Nash Equilibrium (QNE) - and study its main properties, performance, and connection with local Nash equilibria. Quite interestingly, the proposed game theoretical formulations yield a considerable performance improvement with respect to current centralized and decentralized designs of CR systems, which validates the concept of QNE.
|
In @cite_32 (or @cite_37 @cite_33 ), the sensing time and the transmit power (or the power allocation over multi-channel links @cite_37 @cite_33 ) of one SU were jointly optimized while keeping the detection probability (and thus the decision threshold) fixed at a target value. In @cite_9 @cite_25 , the authors focused on the joint optimization of the power allocation and the equi-false-alarm rate of one SU @cite_9 over multi-channel links, for a fixed sensing time. The case of multiple SUs and one PU was considered in @cite_25 (and more recently in @cite_17 ), under the same assumptions as @cite_9 ; however, no formal analysis of the proposed formulation was provided. Moreover, these papers @cite_9 @cite_25 @cite_17 did not consider the sensing overhead as part of the optimization, leaving it unclear how to optimally choose the sensing time.
|
{
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_9",
"@cite_32",
"@cite_25",
"@cite_17"
],
"mid": [
"2121762046",
"2157564789",
"2163822459",
"2113303683",
"",
"2064008241"
],
"abstract": [
"In this paper, we consider a wideband cognitive radio network (CRN) which can simultaneously sense multiple narrowband channels and thus aggregate the perceived available channels for transmission. We study the problem of designing the optimal spectrum sensing time and power allocation schemes so as to maximize the average achievable throughput of the CRN subject to the constraints of probability of detection and the total transmit power. The optimal sensing time and power allocation strategies are developed under two different total power constraints, namely, instantaneous power constraint and average power constraint. Finally, numerical results show that, under both cases, for a CRN with three 6 MHz channels, if the frame duration is 100 ms and the target probability of detection is 90% for the worst-case signal-to-noise ratio of primary users being -12 dB, -15 dB and -20 dB, respectively, the optimal sensing time is around 6 ms and it is almost insensitive to the total transmit power.",
"Cognitive radio is an emerging technology that aims for efficient spectrum usage by allowing unlicensed (secondary) users to access licensed frequency bands under the condition of protecting the licensed (primary) users from harmful interference. The latter condition constraints the achievable throughput of a cognitive radio network, which should therefore access a wideband spectrum in order to provide reliable and efficient services to its users. In this paper, we study the problem of designing the optimal sensing time and power allocation strategy, in order to maximize the ergodic throughput of a cognitive radio that employs simultaneous multiband detection and operates under two different schemes, namely the wideband sensing-based spectrum sharing (WSSS) and the wideband opportunistic spectrum access (WOSA) scheme. We consider average transmit and interference power constraints for both schemes, in order to effectively protect the primary users from harmful interference, propose two algorithms that acquire the optimal sensing time and power allocation under imperfect spectrum sensing for the two schemes and discuss the effect of the average transmit and interference power constraint on the optimal sensing time. Finally, we provide simulation results to compare the two schemes and validate our theoretical analysis.",
"In cognitive radio (CR) networks, the radio access from opportunistic users depends on their capabilities to detect spectrum holes. Typically, the detection problem has been considered separately from the optimization of the transmission strategy. However, in a CR context, the detection phase has an impact on network efficiency as well as on the undesired interference generated towards licensed users. In this work, we consider the joint optimization of detection thresholds and power allocation across multichannel links, in order to maximize the aggregated opportunistic throughput, under a constraint on the interference generated towards the primary users.",
"In this paper, a new spectrum-sharing model called sensing-based spectrum sharing is proposed for cognitive radio networks. This model consists of two phases: In the first phase, the secondary user (SU) listens to the spectrum allocated to the primary user (PU) to detect the state of the PU; in the second phase, the SU adapts its transit power based on the sensing results. If the PU is inactive, the SU allocates the transmit power based on its own benefit. However, if the PU is active, the interference power constraint is imposed to protect the PU. Under this new model, the evaluation of the ergodic capacity of the SU is formulated as an optimization problem over the transmit power and the sensing time. Due to the complexity of this problem, two simplified versions, which are referred to as the perfect sensing case and the imperfect sensing case, are studied in this paper. For the perfect sensing case, the Lagrange dual decomposition is applied to derive the optimal power allocation policy to achieve the ergodic capacity. For the imperfect sensing case, an iterative algorithm is developed to obtain the optimal sensing time and the corresponding power allocation strategy. Finally, numerical results are presented to validate the proposed studies. It is shown that the SU can achieve a significant capacity gain under the proposed model, compared with that under the opportunistic spectrum access or the conventional spectrum sharing model.",
"",
"In this paper, we consider a sensing-based spectrum sharing scenario and present an efficient decentralized algorithm to maximize the total throughput of the cognitive radio users by optimizing jointly both the detection operation and the power allocation, taking into account the influence of the sensing accuracy. This optimization problem can be formulated as a distributed non-cooperative power allocation game, which can be solved by using an alternating direction optimization method. The transmit power budget of the cognitive radio users and the constraint related to the rate-loss of the primary user due to the interference are considered in the scheme. Finally, we use variational inequality theory in order to find the existence and uniqueness of the Nash equilibrium for our proposed distributed non-cooperative game."
]
}
|
1212.5720
|
1589578128
|
This paper proposes a novel framework for multi-group shape analysis relying on a hierarchical graphical statistical model on shapes within a population. The framework represents individual shapes as point sets modulo translation, rotation, and scale, following the notion in Kendall shape space. While individual shapes are derived from their group shape model, each group shape model is derived from a single population shape model. The hierarchical model follows the natural organization of population data and the top level in the hierarchy provides a common frame of reference for multi-group shape analysis, e.g. classification and hypothesis testing. Unlike typical shape-modeling approaches, the proposed model is a generative model that defines a joint distribution of object-boundary data and the shape-model variables. Furthermore, it naturally enforces optimal correspondences during the process of model fitting and thereby subsumes the so-called correspondence problem. The proposed inference scheme employs an expectation maximization (EM) algorithm that treats the individual and group shape variables as hidden random variables and integrates them out before estimating the parameters (population mean and variance and the group variances). The underpinning of the EM algorithm is the sampling of point sets, in Kendall shape space, from their posterior distribution, for which we exploit a highly-efficient scheme based on Hamiltonian Monte Carlo simulation. Experiments in this paper use the fitted hierarchical model to perform (1) hypothesis testing for comparison between pairs of groups using permutation testing and (2) classification for image retrieval. The paper validates the proposed framework on simulated data and demonstrates results on real data.
|
Shape modeling and analysis is an important problem in a variety of fields including biology, medical image analysis, and computer vision @cite_15 @cite_5 @cite_0 @cite_6 that has received considerable attention over the last few decades. Kendall @cite_6 defines shape as an equivalence class of pointsets under the similarities generated by translation, rotation, and scaling. Objects in biological images or anatomical structures in medical images often possess shape as the sole identifying characteristic instead of color or texture. Applications of shape analysis beyond natural contexts include handwriting analysis and character recognition. In the medical context, shapes of human anatomical structures can provide crucial cues in diagnosis of pathologies or disorders. The key problems in this context lie in the fitting of shape models to population image data followed by statistical analysis such as hypothesis testing to compare groups, classification of unseen shapes in one of the groups for which the shape models are learnt, or unsupervised clustering of shapes.
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_6"
],
"mid": [
"",
"334078648",
"2114261252",
"2091804476"
],
"abstract": [
"",
"Preliminaries: Size Measures and Shape Coordinates. Preliminaries: Planar Procrustes Analysis. Shape Space and Distance. General Procrustes Methods. Shape Models for Two Dimensional Data. Tangent Space Inference. Size--and--Shape. Distributions for Higher Dimensions. Deformations and Describing Shape Change. Shape in Images. Additional Topics. References and Author Index. Index.",
"One. Introduction: On the Absence of Geometry from Morphometrics.- First Part. The Measurement of Biological Shape.- Two. Shapes and Measures of Shape.- A. Properties of the Euclidean Plane and Euclidean Space.- B. Outlines and Homologous Landmarks.- C. Definitions of Shape, Shape Change, Shape Measurement.- D. Shapes as Data.- Three. Critique of an Applied Field: Conventional Cephalometrics.- A. Landmarks, Curvature, and Growth.- B. Registration.- Four. New Statistical Methods for Shape.- A. Analysis of the Tangent Angle Function.- 1. History.- 2. Sampling from the tangent angle function.- 3. Conic replacement curves and their estimation Fit of a conic to a point - Geometric interpretation - Estimator for a circle - Estimation for the general conic - Computation of the extremum -Linear constraints.- 4. Conic splining Joint conic fitting under constraint -An example - Application - Analysis of parameters.- B. Extension to Three Dimensions: A Sketch.- C. Skeletons Definition - A multivariate statistical method - Bibliographic note.- Second Part. The Measurement of Shape Change Using Biorthogonal Grids.- Five. The Study of Shape Transformation after D'Arcy Thompson.- A. The Original Method.- 1. Thompson's own work.- 2. Later examples.- 3. Difficulties.- B. Analysis of Growth Gradients.- C. Simulations.- D. Other Morphometric Schemes Vector displacements - Multivariate morphometrics.- Six. The Method of Biorthogonal Grids.- A. Representation of Affine Transformations.- B. General Lines of Growth and Biorthogonal Grids.- C. Summarizing the Grids.- Technical Note 1. Existence and Form of Biorthogonal Grids.- Technical Note 2. Interpolation from Landmark Locations and Arcs The measure of roughness - The vector space and its associated functions - Interpolation from boundary values - Interpolation from boundary values and interior points - Note on computation.- Technical Note 3. Construction of Integral Curves.- Technical Note 4. On Homologous Points.- Seven. 
Examples of Biorthogonal Analysis.- A. Comparison of Square and Biorthogonal Grids: Thompson's Diodon Figure.- B. Phylogeny and Ontogeny of Primate Crania.- 1. Functional craniology and craniometrics.- 2. Data for this exercise.- 3. Two types of transformations.- 4. Comparative ontogeny of the apes and man.- Eight. Future Directions for Transformation Analysis.- A. Statistical Methods The symmetric tensor field - Concordance -Linear methods.- B. Computation Other kinds of information about homology -Three dimensions.- C. Likely Applications Computed tomography - Orthodontics - Developmental biology.- Nine. Envoi.- Literature Cited.",
"The shape-space Σ^k_m, whose points σ represent the shapes of not totally degenerate k-ads in R^m, is introduced as a quotient space carrying the quotient metric. When m = 1, we find that Σ^k_1 = S^(k-2); when m ≥ 3, the shape-space contains singularities. This paper deals mainly with the case m = 2, when the shape-space Σ^k_2 can be identified with a version of CP^(k-2). Of special importance are the shape-measures induced on CP^(k-2) by any assigned diffuse law of distribution for the k vertices. We determine several such shape-measures, we resolve some of the technical problems associated with the graphic presentation and statistical analysis of empirical shape distributions, and among applications we discuss the relevance of these ideas to testing for the presence of non-accidental multiple alignments in collections of (i) neolithic stone monuments and (ii) quasars. Finally the recently introduced Ambartzumian density is examined from the present point of view, its norming constant is found, and its connexion with random Crofton polygons is established."
]
}
|
1212.5701
|
6908809
|
We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.
|
When gradient descent nears a minimum in the cost surface, the parameter values can oscillate back and forth around the minimum. One method to prevent this is to slow down the parameter updates by decreasing the learning rate. This can be done manually when the validation accuracy appears to plateau. Alternatively, learning rate schedules have been proposed @cite_6 to automatically anneal the learning rate based on how many epochs through the data have been completed. These approaches typically add additional hyperparameters to control how quickly the learning rate decays.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1994616650"
],
"abstract": [
"Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where α is a given constant. We give a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability."
]
}
|
1212.5204
|
2952745280
|
Reversible debuggers have been developed at least since 1970. Such a feature is useful when the cause of a bug is close in time to the bug manifestation. When the cause is far back in time, one resorts to setting appropriate breakpoints in the debugger and beginning a new debugging session. For these cases when the cause of a bug is far in time from its manifestation, bug diagnosis requires a series of debugging sessions with which to narrow down the cause of the bug. For such "difficult" bugs, this work presents an automated tool to search through the process lifetime and locate the cause. As an example, the bug could be related to a program invariant failing. A binary search through the process lifetime suffices, since the invariant expression is true at the beginning of the program execution, and false when the bug is encountered. An algorithm for such a binary search is presented within the FReD (Fast Reversible Debugger) software. It is based on the ability to checkpoint, restart and deterministically replay the multiple processes of a debugging session. It is based on GDB (a debugger), DMTCP (for checkpoint-restart), and a custom deterministic record-replay plugin for DMTCP. FReD supports complex, real-world multithreaded programs, such as MySQL and Firefox. Further, the binary search is robust. It operates on multi-threaded programs, and takes advantage of multi-core architectures during replay.
|
Both IGOR @cite_21 and the work by Boothe @cite_14 support a primitive type of reverse expression watchpoint for single-threaded applications, restricted to expressions of the form x>0 , where the left-hand side of the expression is a variable, the right-hand side is a constant, and x is a monotone variable. FReD, on the other hand, supports general expressions.
|
{
"cite_N": [
"@cite_14",
"@cite_21"
],
"mid": [
"1969550081",
"2082498963"
],
"abstract": [
"This paper discusses our research into algorithms for creating an efficient bidirectional debugger in which all traditional forward movement commands can be performed with equal ease in the reverse direction. We expect that adding these backwards movement capabilities to a debugger will greatly increase its efficacy as a programming tool. The efficiency of our methods arises from our use of event counters that are embedded into the program being debugged. These counters are used to precisely identify the desired target event on the fly as the target program executes. This is in contrast to traditional debuggers that may trap back to the debugger many times for some movements. For reverse movements we re-execute the program (possibly using two passes) to identify and stop at the desired earlier point. Our counter-based techniques are essential for these reverse movements because they allow us to efficiently execute through the millions of events encountered during re-execution. Two other important components of this debugger are its I/O logging and checkpointing. We log and later replay the results of system calls to ensure deterministic re-execution, and we use checkpointing to bound the amount of re-execution used for reverse movements. Short movements generally appear instantaneous, and the time for longer movements is usually bounded within a small constant factor of the temporal distance moved back.",
"Typical debugging tools are insufficiently powerful to find the most difficult types of program misbehaviors. We have implemented a prototype of a new debugging system, IGOR, which provides a great deal more useful information and offers new abilities that are quite promising. The system runs fast enough to be quite useful while providing many features that are usually available only in an interpreted environment. We describe here some improved facilities (reverse execution, selective searching of execution history, substitution of data and executable parts of the programs) that are needed for serious debugging and are not found in traditional single-thread debugging tools. With a little help from the operating system, we provide these capabilities at reasonable cost without modifying the executable code and running fairly close to full speed. The prototype runs under the DUNE distributed operating system. The current system only supports debugging of single-thread programs. The paper describes planned extensions to make use of extra processors to speed the system and for applying the technique to multi-thread and time dependent executions."
]
}
|
1212.5204
|
2952745280
|
Reversible debuggers have been developed at least since 1970. Such a feature is useful when the cause of a bug is close in time to the bug manifestation. When the cause is far back in time, one resorts to setting appropriate breakpoints in the debugger and beginning a new debugging session. For these cases when the cause of a bug is far in time from its manifestation, bug diagnosis requires a series of debugging sessions with which to narrow down the cause of the bug. For such "difficult" bugs, this work presents an automated tool to search through the process lifetime and locate the cause. As an example, the bug could be related to a program invariant failing. A binary search through the process lifetime suffices, since the invariant expression is true at the beginning of the program execution, and false when the bug is encountered. An algorithm for such a binary search is presented within the FReD (Fast Reversible Debugger) software. It is based on the ability to checkpoint, restart and deterministically replay the multiple processes of a debugging session. It is based on GDB (a debugger), DMTCP (for checkpoint-restart), and a custom deterministic record-replay plugin for DMTCP. FReD supports complex, real-world multithreaded programs, such as MySQL and Firefox. Further, the binary search is robust. It operates on multi-threaded programs, and takes advantage of multi-core architectures during replay.
|
The work of King et al @cite_18 goes back to the last time a variable was modified, by employing virtual machine snapshots and event logging. While the work of King et al detects the last time a variable was modified, FReD takes the user back in time to the last point an expression had a correct value. Similarly to Boothe @cite_14 , the reverse watchpoint is performed in two steps and only the points where the debugger stops are probed.
|
{
"cite_N": [
"@cite_18",
"@cite_14"
],
"mid": [
"2162351670",
"1969550081"
],
"abstract": [
"We develop an availability solution, called SafetyNet, that uses a unified, lightweight checkpoint recovery mechanism to support multiple long-latency fault detection schemes. At an abstract level, SafetyNet logically maintains multiple, globally consistent checkpoints of the state of a shared memory multiprocessor (i.e., processors, memory, and coherence permissions), and it recovers to a pre-fault checkpoint of the system and re-executes if a fault is detected. SafetyNet efficiently coordinates checkpoints across the system in logical time and uses \"logically atomic\" coherence transactions to free checkpoints of transient coherence state. SafetyNet minimizes performance overhead by pipelining checkpoint validation with subsequent parallel execution.We illustrate SafetyNet avoiding system crashes due to either dropped coherence messages or the loss of an interconnection network switch (and its buffered messages). Using full-system simulation of a 16-way multiprocessor running commercial workloads, we find that SafetyNet (a) adds statistically insignificant runtime overhead in the common-case of fault-free execution, and (b) avoids a crash when tolerated faults occur.",
"This paper discusses our research into algorithms for creating an efficient bidirectional debugger in which all traditional forward movement commands can be performed with equal ease in the reverse direction. We expect that adding these backwards movement capabilities to a debugger will greatly increase its efficacy as a programming tool. The efficiency of our methods arises from our use of event counters that are embedded into the program being debugged. These counters are used to precisely identify the desired target event on the fly as the target program executes. This is in contrast to traditional debuggers that may trap back to the debugger many times for some movements. For reverse movements we re-execute the program (possibly using two passes) to identify and stop at the desired earlier point. Our counter-based techniques are essential for these reverse movements because they allow us to efficiently execute through the millions of events encountered during re-execution. Two other important components of this debugger are its I/O logging and checkpointing. We log and later replay the results of system calls to ensure deterministic re-execution, and we use checkpointing to bound the amount of re-execution used for reverse movements. Short movements generally appear instantaneous, and the time for longer movements is usually bounded within a small constant factor of the temporal distance moved back."
]
}
|
1212.4608
|
2951031455
|
In this paper, we identify some of the limitations of current-day shape matching techniques. We provide examples of how contour-based shape matching techniques cannot provide a good match for certain visually similar shapes. To overcome this limitation, we propose a perceptually motivated variant of the well-known shape context descriptor. We identify that the interior properties of the shape play an important role in object recognition and develop a descriptor that captures these interior properties. We show that our method can easily be augmented with any other shape matching algorithm. We also show from our experiments that the use of our descriptor can significantly improve the retrieval rates.
|
Most approaches equate the task of shape-matching to the matching of the respective object boundaries. The shape boundaries are discretized into a set of @math landmark points, @math , for easier representation and matching. @cite_10 showed that these points could be located at any place on the object boundary and that they need not be restricted to extrema points on the curve. They also proposed to describe the shape using shape contexts at each of these sampled points. The shape context at each sampled point is given by the relative distribution of the rest of the @math points, which is represented as a 2-D histogram of distances and angles.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2057175746"
],
"abstract": [
"We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by: (1) solving for correspondences between points on the two shapes; (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin-plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits, and the COIL data set."
]
}
|
1212.4608
|
2951031455
|
In this paper, we identify some of the limitations of current-day shape matching techniques. We provide examples of how contour-based shape matching techniques cannot provide a good match for certain visually similar shapes. To overcome this limitation, we propose a perceptually motivated variant of the well-known shape context descriptor. We identify that the interior properties of the shape play an important role in object recognition and develop a descriptor that captures these interior properties. We show that our method can easily be augmented with any other shape matching algorithm. We also show from our experiments that the use of our descriptor can significantly improve the retrieval rates.
|
@cite_42 tackle the problem of partial similarity and show how objects that have large similar parts (but not completely similar) can be matched. They present a novel approach, which shows how partiality can be quantified using the notion of Pareto optimality. They use inner distance in order to handle non-rigid objects @cite_13 . The notion of Pareto optimality has since been applied by other authors for measuring partiality of shapes @cite_35 .
|
{
"cite_N": [
"@cite_35",
"@cite_42",
"@cite_13"
],
"mid": [
"",
"2166606682",
"1996611764"
],
"abstract": [
"",
"Similarity is one of the most important abstract concepts in human perception of the world. In computer vision, numerous applications deal with comparing objects observed in a scene with some a priori known patterns. Often, it happens that while two objects are not similar, they have large similar parts, that is, they are partially similar. Here, we present a novel approach to quantify partial similarity using the notion of Pareto optimality. We exemplify our approach on the problems of recognizing non-rigid geometric objects, images, and analyzing text sequences.",
"Analysis of deformable two-dimensional shapes is an important problem, encountered in numerous pattern recognition, computer vision and computer graphics applications. In this paper, we address three major problems in the analysis of non-rigid shapes: similarity, partial similarity, and correspondence. We present an axiomatic construction of similarity criteria for deformation-invariant shape comparison, based on intrinsic geometric properties of the shapes, and show that such criteria are related to the Gromov-Hausdorff distance. Next, we extend the problem of similarity computation to shapes which have similar parts but are dissimilar when considered as a whole, and present a construction of set-valued distances, based on the notion of Pareto optimality. Finally, we show that the correspondence between non-rigid shapes can be obtained as a byproduct of the non-rigid similarity problem. As a numerical framework, we use the generalized multidimensional scaling (GMDS) method, which is the numerical core of the three problems addressed in this paper."
]
}
|
1212.4608
|
2951031455
|
In this paper, we identify some of the limitations of current-day shape matching techniques. We provide examples of how contour-based shape matching techniques cannot provide a good match for certain visually similar shapes. To overcome this limitation, we propose a perceptually motivated variant of the well-known shape context descriptor. We identify that the interior properties of the shape play an important role in object recognition and develop a descriptor that captures these interior properties. We show that our method can easily be augmented with any other shape matching algorithm. We also show from our experiments that the use of our descriptor can significantly improve the retrieval rates.
|
@cite_33 identified that though the use of inner distance provided invariance to articulations, it could not be directly applied to "non-ideal" 2-D projections of 3-D objects. If the projection took place under a weak perspective, then not all parts of the 3-D model would be accurately projected onto the 2-D plane. In order to overcome this problem, they modeled an articulating object as a combination of approximately convex parts and performed affine normalization of these parts. They then used inner distance to perform shape matching on the normalized shapes. Their near-convex decomposition algorithm takes as input the contour of the object and splits the object into multiple convex parts. However, such an approach cannot be followed for shapes such as those shown in Figure , since the algorithm would split the object into multiple parts, yielding undesirable results.
|
{
"cite_N": [
"@cite_33"
],
"mid": [
"1589666435"
],
"abstract": [
"Given a set of points corresponding to a 2D projection of a non-planar shape, we would like to obtain a representation invariant to articulations (under no self-occlusions). It is a challenging problem since we need to account for the changes in 2D shape due to 3D articulations, viewpoint variations, as well as the varying effects of imaging process on different regions of the shape due to its non-planarity. By modeling an articulating shape as a combination of approximate convex parts connected by non-convex junctions, we propose to preserve distances between a pair of points by (i) estimating the parts of the shape through approximate convex decomposition, by introducing a robust measure of convexity and (ii) performing part-wise affine normalization by assuming a weak perspective camera model, and then relating the points using the inner distance which is insensitive to planar articulations. We demonstrate the effectiveness of our representation on a dataset with non-planar articulations, and on standard shape retrieval datasets like MPEG-7."
]
}
|
1212.4608
|
2951031455
|
In this paper, we identify some of the limitations of current-day shape matching techniques. We provide examples of how contour-based shape matching techniques cannot provide a good match for certain visually similar shapes. To overcome this limitation, we propose a perceptually motivated variant of the well-known shape context descriptor. We identify that the interior properties of the shape play an important role in object recognition and develop a descriptor that captures these interior properties. We show that our method can easily be augmented with any other shape matching algorithm. We also show from our experiments that the use of our descriptor can significantly improve the retrieval rates.
|
The Medial Axis Transform (MAT) and its variant, shock graphs, have been used by certain authors for matching shapes @cite_31 @cite_40 . The medial axis, or skeleton, is the locus of the centers of all maximally inscribed circles of the object. While the MAT captures the interior properties of the shape to a large extent, by definition, the generation of a skeleton depends on the boundary of the object. Therefore, the objects shown in Figure will all have vastly different skeletons. @cite_36 proposed to model shapes using skeletal contexts. Their contexts are calculated at the skeleton endings, and the bins are populated by non-uniformly sampled points from the boundary. Relying on the skeleton and the boundary points makes their method susceptible to indentations in the contour. We show, in Section , how our method does not fall prey to such boundary perturbations.
|
{
"cite_N": [
"@cite_36",
"@cite_40",
"@cite_31"
],
"mid": [
"2157375720",
"2114766304",
"2150559991"
],
"abstract": [
"Shape is a significant visual clue for human perception and shape models show considerable promise as a basis for extracting objects from images. This paper proposes a novel approach for shape matching and modeling using the symmetry characterization of shape interior and the spatial relationships of shape structures. Based on the representative skeletal features, we develop a mechanism to generate a coarse segment matching between different instances of an object. Additionally, the natural correspondence of skeletal branches to sequential segments along the shape curves is employed in the matching process to avoid false correspondences across different segments. Point matches within the corresponding segments are then obtained by solving a constrained assignment problem. The validation of the proposed approach is illustrated on various data sets in the presence of considerable deformation and occlusion and the results are compared with those of popular approaches. We also demonstrate the performance of our method on biological objects for shape modeling, showing better models than those obtained by the state-of-the-art shape modeling approaches.",
"This paper presents a novel framework for the recognition of objects based on their silhouettes. The main idea is to measure the distance between two shapes as the minimum extent of deformation necessary for one shape to match the other. Since the space of deformations is very high-dimensional, three steps are taken to make the search practical: 1) define an equivalence class for shapes based on shock-graph topology, 2) define an equivalence class for deformation paths based on shock-graph transitions, and 3) avoid complexity-increasing deformation paths by moving toward shock-graph degeneracy. Despite these steps, which tremendously reduce the search requirement, there still remain numerous deformation paths to consider. To that end, we employ an edit-distance algorithm for shock graphs that finds the optimal deformation path in polynomial time. The proposed approach gives intuitive correspondences for a variety of shapes and is robust in the presence of a wide range of visual transformations. The recognition rates on two distinct databases of 99 and 216 shapes each indicate highly successful within category matches (100 percent in top three matches), which render the framework potentially usable in a range of shape-based recognition applications.",
"We have been developing a theory for the generic representation of 2-D shape, where structural descriptions are derived from the shocks (singularities) of a curve evolution process, acting on bounding contours. We now apply the theory to the problem of shape matching. The shocks are organized into a directed, acyclic shock graph, and complexity is managed by attending to the most significant (central) shape components first. The space of all such graphs is highly structured and can be characterized by the rules of a shock graph grammar. The grammar permits a reduction of a shockgraph to a unique rooted shock tree. We introduce a novel tree matching algorithm which finds the best set of corresponding nodes between two shock trees in polynomial time. Using a diverse database of shapes, we demonstrate our system's performance under articulation, occlusion, and changes in viewpoint."
]
}
|
1212.4608
|
2951031455
|
In this paper, we identify some of the limitations of current-day shape matching techniques. We provide examples of how contour-based shape matching techniques cannot provide a good match for certain visually similar shapes. To overcome this limitation, we propose a perceptually motivated variant of the well-known shape context descriptor. We identify that the interior properties of the shape play an important role in object recognition and develop a descriptor that captures these interior properties. We show that our method can easily be augmented with any other shape matching algorithm. We also show from our experiments that the use of our descriptor can significantly improve the retrieval rates.
|
Due to the diversity involved in shape-matching, it has become difficult to come up with a single measure that incorporates all the requirements. While the use of Euclidean distance is beneficial for identifying certain classes of objects, the use of inner distance favours some others. As a result, researchers have started to fuse two or more techniques while calculating the distance between two shapes. @cite_37 identified that the use of inner distance was "overkill" for certain classes of objects and proposed a technique to balance deformability and discriminability. They calculate the cost between two shapes with the help of various distance measures, parameterised by an aspect weight, and retain the "best" cost. However, they still use points sampled from the contour, and their algorithm would therefore be susceptible to objects with strong base structures that have indentations in their contours.
|
{
"cite_N": [
"@cite_37"
],
"mid": [
"1552231868"
],
"abstract": [
"We propose a novel framework, aspect space, to balance deformability and discriminability, which are often two competing factors in shape and image representations. In this framework, an object is embedded as a surface in a higher dimensional space with a parameter named aspect weight, which controls the importance of intensity in the embedding. We show that this framework naturally unifies existing important shape and image representations by adjusting the aspect weight and the embedding. More importantly, we find that the aspect weight implicitly controls the degree to which a representation handles deformation. Based on this idea, we present the aspect shape context, which extends shape context-based descriptors and adaptively selects the \"best\" aspect weight for shape comparison. Another observation we have is the proposed descriptor nicely fits context-sensitive shape retrieval. The proposed methods are evaluated on two public datasets, MPEG7-CE-Shape-1 and Tari 1000, in comparison to state-of-the-art solutions. In the standard shape retrieval experiment using the MPEG7 CE-Shape-1 database, the new descriptor with context information achieves a bull's eye score of 95.96 , which surpassed all previous results. In the Tari 1000 dataset, our methods significantly outperform previous tested methods as well."
]
}
|
1212.4608
|
2951031455
|
In this paper, we identify some of the limitations of current-day shape matching techniques. We provide examples of how contour-based shape matching techniques cannot provide a good match for certain visually similar shapes. To overcome this limitation, we propose a perceptually motivated variant of the well-known shape context descriptor. We identify that the interior properties of the shape play an important role in object recognition and develop a descriptor that captures these interior properties. We show that our method can easily be augmented with any other shape matching algorithm. We also show from our experiments that the use of our descriptor can significantly improve the retrieval rates.
|
All the techniques described above were directed towards the development of a good distance measure between pairs of images, where the similarity of an object was influenced by just one other object. However, recent works have shown that an improvement in the retrieval performance can be achieved if other similar shapes are allowed to influence the pair-wise scores. For a given similarity measure, a new similarity measure is learned through graph transduction @cite_11 . Many methods that focus on improving the transduction algorithms have been proposed in the recent past @cite_17 @cite_26 @cite_23 .
|
{
"cite_N": [
"@cite_26",
"@cite_17",
"@cite_23",
"@cite_11"
],
"mid": [
"",
"1483634300",
"2126491576",
"2105026917"
],
"abstract": [
"",
"This paper considers two major applications of shape matching algorithms: (a) query-by-example, i.e., retrieving the most similar shapes from a database, and (b) finding clusters of shapes, each represented by a single prototype. Our approach goes beyond pairwise shape similarity analysis by considering the underlying structure of the shape manifold, which is estimated from the shape similarity scores between all the shapes within a database. We propose a modified mutual kNN graph as the underlying representation and demonstrate its performance for the task of shape retrieval. We further describe an efficient, unsupervised clustering method which uses the modified mutual kNN graph for initialization. Experimental evaluation proves the applicability of our method, e.g., by achieving the highest ever reported retrieval score of 93.40 on the well-known MPEG-7 database.",
"Shape retrieval matching is a very important topic in computer vision. The recent progress in this domain has been mostly driven by designing smart features for providing better similarity measure between pairs of shapes. In this paper, we provide a new perspective to this problem by considering the existing shapes as a group, and study their similarity measures to the query shape in a graph structure. Our method is general and can be built on top of any existing shape matching algorithms. It learns a better metric through graph transduction by propagating the model through existing shapes, in a way similar to computing geodesics in shape manifold. However, the proposed method does not require learning the shape manifold explicitly and it does not require knowing any class labels of existing shapes. The presented experimental results demonstrate that the proposed approach yields significant improvements over the state-of-art shape matching algorithms. We obtained a retrieval rate of 91 on the MPEG-7 data set, which is the highest ever reported in the literature.",
"Shape similarity and shape retrieval are very important topics in computer vision. The recent progress in this domain has been mostly driven by designing smart shape descriptors for providing better similarity measure between pairs of shapes. In this paper, we provide a new perspective to this problem by considering the existing shapes as a group, and study their similarity measures to the query shape in a graph structure. Our method is general and can be built on top of any existing shape similarity measure. For a given similarity measure, a new similarity is learned through graph transduction. The new similarity is learned iteratively so that the neighbors of a given shape influence its final similarity to the query. The basic idea here is related to PageRank ranking, which forms a foundation of Google Web search. The presented experimental results demonstrate that the proposed approach yields significant improvements over the state-of-art shape matching algorithms. We obtained a retrieval rate of 91.61 percent on the MPEG-7 data set, which is the highest ever reported in the literature. Moreover, the learned similarity by the proposed method also achieves promising improvements on both shape classification and shape clustering."
]
}
|
1212.4608
|
2951031455
|
In this paper, we identify some of the limitations of current-day shape matching techniques. We provide examples of how contour-based shape matching techniques cannot provide a good match for certain visually similar shapes. To overcome this limitation, we propose a perceptually motivated variant of the well-known shape context descriptor. We identify that the interior properties of the shape play an important role in object recognition and develop a descriptor that captures these interior properties. We show that our method can easily be augmented with any other shape matching algorithm. We also show from our experiments that the use of our descriptor can significantly improve the retrieval rates.
|
Starting the diffusion process with a good similarity matrix leads to better similarities at the end. A good similarity matrix is one in which similar shapes have high affinity. We show that our method helps generate a better similarity matrix after the diffusion process. We use the Locally Constrained Diffusion Process (LCDP) @cite_27 to learn the manifold structure of the shapes and show, in Section , that our matrix is able to generate highly competitive retrieval rates.
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2170818070"
],
"abstract": [
"The matching and retrieval of 2D shapes is an important challenge in computer vision. A large number of shape similarity approaches have been developed, with the main focus being the comparison or matching of pairs of shapes. In these approaches, other shapes do not influence the similarity measure of a given pair of shapes. In the proposed approach, other shapes do influence the similarity measure of each pair of shapes, and we show that this influence is beneficial even in the unsupervised setting (without any prior knowledge of shape classes). The influence of other shapes is propagated as a diffusion process on a graph formed by a given set of shapes. However, the classical diffusion process does not perform well in shape space for two reasons: it is unstable in the presence of noise and the underlying local geometry is sparse. We introduce a locally constrained diffusion process which is more stable even if noise is present, and we densify the shape space by adding synthetic points we call 'ghost points'. We present experimental results that demonstrate very significant improvements over state-of-the-art shape matching algorithms. On the MPEG-7 data set, we obtained a bull's-eye retrieval score of 93.32 , which is the highest score ever reported in the literature."
]
}
|
1212.4560
|
1824950334
|
A random matrix is likely to be well conditioned, and motivated by this well known property we employ random matrix multipliers to advance some fundamental matrix computations. This includes numerical stabilization of Gaussian elimination with no pivoting as well as block Gaussian elimination, approximation of the leading and trailing singular spaces of an ill conditioned matrix, associated with its largest and smallest singular values, respectively, and approximation of this matrix by low-rank matrices, with further extensions to computing numerical ranks and the approximation of tensor decomposition. We formally support the efficiency of the proposed techniques where we employ Gaussian random multipliers, but our extensive tests have consistently produced the same outcome where instead we used sparse and structured random multipliers, defined by much fewer random parameters compared to the number of their entries.
|
Preconditioning of linear systems of equations is a classical subject @cite_42 , @cite_18 , @cite_11 . Randomized multiplicative preconditioning for numerical stabilization of GENP was proposed in [Section 12.2] PGMQ and @cite_35 , but with no formal support for this approach. On low-rank approximation we refer the reader to the survey @cite_39 . We cite these and other related works throughout the paper and refer to [Section 11] PQZb for further bibliography. For a natural extension of our present work, one can combine randomized matrix multiplication with randomized augmentation and additive preprocessing of @cite_0 , @cite_7 , @cite_12 , @cite_1 , @cite_25 , @cite_36 , @cite_46 .
|
{
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_7",
"@cite_36",
"@cite_42",
"@cite_1",
"@cite_39",
"@cite_0",
"@cite_46",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"1981220107",
"",
"1506588095",
"",
"",
"2117756735",
"",
"",
"",
"2017951318",
"1560263223"
],
"abstract": [
"",
"This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.",
"",
"Random matrices tend to be well conditioned, and we employ this well known property to advance matrix computations. We prove that our algorithms employing Gaussian random matrices are efficient, but in our tests the algorithms have consistently remained as powerful where we used sparse and structured random matrices, defined by much fewer random parameters. We numerically stabilize Gaussian elimination with no pivoting as well as block Gaussian elimination, precondition an ill conditioned linear system of equations, compute numerical rank of a matrix without orthogonalization and pivoting, approximate the singular spaces of an ill conditioned matrix associated with its largest and smallest singular values, and approximate this matrix with low-rank matrices, with applications to its 2-by-2 block triangulation and to tensor decomposition. Some of our results and techniques can be of independent interest, e.g., our estimates for the condition numbers of random Toeplitz and circulant matrices and our variations of the Sherman--Morrison--Woodbury formula.",
"",
"",
"Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the @math dominant components of the singular value decomposition of an @math matrix. (i) For a dense input matrix, randomized algorithms require @math floating-point operations (flops) in contrast to @math for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to @math passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.",
"",
"",
"",
"Our randomized preprocessing enables pivoting-free and orthogonalization-free solution of homogeneous linear systems of equations. In the case of Toeplitz inputs, we decrease the estimated solution time from quadratic to nearly linear, and our tests show dramatic decrease of the CPU time as well. We prove numerical stability of our approach and extend it to solving nonsingular linear systems, inversion and generalized (Moore–Penrose) inversion of general and structured matrices by means of Newton’s iteration, approximation of a matrix by a nearby matrix that has a smaller rank or a smaller displacement rank, matrix eigen-solving, and root-finding for polynomial and secular equations and for polynomial systems of equations. Some by-products and extensions of our study can be of independent technical interest, e.g., our extensions of the Sherman–Morrison–Woodbury formula for matrix inversion, our estimates for the condition number of randomized matrix products, and preprocessing via augmentation.",
"List of Algorithms Preface 1. Introduction. Brief Overview of the State of the Art Notation Review of Relevant Linear Algebra Part I. Krylov Subspace Approximations. 2. Some Iteration Methods. Simple Iteration Orthomin(1) and Steepest Descent Orthomin(2) and CG Orthodir, MINRES, and GMRES Derivation of MINRES and CG from the Lanczos Algorithm 3. Error Bounds for CG, MINRES, and GMRES. Hermitian Problems-CG and MINRES Non-Hermitian Problems-GMRES 4. Effects of Finite Precision Arithmetic. Some Numerical Examples The Lanczos Algorithm A Hypothetical MINRES CG Implementation A Matrix Completion Problem Orthogonal Polynomials 5. BiCG and Related Methods. The Two-Sided Lanczos Algorithm The Biconjugate Gradient Algorithm The Quasi-Minimal Residual Algorithm Relation Between BiCG and QMR The Conjugate Gradient Squared Algorithm The BiCGSTAB Algorithm Which Method Should I Use? 6. Is There A Short Recurrence for a Near-Optimal Approximation? The Faber and Manteuffel Result Implications 7. Miscellaneous Issues. Symmetrizing the Problem Error Estimation and Stopping Criteria Attainable Accuracy Multiple Right-Hand Sides and Block Methods Computer Implementation Part II. Preconditioners. 8. Overview and Preconditioned Algorithms. 9. Two Example Problems. The Diffusion Equation The Transport Equation 10. Comparison of Preconditioners. Jacobi, Gauss--Seidel, SOR The Perron--Frobenius Theorem Comparison of Regular Splittings Regular Splittings Used with the CG Algorithm Optimal Diagonal and Block Diagonal Preconditioners 11. Incomplete Decompositions. Incomplete Cholesky Decomposition Modified Incomplete Cholesky Decomposition 12. Multigrid and Domain Decomposition Methods. Multigrid Methods Basic Ideas of Domain Decomposition Methods."
]
}
|
1212.4522
|
2950692926
|
This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
|
Conceptually, our three-view formulation may be compared to the generative model that attempts to capture the relationships between the image class, annotation tags, and image features. One example of such a model in the literature is @cite_10 . Unlike @cite_10 , though, we do not concern ourselves with the exact generative nature of the dependencies between the three views, but simply assign symmetric roles to them and model the pairwise correlations between them. Also, while @cite_10 tie annotation tags to image regions following @cite_2 @cite_4 , we treat both the image appearance and all the tags assigned to the image as global feature vectors. This allows for much more scalable learning and inference (the approach of @cite_10 is only tested on datasets of under 2,000 images and eight classes each).
|
{
"cite_N": [
"@cite_10",
"@cite_4",
"@cite_2"
],
"mid": [
"2066134726",
"",
"2020842694"
],
"abstract": [
"We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.",
"",
"We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval."
]
}
|
1212.4522
|
2950692926
|
This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
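The "explicit nonlinear kernel mappings" used here to approximate kernel CCA can be illustrated with random Fourier features: inputs are mapped to a finite feature space whose inner products approximate an RBF kernel, so plain linear CCA on the mapped features approximates KCCA without forming an n-by-n kernel matrix. The kernel choice and all sizes below are illustrative, not the paper's exact setup.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 16))  # toy visual features

# Explicit finite-dimensional map with phi(x).phi(y) ~ exp(-gamma * ||x - y||^2);
# running linear CCA on Z then scalably approximates kernel CCA.
phi = RBFSampler(gamma=0.5, n_components=64, random_state=0).fit(X)
Z = phi.transform(X)

K_approx = Z @ Z.T  # Monte Carlo approximation of the RBF kernel matrix
```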
|
The major goal of our work is learning a joint latent space for images and tags, in which corresponding images and tags are mapped to nearby locations, so that simple nearest-neighbor methods can be used to perform cross-modal tasks, including image-to-image, tag-to-image, and image-to-tag search. A number of successful recent approaches to learning such an embedding rely on Canonical Correlation Analysis (CCA) . @cite_17 and @cite_3 have applied CCA to map images and text to the same space for cross-modal retrieval tasks. @cite_21 @cite_23 have presented a cross-modal retrieval approach that models the relative importance of words based on the order in which they appear in user-provided annotations. @cite_16 have used KCCA to develop a cross-view spectral clustering approach that can be applied to images and associated text. CCA embeddings have also been used in other domains, such as cross-language retrieval . Unlike all the other CCA-based image retrieval and annotation approaches, ours adds a third view that explicitly represents the latent image semantics.
|
{
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_23",
"@cite_16",
"@cite_17"
],
"mid": [
"2163740729",
"2106277773",
"2029163572",
"2123576058",
""
],
"abstract": [
"We introduce a method for image retrieval that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results may more closely match the user’s mental image of the scene being sought. We evaluate our approach on two datasets, and show clear improvements over both an approach relying on image features alone, as well as a baseline that uses words and image features, but ignores the implied importance cues.",
"The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.",
"We introduce an approach to image retrieval and auto-tagging that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results better preserve the aspects a human may find most worth mentioning. We evaluate our approach on three datasets using either keyword tags or natural language descriptions, and quantify results with both ground truth parameters as well as direct tests with human subjects. Our results show clear improvements over approaches that either rely on image features alone, or that use words and image features but ignore the implied importance cues. Overall, our work provides a novel way to incorporate high-level human perception of scenes into visual representations for enhanced image search.",
"We present a new method for spectral clustering with paired data based on kernel canonical correlation analysis, called correlational spectral clustering. Paired data are common in real world data sources, such as images with text captions. Traditional spectral clustering algorithms either assume that data can be represented by a single similarity measure, or by co-occurrence matrices that are then used in biclustering. In contrast, the proposed method uses separate similarity measures for each data representation, and allows for projection of previously unseen data that are only observed in one representation (e.g. images but not text). We show that this algorithm generalizes traditional spectral clustering algorithms and show consistent empirical improvement over spectral clustering on a variety of datasets of images with associated text.",
""
]
}
|
1212.4522
|
2950692926
|
This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
|
The third task we are interested in evaluating is image-to-tag search or automatic image annotation . This task has traditionally been addressed with the help of sophisticated generative models such as @cite_2 @cite_9 @cite_0 . More recently, a number of publications have reported better results with simple data-driven schemes based on retrieving database images similar to a query and transferring the annotations from those images . We will adopt this strategy in our experiments and demonstrate that retrieving similar images in our embedded latent space can improve the accuracy of tag transfer.
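The retrieve-and-transfer annotation strategy can be illustrated in a few lines; the embeddings and tag sets below are made-up toy values, and simple majority voting over the k nearest neighbors stands in for the more refined transfer rules of the cited methods.

```python
import numpy as np

# Hypothetical database: embedded image vectors plus their tag sets.
db_embed = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
db_tags = [{"dog", "grass"}, {"dog", "ball"}, {"beach", "sea"}, {"sea", "sunset"}]

def annotate(query_embed, k=2, n_tags=2):
    """Transfer the most frequent tags from the k nearest database images."""
    dists = np.linalg.norm(db_embed - query_embed, axis=1)
    neighbors = np.argsort(dists)[:k]
    votes = {}
    for i in neighbors:
        for t in db_tags[i]:
            votes[t] = votes.get(t, 0) + 1
    # Most-voted tags first; ties broken alphabetically for determinism.
    return [t for t, _ in sorted(votes.items(), key=lambda kv: (-kv[1], kv[0]))][:n_tags]

print(annotate(np.array([0.85, 0.15])))  # nearest neighbors are the two "dog" images
```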
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_2"
],
"mid": [
"2127411609",
"2125238156",
"2020842694"
],
"abstract": [
"We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval.",
"A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density estimated for each image, and the mixtures associated with all images annotated with a common semantic label pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning",
"We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval."
]
}
|
1212.4522
|
2950692926
|
This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
|
The data-driven image annotation approaches of @cite_7 @cite_14 @cite_25 use discriminative learning to obtain a metric or a weighting of different features to improve the relevance of database images retrieved for a query. Unfortunately, the learning stage is very computationally expensive -- for example, in the TagProp method of @cite_7 , it scales quadratically with the number of images. In fact, the standard datasets used for image annotation by @cite_14 @cite_7 @cite_25 consist of 5K-20K images and have 260-290 tags each. By contrast, our datasets (shown in Figure ) range in size from 71K to 270K and have tag vocabularies of size 1K-20K. While it is possible to develop scalable metric learning algorithms using stochastic gradient descent (e.g., @cite_22 ), our work shows that learning a linear embedding using CCA can serve as a simpler, attractive alternative.
|
{
"cite_N": [
"@cite_22",
"@cite_14",
"@cite_25",
"@cite_7"
],
"mid": [
"1499991161",
"1877469910",
"2146024151",
"2536305071"
],
"abstract": [
"We are interested in large-scale image classification and especially in the setting where images corresponding to new or existing classes are continuously added to the training set. Our goal is to devise classifiers which can incorporate such images and classes on-the-fly at (near) zero cost. We cast this problem into one of learning a metric which is shared across all classes and explore k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers. We learn metrics on the ImageNet 2010 challenge data set, which contains more than 1.2M training images of 1K classes. Surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier, and has comparable performance to linear SVMs. We also study the generalization performance, among others by using the learned metric on the ImageNet-10K dataset, and we obtain competitive performance. Finally, we explore zero-shot classification, and show how the zero-shot model can be combined very effectively with small training datasets.",
"Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.",
"Automatic image annotation aims at predicting a set of textual labels for an image that describe its semantics. These are usually taken from an annotation vocabulary of few hundred labels. Because of the large vocabulary, there is a high variance in the number of images corresponding to different labels (\"class-imbalance\"). Additionally, due to the limitations of manual annotation, a significant number of available images are not annotated with all the relevant labels (\"weak-labelling\"). These two issues badly affect the performance of most of the existing image annotation models. In this work, we propose 2PKNN, a two-step variant of the classical K-nearest neighbour algorithm, that addresses these two issues in the image annotation task. The first step of 2PKNN uses \"image-to-label\" similarities, while the second step uses \"image-to-image\" similarities; thus combining the benefits of both. Since the performance of nearest-neighbour based methods greatly depends on how features are compared, we also propose a metric learning framework over 2PKNN that learns weights for multiple features as well as distances together. This is done in a large margin set-up by generalizing a well-known (single-label) classification metric learning algorithm for multi-label prediction. For scalability, we implement it by alternating between stochastic sub-gradient descent and projection steps. Extensive experiments demonstrate that, though conceptually simple, 2PKNN alone performs comparable to the current state-of-the-art on three challenging image annotation datasets, and shows significant improvements after metric learning.",
"Image auto-annotation is an important open problem in computer vision. For this task we propose TagProp, a discriminatively trained nearest neighbor model. Tags of test images are predicted using a weighted nearest-neighbor model to exploit labeled training images. Neighbor weights are based on neighbor rank or distance. TagProp allows the integration of metric learning by directly maximizing the log-likelihood of the tag predictions in the training set. In this manner, we can optimally combine a collection of image similarity metrics that cover different aspects of image content, such as local shape descriptors, or global color histograms. We also introduce a word specific sigmoidal modulation of the weighted neighbor tag predictions to boost the recall of rare words. We investigate the performance of different variants of our model and compare to existing work. We present experimental results for three challenging data sets. On all three, TagProp makes a marked improvement as compared to the current state-of-the-art."
]
}
|
1212.4522
|
2950692926
|
This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
|
Finally, our work has connections to approaches that use Internet images and accompanying text as auxiliary training data to improve performance on tasks such as image classification, for which cleanly labeled training data may be scarce . In particular, @cite_8 use the multi-task learning framework of @cite_24 to learn a discriminative latent space from Web images and associated captions. We will use this embedding method as one of our baselines, though, unlike our approach, it can only be applied to images, not to tag vectors. Apart from multi-task learning, another popular way to obtain an intermediate embedding space for images is by mapping them to outputs of a bank of concept or attribute classifiers . Once again, unlike our method, this produces an embedding for images only; also, training of a large number of concept classifiers tends to require more supervision and be more computationally intensive than training of a CCA model.
|
{
"cite_N": [
"@cite_24",
"@cite_8"
],
"mid": [
"2130903752",
"2157487986"
],
"abstract": [
"One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods are proposed, at the current stage, we still don't have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning will be proposed, and computational issues will be investigated. Experiments will be given to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting.",
"Current methods for learning visual categories work well when a large amount of labeled data is available, but can run into severe difficulties when the number of labeled examples is small. When labeled data is scarce it may be beneficial to use unlabeled data to learn an image representation that is low-dimensional, but nevertheless captures the information required to discriminate between image categories. This paper describes a method for learning representations from large quantities of unlabeled images which have associated captions; the goal is to improve learning in future image classification problems. Experiments show that our method significantly outperforms (1) a fully-supervised baseline model, (2) a model that ignores the captions and learns a visual representation by performing PCA on the unlabeled images alone and (3) a model that uses the output of word classifiers trained using captions and unlabeled data. Our current work concentrates on captions as the source of meta-data, but more generally other types of meta-data could be used."
]
}
|
1212.2834
|
2016392483
|
Many natural signals exhibit a sparse representation, whenever a suitable describing model is given. Here, a linear generative model is considered, where many sparsity-based signal processing techniques rely on such a simplified model. As this model is often unknown for many classes of the signals, we need to select such a model based on the domain knowledge or using some exemplar signals. This paper presents a new exemplar based approach for the linear model (called the dictionary) selection, for such sparse inverse problems. The problem of dictionary selection, which has also been called the dictionary learning in this setting, is first reformulated as a joint sparsity model. The joint sparsity model here differs from the standard joint sparsity model as it considers an overcompleteness in the representation of each signal, within the range of selected subspaces. The new dictionary selection paradigm is examined with some synthetic and realistic simulations.
|
The problem of dictionary design by combining the atoms of a mother dictionary was considered in @cite_3 @cite_10 @cite_15 . In this setting, an auxiliary sparse matrix combines the mother atoms to generate a dictionary which fits the given learning samples. The size of the dictionary is fixed here, and since the learned dictionary is the product of a sparse matrix and a structured matrix (with a possibly fast multiplication), we can implement such a dictionary in two steps, each of which is cheaper than @math . The dictionary selection problem can be interpreted as a particular case of sparse dictionary learning, when the sparse matrix can have only @math non-zero elements, with one non-zero on each row.
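The two-stage structure described above, a learned dictionary D = Phi A with Phi a fixed mother dictionary and A sparse, can be sketched as follows; the sizes and the per-atom sparsity are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed mother (base) dictionary Phi: an orthonormal 16 x 16 basis here.
n = 16
Phi = np.linalg.qr(rng.normal(size=(n, n)))[0]

# Sparse combination matrix A: each learned atom mixes only s mother atoms.
s, k = 3, 8  # non-zeros per atom, number of learned atoms
A = np.zeros((n, k))
for j in range(k):
    rows = rng.choice(n, size=s, replace=False)
    A[rows, j] = rng.normal(size=s)

D = Phi @ A  # effective learned dictionary

# Applying D splits into two cheap steps: a sparse multiply by A, then a
# (possibly fast, structured) multiply by Phi.
x = rng.normal(size=k)
y_direct = D @ x
y_two_step = Phi @ (A @ x)
```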
|
{
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_3"
],
"mid": [
"2099321050",
"2086522948",
"2132968352"
],
"abstract": [
"An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ? A, where ? is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but non-efficient and costly to deploy. In this paper, we discuss the advantages of sparse dictionaries, and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising.",
"By solving a linear inverse problem under a sparsity constraint, one can successfully recover the coefficients, if there exists such a sparse approximation for the proposed class of signals. In this framework the dictionary can be adapted to a given set of signals using dictionary learning methods. The learned dictionary often does not have useful structures for a fast implementation, i.e. fast matrix-vector multiplication. This prevents such a dictionary being used for the real applications or large scale problems. The structure can be induced on the dictionary throughout the learning progress. Examples of such structures are shift-invariance and being multi-scale. These dictionaries can be efficiently implemented using a filter bank. In this paper a well-known structure, called compressibility, is adapted to be used in the dictionary learning problem. As a result, the complexity of the implementation of a compressible dictionary can be reduced by wisely choosing a generative model. By some simulations, it has been shown that the learned dictionary provides sparser approximations, while it does not increase the computational complexity of the algorithms, with respect to the pre-designed fast structured dictionaries.",
"We have shown in previous works that overcomplete signal decomposition using matching pursuits is an efficient technique for coding motion-residual images in a hybrid video coder. Others have shown that alternate basis sets may improve the coding efficiency or reduce the encoder complexity. In this work, we introduce for the first time a design methodology which incorporates both coding efficiency and complexity in a systematic way. The key to the method is an algorithm which takes an arbitrary 2-D dictionary and generates approximations of the dictionary which have fast two-stage implementations according to the method of (see Proc. IEEE Int. Conf. Image Processing, p.769-773, 1998). By varying the quality of the approximation, we can explore a systematic tradeoff between the coding efficiency and complexity of the resulting matching pursuit video encoder. As a practical result, we show that complexity reduction factors of up to 1000 are achievable with negligible coding efficiency losses of about 0.1-dB PSNR."
]
}
|
1212.2834
|
2016392483
|
Many natural signals exhibit a sparse representation, whenever a suitable describing model is given. Here, a linear generative model is considered, where many sparsity-based signal processing techniques rely on such a simplified model. As this model is often unknown for many classes of the signals, we need to select such a model based on the domain knowledge or using some exemplar signals. This paper presents a new exemplar based approach for the linear model (called the dictionary) selection, for such sparse inverse problems. The problem of dictionary selection, which has also been called the dictionary learning in this setting, is first reformulated as a joint sparsity model. The joint sparsity model here differs from the standard joint sparsity model as it considers an overcompleteness in the representation of each signal, within the range of selected subspaces. The new dictionary selection paradigm is examined with some synthetic and realistic simulations.
|
The problem of learning a dictionary when the size of the dictionary is not given has been investigated in @cite_14 . The dictionary selection problem takes a similar approach, finding smaller dictionaries from given larger reference dictionaries. The difference is that here the reference dictionary is fixed throughout the learning, which allows us to handle significantly larger problems and find computationally fast dictionaries.
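Selecting a small sub-dictionary from a larger, fixed reference dictionary can be sketched with a greedy, simultaneous-OMP-style rule over exemplar signals; this greedy rule and all sizes are illustrative stand-ins, not the algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed reference dictionary with m atoms; we select k of them.
n, m, k = 20, 60, 5
ref = rng.normal(size=(n, m))
ref /= np.linalg.norm(ref, axis=0)  # unit-norm atoms

# Exemplar signals generated from a hidden subset of k reference atoms.
support = rng.choice(m, size=k, replace=False)
Y = ref[:, support] @ rng.normal(size=(k, 40))

# Greedily pick the atom most correlated with the current residual, then
# project the exemplars onto the span of everything selected so far.
selected, R = [], Y.copy()
for _ in range(k):
    scores = np.sum((ref.T @ R) ** 2, axis=1)
    scores[selected] = -np.inf  # never re-pick an atom
    selected.append(int(np.argmax(scores)))
    D = ref[:, selected]
    R = Y - D @ np.linalg.lstsq(D, Y, rcond=None)[0]

print(sorted(selected))
```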
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2113607673"
],
"abstract": [
"Sparse modeling of signals has recently received a lot of attention. Often, a linear under-determined generative model for the signals of interest is proposed and a sparsity constraint imposed on the representation. When the generative model is not given, choosing an appropriate generative model is important, so that the given class of signals has approximate sparse representations. In this paper we introduce a new scheme for dictionary learning and impose an additional constraint to reduce the dictionary size. Small dictionaries are desired for coding applications and more likely to “work” with suboptimal algorithms such as Basis Pursuit. Another benefit of small dictionaries is their faster implementation, e.g. a reduced number of multiplication addition in each matrix vector multiplication, which is the bottleneck in sparse approximation algorithms."
]
}
|