aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1510.03435 | 2195757990 | We show that the semi-classical analysis of generic Euclidean path integrals necessarily requires complexification of the action and measure, and consideration of complex saddle solutions. We demonstrate that complex saddle points have a natural interpretation in terms of the Picard-Lefschetz theory. Motivated in part by the semi-classical expansion of QCD with adjoint matter on @math , we study quantum-mechanical systems with bosonic and fermionic (Grassmann) degrees of freedom with harmonic degenerate minima, as well as (related) purely bosonic systems with harmonic non-degenerate minima. We find exact finite action non-BPS bounce and bion solutions to the holomorphic Newton equations. We find not only real solutions, but also complex solutions with non-trivial monodromy, and finally complex multi-valued and singular solutions. Complex bions are necessary for obtaining the correct non-perturbative structure of these models. In the supersymmetric limit the complex solutions govern the ground state properties, and their contribution to the semiclassical expansion is necessary to obtain consistency with the supersymmetry algebra. The multi-valuedness of the action is either related to the hidden topological angle or to the resurgent cancellation of ambiguities. We also show that in the approximate multi-instanton description the integration over the complex quasi-zero mode thimble produces the most salient features of the exact solutions. While exact complex saddles are more difficult to construct in quantum field theory, the relation to the approximate thimble construction suggests that such solutions may underlie some remarkable features of approximate bion saddles in quantum field theories. | There are earlier important works in quantum mechanics in which complexification plays some role. The main distinction between our present study and these works is the following. 
In our work, we have shown that the ground state properties of a generic quantum mechanical system are governed by complex (sometimes even multi-valued) saddles. In the quantum mechanical papers mentioned below, the ground state properties are always governed by real saddles, such as instantons, but complex classical solutions become important when considering the entire spectrum, for example the spectral resolvent. Gutzwiller @cite_48 @cite_2 pioneered the idea of summing over all classical solutions in semiclassical expansions of Green's functions and spectral problems. This was further developed by many authors, from a general formulation in @cite_11 @cite_55 , to specific analyses of the symmetric double-well @cite_49 @cite_69 @cite_102 and periodic potentials @cite_128 . A complex version of WKB analysis of the pure quartic oscillator also requires inclusion of complex semiclassical configurations in addition to the familiar real Bohr-Sommerfeld ones @cite_84 @cite_41 @cite_23 . | {
"cite_N": [
"@cite_69",
"@cite_128",
"@cite_41",
"@cite_48",
"@cite_55",
"@cite_102",
"@cite_84",
"@cite_49",
"@cite_2",
"@cite_23",
"@cite_11"
],
"mid": [
"2003727568",
"1997249902",
"1673721742",
"1978473840",
"",
"",
"2026677605",
"",
"2005585843",
"55183993",
""
],
"abstract": [
"Complex saddle points in the double well anharmonic oscillator are derived. The boundary conditions ϕ(T) = −ϕ(−T) = 1 at finite time T required for the computation of tunneling amplitudes lead to a countable set of saddle points. In the limit T → ∞, these saddle points behave as a superposition of instantons and anti-instantons and their action tends to the action associated with the quasi-solutions used in the standard procedure. Saddle points with periodic boundary conditions are also investigated.",
"The semiclassical limit of the Green function for a particle in the one-dimensional sine-Gordon potential is obtained by summing over complex classical paths. The results are the same as those obtained in the less physically intuitive WKB approach. In addition to being of practical utility for solving quantum mechanical problems involving tunnelling, the classical path method may show how to deal with dense configurations of instantons.",
"In Euclidean path integrals, quantum mechanical tunneling amplitudes are associated with instanton configurations. We explain how tunneling amplitudes are encoded in real-time Feynman path integrals. The essential steps are borrowed from Picard-Lefschetz theory and resurgence theory.",
"The relation between the solutions of the time‐independent Schrodinger equation and the periodic orbits of the corresponding classical system is examined in the case where neither can be found by the separation of variables. If the quasiclassical approximation for the Green's function is integrated over the coordinates, a response function for the system is obtained which depends only on the energy and whose singularities give the approximate eigenvalues of the energy. This response function is written as a sum over all periodic orbits where each term has a phase factor containing the action integral and the number of conjugate points, as well as an amplitude factor containing the period and the stability exponent of the orbit. In terms of the approximate density of states per unit interval of energy, each stable periodic orbit is shown to yield a series of δ functions whose locations are given by a simple quantum condition: The action integral differs from an integer multiple of h by half the stability a...",
"",
"",
"",
"",
"In 1967 M.C. Gutzwiller succeeded to derive the semiclassical expression of the quantum energy density of systems exhibiting a chaotic Hamiltonian dynamics in the classical limit. The result is known as the Gutzwiller trace formula. The scope of this review is to present in a self-contained way recent developments in functional determinant theory allowing to revisit the Gutzwiller trace formula in the spirit of field theory. The field theoretic setup permits to work explicitly at every step of the derivation of the trace formula with invariant quantities of classical periodic orbits. R. Forman's theory of functional determinants of linear, nonsingular elliptic operators yields the expression of quantum quadratic fluctuations around classical periodic orbits directly in terms of the monodromy matrix of the periodic orbits. The phase factor associated to quadratic fluctuations, the Maslov phase, is shown to be specified by the Morse index for closed extremals, also known as Conley and Zehnder index.",
"",
""
]
} |
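The row above repeatedly refers to instantons, bounces and their actions without displaying them. For orientation, the familiar real instanton of the symmetric double well can be worked out in a few lines; the normalization V(x) = (λ/4)(x² − v²)² and the symbols λ, v, τ₀ are conventions chosen here for illustration, not taken from the cited papers.

```latex
% Euclidean double well, with the assumed convention
% V(x) = \frac{\lambda}{4}\,(x^2 - v^2)^2
\begin{align}
  \ddot{x} = V'(x)
  \quad &\Longrightarrow \quad
  \tfrac{1}{2}\dot{x}^2 = V(x)
  \quad \text{(zero-energy first integral)} \\
  \dot{x} = \sqrt{2V(x)} = \sqrt{\tfrac{\lambda}{2}}\,\bigl(v^2 - x^2\bigr)
  \quad &\Longrightarrow \quad
  x(\tau) = v \tanh\!\Bigl(\sqrt{\tfrac{\lambda}{2}}\, v\,(\tau - \tau_0)\Bigr) \\
  S_0 = \int_{-\infty}^{\infty} \dot{x}^2 \, d\tau
      = \int_{-v}^{v} \sqrt{2V(x)}\, dx
      &= \frac{2\sqrt{2}}{3}\,\sqrt{\lambda}\, v^3
\end{align}
```

The complex bounce and bion saddles discussed in the row are complexified analogues of this configuration, obtained when the turning points and the solution itself are continued into the complex x plane.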
1510.03354 | 2245956762 | The general method for obtaining a parallel solution to a computational problem is to apply the Divide & Conquer paradigm so that each processor acts on its own data and all processors can therefore be scheduled in parallel. MapReduce is an example of this approach: input data is transformed by the mappers in order to feed the reducers, which can run in parallel. In general this schema gives efficient problem solutions, but this stops being true when the replication factor grows. We present another program schema for describing problem solutions that can exploit dynamic pipeline parallelism without having to deal with replication factors. We present the schema through an example: counting triangles in graphs, in particular when the graph does not fit in memory. We describe the solution in NiMo, a graphical programming language that implements the implicitly parallel functional dataflow model of computation. The solution obtained using NiMo is architecture-agnostic and can be deployed on any parallel distributed architecture, dynamically adapting processor usage to input characteristics. | In @cite_3 the Swift T language is described. It implements the “implicitly parallel functional dataflow” (IPFD) model and is centered mainly on being able to accept a variety of data sources as input. The programming model is very simple and forces the user to think in terms of in-memory data. The language can integrate leaf tasks written in other languages, a feature that has proven very effective for implementing solutions in several domains. However, the simplicity of the language rules out some solutions that fully exploit architectures with a massive number of processors. Its authors promote Many-Task Computing: among the four important things Swift does for the user are making parallelism more transparent (implicitly parallel functional dataflow programming) and making computing location more transparent. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2077579791"
],
"abstract": [
"Scientists, engineers, and statisticians must execute domain-specific application programs many times on large collections of file-based data. This activity requires complex orchestration and data management as data is passed to, from, and among application invocations. Distributed and parallel computing resources can accelerate such processing, but their use further increases programming complexity. The Swift parallel scripting language reduces these complexities by making file system structures accessible via language constructs and by allowing ordinary application programs to be composed into powerful parallel scripts that can efficiently utilize parallel and distributed resources. We present Swift's implicitly parallel and deterministic programming model, which applies external applications to file collections using a functional style that abstracts and simplifies distributed parallel execution."
]
} |
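The row above uses triangle counting as its running example but never shows the computation. The following sketch (plain Python, not NiMo; the node-iterator scheme is a standard stand-in for what the mappers and reducers of the MapReduce version compute, and all names are ours) illustrates it for graphs that fit in memory:

```python
from itertools import combinations

def count_triangles(edges):
    """Node-iterator triangle counting: for each vertex, enumerate pairs of
    its neighbours (the 'wedges' a MapReduce mapper would emit, and whose
    duplication drives the replication factor) and check whether each pair
    is itself an edge."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    closed_wedges = 0
    for u, nbrs in adj.items():
        for v, w in combinations(sorted(nbrs), 2):
            if w in adj[v]:
                closed_wedges += 1
    # each triangle closes one wedge at each of its three vertices
    return closed_wedges // 3
```

For the complete graph K4 this counts 12 closed wedges, hence 4 triangles; the out-of-core and pipelined variants discussed in the row reorganize exactly this wedge enumeration.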
1510.02786 | 2784222984 | Community detection is considered for a stochastic block model graph of n vertices, with K vertices in the planted community, edge probability p for pairs of vertices both in the community, and edge probability q for other pairs of vertices. The main focus of the paper is on weak recovery of the community based on the graph G, with o(K) misclassified vertices on average, in the sublinear regime @math . A critical parameter is the effective signal-to-noise ratio @math , with @math corresponding to the Kesten-Stigum threshold. We show that a belief propagation algorithm achieves weak recovery if @math , beyond the Kesten-Stigum threshold by a factor of @math . The belief propagation algorithm only needs to run for @math iterations, with the total time complexity @math , where @math is the iterated logarithm of @math . Conversely, if @math , no local algorithm can asymptotically outperform trivial random guessing. Furthermore, a linear message-passing algorithm that corresponds to applying power iteration to the non-backtracking matrix of the graph is shown to attain weak recovery if and only if @math . In addition, the belief propagation algorithm can be combined with a linear-time voting procedure to achieve the information limit of exact recovery (correctly classify all vertices with high probability) for all @math , where @math is a function of @math . | In the special case of @math and @math , the problem of finding one community reduces to the classical planted clique problem @cite_6 . If the clique has size @math for any @math , then it cannot be uniquely determined; if @math , an exhaustive search finds the clique with high probability. 
In contrast, polynomial-time algorithms are only known to find a clique of size @math for any constant @math @cite_19 @cite_0 @cite_1 @cite_10 , and it is shown in @cite_14 that if @math , the clique can be found in @math time with high probability, and that @math may be a fundamental limit for solving the planted clique problem in nearly linear time. Recent work Meka15 shows that the degree- @math sum-of-squares (SOS) relaxation cannot find the clique unless @math ; an improved lower bound @math for the degree- @math SOS is proved in @cite_2 . Further improved lower bounds have recently been obtained in @cite_21 @cite_3 . | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_10"
],
"mid": [
"2952901144",
"1595409123",
"2569174811",
"2010601719",
"1897704527",
"2600681475",
"2166602562",
"1859301526",
"2116616571"
],
"abstract": [
"We consider two closely related problems: planted clustering and submatrix localization. The planted clustering problem assumes that a random graph is generated based on some underlying clusters of the nodes; the task is to recover these clusters given the graph. The submatrix localization problem concerns locating hidden submatrices with elevated means inside a large real-valued random matrix. Of particular interest is the setting where the number of clusters/submatrices is allowed to grow unbounded with the problem size. These formulations cover several classical models such as planted clique, planted densest subgraph, planted partition, planted coloring, and stochastic block model, which are widely used for studying community detection and clustering/bi-clustering. For both problems, we show that the space of the model parameters (cluster/submatrix size, cluster density, and submatrix mean) can be partitioned into four disjoint regions corresponding to decreasing statistical and computational complexities: (1) the regime, where all algorithms fail; (2) the regime, where the computationally expensive Maximum Likelihood Estimator (MLE) succeeds; (3) the regime, where the polynomial-time convexified MLE succeeds; (4) the regime, where a simple counting/thresholding procedure succeeds. Moreover, we show that each of these algorithms provably fails in the previous harder regimes. Our theorems establish the minimax recovery limit, which are tight up to constants and hold with a growing number of clusters/submatrices, and provide a stronger performance guarantee than previously known for polynomial-time algorithms. Our study demonstrates the tradeoffs between statistical and computational considerations, and suggests that the minimax recovery limit may not be achievable by polynomial-time algorithms.",
"",
"",
"In a random graph on n vertices, the maximum clique is likely to be of size very close to 2 lg n. However, the clique produced by applying the naive “greedy” heuristic to a random graph is unlikely to have size much exceeding lg n. The factor of two separating these estimates motivates the search for more effective heuristics. This article analyzes a heuristic search strategy, the Metropolis process, which is just one step above the greedy one in its level of sophistication. It is shown that the Metropolis process takes super-polynomial time to locate a clique that is only slightly bigger than that produced by the greedy heuristic.",
"We give a lower bound of @math for the degree-4 Sum-of-Squares SDP relaxation for the planted clique problem. Specifically, we show that on an Erdős–Rényi graph @math , with high probability there is a feasible point for the degree-4 SOS relaxation of the clique problem with an objective value of @math , so that the program cannot distinguish between a random graph and a random graph with a planted clique of size @math . This bound is tight. We build on the works of Deshpande and Montanari and , who give lower bounds of @math and @math respectively. We improve on their results by making a perturbation to the SDP solution proposed in their work, then showing that this perturbation remains PSD as the objective value approaches @math . In an independent work, Hopkins, Kothari and Potechin [HKP15] have obtained a similar lower bound for the degree- @math SOS relaxation.",
"In the hidden clique problem, one needs to find the maximum clique in an @math -vertex graph that has a clique of size @math but is otherwise random. An algorithm of Alon, Krivelevich and Sudakov that is based on spectral techniques is known to solve this problem (with high probability over the random choice of input graph) when @math for a sufficiently large constant @math . In this manuscript we present a new algorithm for finding hidden cliques. It too provably works when @math for a sufficiently large constant @math . However, our algorithm has the advantage of being much simpler (no use of spectral techniques), running faster (linear time), and experiments show that the leading constant @math is smaller than in the spectral approach. We also present linear time algorithms that experimentally find even smaller hidden cliques, though it remains open whether any of these algorithms finds hidden cliques of size @math .",
"We consider the following probabilistic model of a graph on n labeled vertices. First choose a random graph G(n, 1 2), and then choose randomly a subset Q of vertices of size k and force it to be a clique by joining every pair of vertices of Q by an edge. The problem is to give a polynomial time algorithm for finding this hidden clique almost surely for various values of k. This question was posed independently, in various variants, by Jerrum and by Kucera. In this paper we present an efficient algorithm for all k>cn0.5, for any fixed c>0, thus improving the trivial case k>cn0.5(log n)0.5. The algorithm is based on the spectral properties of the graph. © 1998 John Wiley & Sons, Inc. Random Struct. Alg., 13: 457–466, 1998",
"Given a large data matrix @math , we consider the problem of determining whether its entries are i.i.d. with some known marginal distribution @math , or instead @math contains a principal submatrix @math whose entries have marginal distribution @math . As a special case, the hidden (or planted) clique problem requires to find a planted clique in an otherwise uniformly random graph. Assuming unbounded computational resources, this hypothesis testing problem is statistically solvable provided @math for a suitable constant @math . However, despite substantial effort, no polynomial time algorithm is known that succeeds with high probability when @math . Recently Meka and Wigderson meka2013association , proposed a method to establish lower bounds within the Sum of Squares (SOS) semidefinite hierarchy. Here we consider the degree- @math SOS relaxation, and study the construction of meka2013association to prove that SOS fails unless @math . An argument presented by Barak implies that this lower bound cannot be substantially improved unless the witness construction is changed in the proof. Our proof uses the moments method to bound the spectrum of a certain random association scheme, i.e. a symmetric random matrix whose rows and columns are indexed by the edges of an Erd \"os-Renyi random graph.",
"We consider the problems of finding a maximum clique in a graph and finding a maximum-edge biclique in a bipartite graph. Both problems are NP-hard. We write both problems as matrix-rank minimization and then relax them using the nuclear norm. This technique, which may be regarded as a generalization of compressive sensing, has recently been shown to be an effective way to solve rank optimization problems. In the special case that the input graph has a planted clique or biclique (i.e., a single large clique or biclique plus diversionary edges), our algorithm successfully provides an exact solution to the original instance. For each problem, we provide two analyses of when our algorithm succeeds. In the first analysis, the diversionary edges are placed by an adversary. In the second, they are placed at random. In the case of random edges for the planted clique problem, we obtain the same bound as Alon, Krivelevich and Sudakov as well as Feige and Krauthgamer, but we use different techniques."
]
} |
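As a concrete companion to the planted clique discussion above, here is a minimal spectral-recovery sketch in the spirit of the Alon–Krivelevich–Sudakov approach cited in the row; the sizes n and k, the ±1 centering, and the 3k/4 cleanup threshold are illustrative choices of ours, not parameters from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 80  # illustrative sizes: k well above sqrt(n), so the signal is strong

# sample a G(n, 1/2) adjacency matrix
upper = np.triu(rng.random((n, n)) < 0.5, 1)
A = (upper | upper.T).astype(float)

# plant a clique on the first k vertices
clique = set(range(k))
A[np.ix_(range(k), range(k))] = 1.0
np.fill_diagonal(A, 0.0)

# spectral step: leading eigenvector of the +/-1 signed adjacency matrix;
# the planted clique pushes the top eigenvalue to ~k, above the ~2*sqrt(n) noise
B = 2.0 * A - (np.ones((n, n)) - np.eye(n))
_, vecs = np.linalg.eigh(B)
v = vecs[:, -1]  # eigh returns ascending eigenvalues; last column is the top one

# cleanup step: take the k largest-|v| coordinates as candidates, then keep
# every vertex adjacent to at least 3/4 of the candidate set
cand = np.argsort(-np.abs(v))[:k]
recovered = set(np.flatnonzero(A[:, cand].sum(axis=1) >= 0.75 * k))
overlap = len(recovered & clique)
```

With these parameters the recovered set essentially coincides with the planted clique; the hardness results quoted in the row concern the much smaller regime where k is near sqrt(n), where this spectral signal disappears.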
1510.02786 | 2784222984 | Community detection is considered for a stochastic block model graph of n vertices, with K vertices in the planted community, edge probability p for pairs of vertices both in the community, and edge probability q for other pairs of vertices. The main focus of the paper is on weak recovery of the community based on the graph G, with o(K) misclassified vertices on average, in the sublinear regime @math . A critical parameter is the effective signal-to-noise ratio @math , with @math corresponding to the Kesten-Stigum threshold. We show that a belief propagation algorithm achieves weak recovery if @math , beyond the Kesten-Stigum threshold by a factor of @math . The belief propagation algorithm only needs to run for @math iterations, with the total time complexity @math , where @math is the iterated logarithm of @math . Conversely, if @math , no local algorithm can asymptotically outperform trivial random guessing. Furthermore, a linear message-passing algorithm that corresponds to applying power iteration to the non-backtracking matrix of the graph is shown to attain weak recovery if and only if @math . In addition, the belief propagation algorithm can be combined with a linear-time voting procedure to achieve the information limit of exact recovery (correctly classify all vertices with high probability) for all @math , where @math is a function of @math . | Another recent work @cite_22 focuses on the case @math , @math for fixed constants @math and @math , and @math for @math . It is shown that no polynomial-time algorithm can attain the information-theoretic threshold of detecting the planted dense subgraph unless the planted clique problem can be solved in polynomial time (see [Hypothesis 1] HajekWuXu14 for the precise statement). For exact recovery, the MLE succeeds with high probability if @math ; however, no randomized polynomial-time solver exists, conditioned on the same planted clique hardness hypothesis. | {
"cite_N": [
"@cite_22"
],
"mid": [
"1613681423"
],
"abstract": [
"This paper studies the problem of detecting the presence of a small dense community planted in a large Erdős–Rényi random graph @math , where the edge probability within the community exceeds @math by a constant factor. Assuming the hardness of the planted clique detection problem, we show that the computational complexity of detecting the community exhibits the following phase transition phenomenon: As the graph size @math grows and the graph becomes sparser according to @math , there exists a critical value of @math , below which there exists a computationally intensive procedure that can detect far smaller communities than any computationally efficient procedure, and above which a linear-time procedure is statistically optimal. The results also lead to the average-case hardness results for recovering the dense community and approximating the densest @math -subgraph."
]
} |
1510.02786 | 2784222984 | Community detection is considered for a stochastic block model graph of n vertices, with K vertices in the planted community, edge probability p for pairs of vertices both in the community, and edge probability q for other pairs of vertices. The main focus of the paper is on weak recovery of the community based on the graph G, with o(K) misclassified vertices on average, in the sublinear regime @math . A critical parameter is the effective signal-to-noise ratio @math , with @math corresponding to the Kesten-Stigum threshold. We show that a belief propagation algorithm achieves weak recovery if @math , beyond the Kesten-Stigum threshold by a factor of @math . The belief propagation algorithm only needs to run for @math iterations, with the total time complexity @math , where @math is the iterated logarithm of @math . Conversely, if @math , no local algorithm can asymptotically outperform trivial random guessing. Furthermore, a linear message-passing algorithm that corresponds to applying power iteration to the non-backtracking matrix of the graph is shown to attain weak recovery if and only if @math . In addition, the belief propagation algorithm can be combined with a linear-time voting procedure to achieve the information limit of exact recovery (correctly classify all vertices with high probability) for all @math , where @math is a function of @math . | In sharp contrast to the computational barriers discussed in the previous two paragraphs, in the regime @math and @math for fixed @math and @math for a fixed constant @math , recent work @cite_17 derived a function @math such that if @math , exact recovery is achievable in polynomial time via semidefinite programming relaxations of ML estimation; if @math , any estimator fails to exactly recover the cluster with probability tending to one, regardless of computational cost. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1825700702"
],
"abstract": [
"The binary symmetric stochastic block model deals with a random graph of @math vertices partitioned into two equal-sized clusters, such that each pair of vertices is connected independently with probability @math within clusters and @math across clusters. In the asymptotic regime of @math and @math for fixed @math and @math , we show that the semidefinite programming relaxation of the maximum likelihood estimator achieves the optimal threshold for exactly recovering the partition from the graph with probability tending to one, resolving a conjecture of Abbe14 . Furthermore, we show that the semidefinite programming relaxation also achieves the optimal recovery threshold in the planted dense subgraph model containing a single cluster of size proportional to @math ."
]
} |
1510.02942 | 2228491634 | Multi-instance multi-label (MIML) learning is a challenging problem in many aspects. Such learning approaches may be useful for many medical diagnosis applications, including breast cancer detection and classification. In this study, a subset of the digiPATH dataset (whole-slide digital breast cancer histopathology images) is used for training and evaluation of six state-of-the-art MIML methods. Finally, a performance comparison of these approaches is given by means of effective evaluation metrics. It is shown that MIML-kNN achieves the best performance, 65.3 average precision, while most of the other methods attain acceptable results as well. | During the past few years, many MIML algorithms have been introduced. To name a few state-of-the-art MIML methods, Zhou and Zhang @cite_3 proposed MIMLSVM, which decomposes MIML into single-instance multi-label learning, and MIMLBOOST, which decomposes MIML into multi-instance single-label learning. Zhang et al. @cite_5 proposed M3MIML (Maximum Margin Method), which learns from multi-instance multi-label examples by a maximum margin strategy. MIMLRBF, a neural-network-style adaptation of the radial basis function (RBF) method @cite_6 , was proposed by Zhang et al. @cite_1 . A MIML nearest neighbor method, MIML-kNN, which uses the popular k-nearest neighbor techniques, was proposed by Min-Ling Zhang @cite_7 . Yu-Feng Li et al. @cite_0 proposed KISAR (Key Instances Sharing Among Related labels), a MIML algorithm that tries to capture the relation between instances and labels by exploiting the fact that highly relevant labels share some instances. | {
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_5"
],
"mid": [
"1985035762",
"2071860674",
"1554663460",
"2135533176",
"",
""
],
"abstract": [
"In multi-instance multi-label learning (i.e. MIML), each example is not only represented by multiple instances but also associated with multiple labels. Most existing algorithms solve MIML problem via the intuitive way of identifying its equivalence in degenerated version of MIML. However, this identification process may lose useful information encoded in training examples and therefore be harmful to the learning algorithm's performance. In this paper, a novel algorithm named MIML-kNN is proposed for MIML by utilizing the popular k-nearest neighbor techniques. Given a test example, MIML-kNN not only considers its neighbors, but also considers its citers which regard it as their own neighbors. The label set of the test example is determined by exploiting the labeling information conveyed by its neighbors and citers. Experiments on two real-world MIML tasks, i.e. scene classification and text categorization, show that MIML-kNN achieves superior performance than some existing MIML algorithms.",
"In multi-instance multi-label learning (MIML), each example is not only represented by multiple instances but also associated with multiple class labels. Several learning frameworks, such as the traditional supervised learning, can be regarded as degenerated versions of MIML. Therefore, an intuitive way to solve MIML problem is to identify its equivalence in its degenerated versions. However, this identification process would make useful information encoded in training examples get lost and thus impair the learning algorithm's performance. In this paper, RBF neural networks are adapted to learn from MIML examples. Connections between instances and labels are directly exploited in the process of first layer clustering and second layer optimization. The proposed method demonstrates superior performance on two real-world MIML tasks.",
"From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimalization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.",
"In this paper, we formalize multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels. Such a problem can occur in many real-world tasks, e.g. an image usually contains multiple patches each of which can be described by a feature vector, and the image can belong to multiple categories since its semantics can be recognized in different ways. We analyze the relationship between multi-instance multi-label learning and the learning frameworks of traditional supervised learning, multi-instance learning and multi-label learning. Then, we propose the MIMLBOOST and MIMLSVM algorithms which achieve good performance in an application to scene classification.",
"",
""
]
} |
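To make the nearest-neighbour idea behind MIML-kNN concrete, here is a toy MIML predictor in the same spirit: bags of instances are compared with the maximal Hausdorff distance and labels are taken by majority vote over the k nearest training bags. The real MIML-kNN additionally uses "citers" and a learned weight matrix, which this sketch omits; all names, data, and parameters below are illustrative, not from the cited papers.

```python
import numpy as np

def hausdorff(bag_a, bag_b):
    """Maximal Hausdorff distance between two bags of instance vectors."""
    d = np.linalg.norm(bag_a[:, None, :] - bag_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def miml_knn_predict(train_bags, train_labels, bag, k=3):
    """Toy MIML nearest-neighbour rule: rank training bags by Hausdorff
    distance to the query bag and output every label carried by a strict
    majority of the k nearest bags."""
    order = np.argsort([hausdorff(bag, tb) for tb in train_bags])[:k]
    votes = {}
    for i in order:
        for label in train_labels[i]:
            votes[label] = votes.get(label, 0) + 1
    return {label for label, c in votes.items() if c > k / 2}
```

Each training example is a (bag, label-set) pair, matching the MIML setting in which an image is a bag of patch feature vectors associated with several diagnostic labels at once.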
1510.02836 | 2256764323 | Interactive Scores (IS) are a formalism for the design and performance of interactive multimedia scenarios. IS provide temporal relations (TR), but they cannot represent conditional branching and TRs simultaneously. We propose an extension to 's IS model by including a condition on the TRs. We found that, in order to have a coherent model in all possible scenarios, durations must be flexible; however, sometimes it is possible to have fixed durations. To show the relevance of our model, we modeled an existing multimedia installation called Mariona. In Mariona there is choice, random durations and loops. Whether we can represent all the TRs available in 's model in ours, or whether we have to choose between a timed conditional branching model and a pure temporal model before writing a scenario, remains an open question. | An application to define a hierarchy and temporal relations among temporal objects is OpenMusic Maquettes @cite_1 . Unfortunately, OpenMusic is designed for composition, not real-time interaction. | {
"cite_N": [
"@cite_1"
],
"mid": [
"74362393"
],
"abstract": [
"This paper presents the computer-assisted composition environment OpenMusic and introduces OM 5.0, a new cross-platform release. The characteristics of this system will be exposed, with examples of applications in music composition and analysis."
]
} |
1510.02836 | 2256764323 | Interactive Scores (IS) are a formalism for the design and performance of interactive multimedia scenarios. IS provide temporal relations (TR), but they cannot represent conditional branching and TRs simultaneously. We propose an extension to 's IS model by including a condition on the TRs. We found out that in order to have a coherent model in all possible scenarios, durations must be flexible; however, sometimes it is possible to have fixed durations. To show the relevance of our model, we modeled an existing multimedia installation called Mariona. In Mariona there is choice, random durations and loops. Whether we can represent all the TRs available in 's model into ours, or we have to choose between a timed conditional branching model and a pure temporal model before writing a scenario, still remains as an open question. | Another model related to Interactive Scores (IS) is @cite_2 . Such systems ``follow'' the performance of a real instrument and may play multimedia associated with certain notes in the score of the piece. However, to use these systems it is necessary to play a real instrument. On the other hand, using IS the user only has to control some parameters of the piece, such as the date of the events, and the system plays the temporal objects described in the score. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2287649557"
],
"abstract": [
"Antescofois a modular anticipatory score following system that holds both instrumental and electronic scores together and is capable of executing electronic scores in synchronization with a live performance and using various controls over time. In its very basic use, it is a classical score following system, but in advanced use it enables concurrent representation and recognition of different audio descriptors (rather than pitch), control over various time scales used in music writing, and enables temporal interaction between the performance and the electronic score. Antescofo comes with a simple score language for flexible writing of time and interaction in computer music."
]
} |
1510.02836 | 2256764323 | Interactive Scores (IS) are a formalism for the design and performance of interactive multimedia scenarios. IS provide temporal relations (TR), but they cannot represent conditional branching and TRs simultaneously. We propose an extension to 's IS model by including a condition on the TRs. We found out that in order to have a coherent model in all possible scenarios, durations must be flexible; however, sometimes it is possible to have fixed durations. To show the relevance of our model, we modeled an existing multimedia installation called Mariona. In Mariona there is choice, random durations and loops. Whether we can represent all the TRs available in 's model into ours, or we have to choose between a timed conditional branching model and a pure temporal model before writing a scenario, still remains as an open question. | An advantage of using formal methods to model interactive multimedia is that they usually come with automatic verification techniques. There are numerous studies on verifying liveness, fairness, reachability, and boundedness of Petri nets @cite_3 . On the other hand, in recent years, calculi similar to ntcc have been the subject of study for automatic model checking procedures @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_3"
],
"mid": [
"2031915967",
"1996109622"
],
"abstract": [
"The language Timed Concurrent Constraint (tccp) is the extension over time of the Concurrent Constraint Programming (cc) paradigm that allows us to specify concurrent systems where timing is critical, for example reactive systems . Systems which may have an infinite number of states can be specified in tccp. Model checking is a technique which is able to verify finite-state systems with a huge number of states in an automatic way. In the last years several studies have investigated how to extend model checking techniques to systems with an infinite number of states. In this paper we propose an approach which exploits the computation model of tccp. Constraint based computations allow us to define a methodology for applying a model checking algorithm to (a class of) infinite-state systems. We extend the classical algorithm of model checking for LTL to a specific logic defined for the verification of tccp and to the tccp Structure which we define in this work for modeling the program behavior. We define a restriction on the time in order to get a finite model and then we develop some illustrative examples. To the best of our knowledge this is the first approach that defines a model checking methodology for tccp.",
"Starts with a brief review of the history and the application areas considered in the literature. The author then proceeds with introductory modeling examples, behavioral and structural properties, three methods of analysis, subclasses of Petri nets and their analysis. In particular, one section is devoted to marked graphs, the concurrent system model most amenable to analysis. Introductory discussions on stochastic nets with their application to performance modeling, and on high-level nets with their application to logic programming, are provided. Also included are recent results on reachability criteria. Suggestions are provided for further reading on many subject areas of Petri nets. >"
]
} |
1510.02834 | 1953058852 | Writing multimedia interaction systems is not easy. Their concurrent processes usually access shared resources in a non-deterministic order, often leading to unpredictable behavior. Using Pure Data (Pd) and Max MSP is possible to program concurrency, however, it is difficult to synchronize processes based on multiple criteria. Process calculi such as the Non-deterministic Timed Concurrent Constraint (ntcc) calculus, overcome that problem by representing multiple criteria as constraints. We propose using our framework Ntccrt to manage concurrency in Pd and Max. Ntccrt is a real-time capable interpreter for ntcc. Using Ntccrt externals (binary plugins) in Pd we ran models for machine improvisation and signal processing. | The first attempt to execute a multimedia interaction ntcc model was made by the authors of Lman in 2003. They ran an ntcc model to play a sequence of pitches with fixed durations in Lman. Recently, in 2006, ran ``A Concurrent Constraint Factor Oracle Model for Music Improvisation'' ( @math ) on Rueda's interpreter @cite_28 . | {
"cite_N": [
"@cite_28"
],
"mid": [
"2627058984"
],
"abstract": [
"Machine improvisation and related style simulation problems usually consider building repre- sentations of time-based media data, such as music, either by explicit coding of rules or applying machine learning methods. Stylistic learning applies such methods to musical sequences in order to capture salient musical features and organize these features into a model. The Stylistic simulation process browses the model in order to generate variant musical sequences that are stylistically consistent with the learned ma- terial. If both the learning process and the simulation process happen in real-time, in an interactive system where the computer “plays” with musicians, then Machine Improvisation is achieved. Improvisation models have to cope with a trade-off between completeness (all the possible patterns and their continuation laws are discovered) and incrementality (the completeness is ensured only asymptotically for infinite sequences). In a previous work we devised a complete and incremental model based on the Factor Oracle Algorithm. In this paper we propose a concurrent constraints model for the Factor Oracle and show how it can be used in a concurrent learning improvisation situation. Our model is based on a non-deterministic concurrent constraint process calculus (NTCC). Such an approach allows the system to respond in a faster and more flexible manner to real-life performance situations. In addition, the declarative nature of constraints greatly simplifies the expansion of the system with improvisation rules at a higher musical level. We also describe the implementation of our model in a NTCC interpreter written in Common Lisp that is capable of real time performance."
]
} |
1510.02886 | 2213805152 | Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more efficient estimation of the costs of paths by associating weights with sub-paths in the road network. The paper provides a solution to a foundational problem in this paradigm, namely that of computing the time-varying cost distribution of a path. The solution consists of several steps. We first learn a set of random variables that capture the joint distributions of sub-paths that are covered by sufficient trajectories. Then, given a departure time and a path, we select an optimal subset of learned random variables such that the random variables' corresponding paths together cover the path. This enables accurate joint distribution estimation of the path, and by transferring the joint distribution into a marginal distribution, the travel cost distribution of the path is obtained. The use of multiple learned random variables contends with data sparseness, the use of multi-dimensional histograms enables compact representation of arbitrary joint distributions that fully capture the travel cost dependencies among the edges in paths. Empirical studies with substantial trajectory data from two different cities offer insight into the design properties of the proposed solution and suggest that the solution is effective in real-world settings. | The problem of estimating deterministic travel costs has been studied extensively. 
Most such studies focus on accurately estimating the travel costs of individual edges, based on which the travel cost of a path is then computed as the sum of the travel costs of its edges. Some recent studies employ GPS trajectories that generally cover more edges than does loop detector data. However, in many cases, the available trajectories are unable to cover all edges in a road network. To address the sparseness of the data, some methods @cite_14 @cite_15 @cite_11 @cite_1 transfer the travel costs of edges that are covered by trajectories to edges that are not covered by trajectories. In particular, the travel costs of an edge can be transferred to its adjacent edges @cite_15 @cite_14 and to edges that are topologically or geographically similar to the edge @cite_11 @cite_1 . In addition, some proposals consider the temporal context @cite_14 @cite_11 @cite_1 , i.e., peak vs. off-peak hours, while transferring travel costs. However, these methods do not support travel cost distributions, and they do not model the dependencies among edges. Therefore, they do not apply to the problem we consider. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_1",
"@cite_11"
],
"mid": [
"1936915774",
"142583486",
"2144475703",
"2155044259"
],
"abstract": [
"When planning routes, drivers usually consider a multitude of different travel costs, e.g., distances, travel times, and fuel consumption. Different drivers may choose different routes between the same source and destination because they may have different driving preferences (e.g., time-efficient driving v.s. fuel-efficient driving). However, existing routing services support little in modeling multiple travel costs and personalization—they usually deliver the same routes that minimize a single travel cost (e.g., the shortest routes or the fastest routes) to all drivers. We study the problem of how to recommend personalized routes to individual drivers using big trajectory data. First, we provide techniques capable of modeling and updating different drivers' driving preferences from the drivers' trajectories while considering multiple travel costs. To recommend personalized routes, we provide techniques that enable efficient selection of a subset of trajectories from all trajectories according to a driver's preference and the source, destination, and departure time specified by the driver. Next, we provide techniques that enable the construction of a small graph with appropriate edge weights reflecting how the driver would like to use the edges based on the selected trajectories. Finally, we recommend the shortest route in the small graph as the personalized route to the driver. Empirical studies with a large, real trajectory data set from 52,211 taxis in Beijing offer insight into the design properties of the proposed techniques and suggest that they are efficient and effective.",
"This paper addresses the task of trajectory cost prediction, a new learning task for trajectories. The goal of this task is to predict the cost for an arbitrary (possibly unknown) trajectory, based on a set of previous trajectory-cost pairs. A typical example of this task is travel-time prediction on road networks. The main technical challenge here is to infer the costs of trajectories including links with no or little passage history. To tackle this, we introduce a weight propagation mechanism over the links, and show that the problem can be reduced to a simple form of kernel ridge regression. We also show that this new formulation leads us to a unifying view, where a natural choice of the kernel is suggested to an existing kernel-based alternative.",
"In this paper, we propose a citywide and real-time model for estimating the travel time of any path (represented as a sequence of connected road segments) in real time in a city, based on the GPS trajectories of vehicles received in current time slots and over a period of history as well as map data sources. Though this is a strategically important task in many traffic monitoring and routing systems, the problem has not been well solved yet given the following three challenges. The first is the data sparsity problem, i.e., many road segments may not be traveled by any GPS-equipped vehicles in present time slot. In most cases, we cannot find a trajectory exactly traversing a query path either. Second, for the fragment of a path with trajectories, they are multiple ways of using (or combining) the trajectories to estimate the corresponding travel time. Finding an optimal combination is a challenging problem, subject to a tradeoff between the length of a path and the number of trajectories traversing the path (i.e., support). Third, we need to instantly answer users' queries which may occur in any part of a given city. This calls for an efficient, scalable and effective solution that can enable a citywide and real-time travel time estimation. To address these challenges, we model different drivers' travel times on different road segments in different time slots with a three dimension tensor. Combined with geospatial, temporal and historical contexts learned from trajectories and map data, we fill in the tensor's missing values through a context-aware tensor decomposition approach. We then devise and prove an object function to model the aforementioned tradeoff, with which we find the most optimal concatenation of trajectories for an estimate through a dynamic programming solution. 
In addition, we propose using frequent trajectory patterns (mined from historical trajectories) to scale down the candidates of concatenation and a suffix-tree-based index to manage the trajectories received in the present time slot. We evaluate our method based on extensive experiments, using GPS trajectories generated by more than 32,000 taxis over a period of two months. The results demonstrate the effectiveness, efficiency and scalability of our method beyond baseline approaches.",
"We are witnessing increasing interests in the effective use of road networks. For example, to enable effective vehicle routing, weighted-graph models of transportation networks are used, where the weight of an edge captures some cost associated with traversing the edge, e.g., greenhouse gas (GHG) emissions or travel time. It is a precondition to using a graph model for routing that all edges have weights. Weights that capture travel times and GHG emissions can be extracted from GPS trajectory data collected from the network. However, GPS trajectory data typically lack the coverage needed to assign weights to all edges. This paper formulates and addresses the problem of annotating all edges in a road network with travel cost based weights from a set of trips in the network that cover only a small fraction of the edges, each with an associated ground-truth travel cost. A general framework is proposed to solve the problem. Specifically, the problem is modeled as a regression problem and solved by minimizing a judiciously designed objective function that takes into account the topology of the road network. In particular, the use of weighted PageRank values of edges is explored for assigning appropriate weights to all edges, and the property of directional adjacency of edges is also taken into account to assign weights. Empirical studies with weights capturing travel time and GHG emissions on two road networks (Skagen, Denmark, and North Jutland, Denmark) offer insight into the design properties of the proposed techniques and offer evidence that the techniques are effective."
]
} |
1510.02886 | 2213805152 | Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more efficient estimation of the costs of paths by associating weights with sub-paths in the road network. The paper provides a solution to a foundational problem in this paradigm, namely that of computing the time-varying cost distribution of a path. The solution consists of several steps. We first learn a set of random variables that capture the joint distributions of sub-paths that are covered by sufficient trajectories. Then, given a departure time and a path, we select an optimal subset of learned random variables such that the random variables' corresponding paths together cover the path. This enables accurate joint distribution estimation of the path, and by transferring the joint distribution into a marginal distribution, the travel cost distribution of the path is obtained. The use of multiple learned random variables contends with data sparseness, the use of multi-dimensional histograms enables compact representation of arbitrary joint distributions that fully capture the travel cost dependencies among the edges in paths. Empirical studies with substantial trajectory data from two different cities offer insight into the design properties of the proposed solution and suggest that the solution is effective in real-world settings. | When all edges have travel costs, it is possible to estimate the travel cost of any path, i.e., by summing up the travel costs of the edges in the paths @cite_14 @cite_15 @cite_11 . 
However, a recent study @cite_1 shows that using the sum of travel costs of edges as the travel cost of a path can be inaccurate because it ignores hard-to-formalize aspects of travel, such as turn costs. Instead, a method is proposed to identify an optimal set of sub-paths that can be concatenated into a path. The path's travel cost is then the sum of travel costs of the sub-paths, which can be obtained from trajectories. However, this method does not support travel cost distributions, and it assumes independence among sub-paths. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_1",
"@cite_11"
],
"mid": [
"1936915774",
"142583486",
"2144475703",
"2155044259"
],
"abstract": [
"When planning routes, drivers usually consider a multitude of different travel costs, e.g., distances, travel times, and fuel consumption. Different drivers may choose different routes between the same source and destination because they may have different driving preferences (e.g., time-efficient driving v.s. fuel-efficient driving). However, existing routing services support little in modeling multiple travel costs and personalization—they usually deliver the same routes that minimize a single travel cost (e.g., the shortest routes or the fastest routes) to all drivers. We study the problem of how to recommend personalized routes to individual drivers using big trajectory data. First, we provide techniques capable of modeling and updating different drivers' driving preferences from the drivers' trajectories while considering multiple travel costs. To recommend personalized routes, we provide techniques that enable efficient selection of a subset of trajectories from all trajectories according to a driver's preference and the source, destination, and departure time specified by the driver. Next, we provide techniques that enable the construction of a small graph with appropriate edge weights reflecting how the driver would like to use the edges based on the selected trajectories. Finally, we recommend the shortest route in the small graph as the personalized route to the driver. Empirical studies with a large, real trajectory data set from 52,211 taxis in Beijing offer insight into the design properties of the proposed techniques and suggest that they are efficient and effective.",
"This paper addresses the task of trajectory cost prediction, a new learning task for trajectories. The goal of this task is to predict the cost for an arbitrary (possibly unknown) trajectory, based on a set of previous trajectory-cost pairs. A typical example of this task is travel-time prediction on road networks. The main technical challenge here is to infer the costs of trajectories including links with no or little passage history. To tackle this, we introduce a weight propagation mechanism over the links, and show that the problem can be reduced to a simple form of kernel ridge regression. We also show that this new formulation leads us to a unifying view, where a natural choice of the kernel is suggested to an existing kernel-based alternative.",
"In this paper, we propose a citywide and real-time model for estimating the travel time of any path (represented as a sequence of connected road segments) in real time in a city, based on the GPS trajectories of vehicles received in current time slots and over a period of history as well as map data sources. Though this is a strategically important task in many traffic monitoring and routing systems, the problem has not been well solved yet given the following three challenges. The first is the data sparsity problem, i.e., many road segments may not be traveled by any GPS-equipped vehicles in present time slot. In most cases, we cannot find a trajectory exactly traversing a query path either. Second, for the fragment of a path with trajectories, they are multiple ways of using (or combining) the trajectories to estimate the corresponding travel time. Finding an optimal combination is a challenging problem, subject to a tradeoff between the length of a path and the number of trajectories traversing the path (i.e., support). Third, we need to instantly answer users' queries which may occur in any part of a given city. This calls for an efficient, scalable and effective solution that can enable a citywide and real-time travel time estimation. To address these challenges, we model different drivers' travel times on different road segments in different time slots with a three dimension tensor. Combined with geospatial, temporal and historical contexts learned from trajectories and map data, we fill in the tensor's missing values through a context-aware tensor decomposition approach. We then devise and prove an object function to model the aforementioned tradeoff, with which we find the most optimal concatenation of trajectories for an estimate through a dynamic programming solution. 
In addition, we propose using frequent trajectory patterns (mined from historical trajectories) to scale down the candidates of concatenation and a suffix-tree-based index to manage the trajectories received in the present time slot. We evaluate our method based on extensive experiments, using GPS trajectories generated by more than 32,000 taxis over a period of two months. The results demonstrate the effectiveness, efficiency and scalability of our method beyond baseline approaches.",
"We are witnessing increasing interests in the effective use of road networks. For example, to enable effective vehicle routing, weighted-graph models of transportation networks are used, where the weight of an edge captures some cost associated with traversing the edge, e.g., greenhouse gas (GHG) emissions or travel time. It is a precondition to using a graph model for routing that all edges have weights. Weights that capture travel times and GHG emissions can be extracted from GPS trajectory data collected from the network. However, GPS trajectory data typically lack the coverage needed to assign weights to all edges. This paper formulates and addresses the problem of annotating all edges in a road network with travel cost based weights from a set of trips in the network that cover only a small fraction of the edges, each with an associated ground-truth travel cost. A general framework is proposed to solve the problem. Specifically, the problem is modeled as a regression problem and solved by minimizing a judiciously designed objective function that takes into account the topology of the road network. In particular, the use of weighted PageRank values of edges is explored for assigning appropriate weights to all edges, and the property of directional adjacency of edges is also taken into account to assign weights. Empirical studies with weights capturing travel time and GHG emissions on two road networks (Skagen, Denmark, and North Jutland, Denmark) offer insight into the design properties of the proposed techniques and offer evidence that the techniques are effective."
]
} |
1510.02886 | 2213805152 | Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more efficient estimation of the costs of paths by associating weights with sub-paths in the road network. The paper provides a solution to a foundational problem in this paradigm, namely that of computing the time-varying cost distribution of a path. The solution consists of several steps. We first learn a set of random variables that capture the joint distributions of sub-paths that are covered by sufficient trajectories. Then, given a departure time and a path, we select an optimal subset of learned random variables such that the random variables' corresponding paths together cover the path. This enables accurate joint distribution estimation of the path, and by transferring the joint distribution into a marginal distribution, the travel cost distribution of the path is obtained. The use of multiple learned random variables contends with data sparseness, the use of multi-dimensional histograms enables compact representation of arbitrary joint distributions that fully capture the travel cost dependencies among the edges in paths. Empirical studies with substantial trajectory data from two different cities offer insight into the design properties of the proposed solution and suggest that the solution is effective in real-world settings. | Some studies consider the travel cost uncertainty of a path and model this uncertainty. However, their solutions are based on assumptions that do not apply in our setting. 
First, some studies assume that the travel cost distribution of each edge follows a standard distribution, e.g., a Gaussian distribution. However, the travel cost distribution of a road segment often follows an arbitrary distribution, as shown in recent studies @cite_4 @cite_12 and in Figures (a) and (b) in . We use multi-dimensional histograms to represent arbitrary distributions. | {
"cite_N": [
"@cite_4",
"@cite_12"
],
"mid": [
"2005854848",
"2069069662"
],
"abstract": [
"Different uses of a road network call for the consideration of different travel costs: in route planning, travel time and distance are typically considered, and green house gas (GHG) emissions are increasingly being considered. Further, travel costs such as travel time and GHG emissions are time-dependent and uncertain. To support such uses, we propose techniques that enable the construction of a multi-cost, time-dependent, uncertain graph (MTUG) model of a road network based on GPS data from vehicles that traversed the road network. Based on the MTUG, we define stochastic skyline routes that consider multiple costs and time-dependent uncertainty, and we propose efficient algorithms to retrieve stochastic skyline routes for a given source-destination pair and a start time. Empirical studies with three road networks in Denmark and a substantial GPS data set offer insight into the design properties of the MTUG and the efficiency of the stochastic skyline routing algorithms.",
"The monitoring of a system can yield a set of measurements that can be modeled as a collection of time series. These time series are often sparse, due to missing measurements, and spatiotemporally correlated, meaning that spatially close time series exhibit temporal correlation. The analysis of such time series offers insight into the underlying system and enables prediction of system behavior. While the techniques presented in the paper apply more generally, we consider the case of transportation systems and aim to predict travel cost from GPS tracking data from probe vehicles. Specifically, each road segment has an associated travel-cost time series, which is derived from GPS data. We use spatio-temporal hidden Markov models (STHMM) to model correlations among different traffic time series. We provide algorithms that are able to learn the parameters of an STHMM while contending with the sparsity, spatio-temporal correlation, and heterogeneity of the time series. Using the resulting STHMM, near future travel costs in the transportation network, e.g., travel time or greenhouse gas emissions, can be inferred, enabling a variety of routing services, e.g., eco-routing. Empirical studies with a substantial GPS data set offer insight into the design properties of the proposed framework and algorithms, demonstrating the effectiveness and efficiency of travel cost inferencing."
]
} |
1510.02886 | 2213805152 | Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more efficient estimation of the costs of paths by associating weights with sub-paths in the road network. The paper provides a solution to a foundational problem in this paradigm, namely that of computing the time-varying cost distribution of a path. The solution consists of several steps. We first learn a set of random variables that capture the joint distributions of sub-paths that are covered by sufficient trajectories. Then, given a departure time and a path, we select an optimal subset of learned random variables such that the random variables' corresponding paths together cover the path. This enables accurate joint distribution estimation of the path, and by transferring the joint distribution into a marginal distribution, the travel cost distribution of the path is obtained. The use of multiple learned random variables contends with data sparseness, the use of multi-dimensional histograms enables compact representation of arbitrary joint distributions that fully capture the travel cost dependencies among the edges in paths. Empirical studies with substantial trajectory data from two different cities offer insight into the design properties of the proposed solution and suggest that the solution is effective in real-world settings. 
| Although two recent studies @cite_9 @cite_17 employ histograms to represent travel cost distributions, they only consider travel cost distributions on individual edges, and they assume that the travel cost distributions on edges are independent and thus cannot capture the dependence among edges. | {
"cite_N": [
"@cite_9",
"@cite_17"
],
"mid": [
"2156301108",
"2075364600"
],
"abstract": [
"Reduction of greenhouse gas (GHG) emissions from transportation is an essential part of the efforts to prevent global warming and climate change. Eco-routing, which enables drivers to use the most environmentally friendly routes, is able to substantially reduce GHG emissions from vehicular transportation. The foundation of eco-routing is a weighted-graph representation of a road network in which road segments, or edges, are associated with eco-weights that capture the GHG emissions caused by traversing the edges. Due to the dynamics of traffic, the eco-weights are typically time dependent and uncertain. We formalize the problem of assigning a time-dependent, uncertain eco-weight to each edge in a road network. In particular, a sequence of histograms are employed to describe the uncertain eco-weight during different time intervals for each edge. Various compression techniques, including histogram merging and buckets reduction, are proposed to maintain compact histograms while achieving good accuracy. Histogram aggregation methods are proposed that use these to accurately estimate GHG emissions for routes. A comprehensive empirical study is conducted based on two years of GPS data from vehicles in order to gain insight into the effectiveness and efficiency of the proposed approach.",
"This paper presents a smart driving direction system leveraging the intelligence of experienced drivers. In this system, GPS-equipped taxis are employed as mobile sensors probing the traffic rhythm of a city and taxi drivers' intelligence in choosing driving directions in the physical world. We propose a time-dependent landmark graph to model the dynamic traffic pattern as well as the intelligence of experienced drivers so as to provide a user with the practically fastest route to a given destination at a given departure time. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest and customized route for end users. We build our system based on a real-world trajectory data set generated by over 33,000 taxis in a period of three months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70 percent of the routes suggested by our method are faster than the competing methods, and 20 percent of the routes share the same results. On average, 50 percent of our routes are at least 20 percent faster than the competing approaches."
]
} |
1510.03145 | 2110806612 | Distributed graph platforms like Pregel have used vertex-centric programming models to process the growing corpus of graph datasets using commodity clusters. However, the irregular structure of graphs causes load imbalances across machines, and this is exacerbated for non-stationary graph algorithms where not all parts of the graph are active at the same time. As a result, such graph platforms do not make efficient use of distributed resources. In this paper, we decouple graph partitioning from placement on hosts, and introduce strategies for elastic placement of graph partitions on Cloud VMs to reduce the cost of execution compared to a static placement, even as we minimize the increase in makespan. These strategies are innovative in modeling the graph algorithm's non-stationary behavior a priori using a metagraph sketch. We validate our strategies for several real-world graphs, using runtime traces for approximate Betweenness Centrality (BC) algorithm on our subgraph-centric GoFFish graph platform. Our strategies are able to reduce the cost of execution by up to 54%, compared to a static placement, while achieving a makespan that is within 25% of the optimal. | Pregel has spawned Apache Giraph @cite_8 as an open source implementation, and other optimizations to its programming and execution models. Giraph++ @cite_22 , Blogel @cite_3 and our own work on GoFFish @cite_9 coarsen the programming model to operate on partitions or subgraphs, with Giraph++ using partitions, GoFFish on subgraphs (weakly connected components, WCC) and Blogel on either vertices or blocks (WCC). This gives users more flexible access to graph components that can lead to faster convergence, and also reduces fine-grained vertex-level communication. This paper aims to use elastic Cloud VMs for such component-centric systems. | {
"cite_N": [
"@cite_9",
"@cite_22",
"@cite_3",
"@cite_8"
],
"mid": [
"1893414835",
"217817341",
"2259576664",
""
],
"abstract": [
"Vertex centric models for large scale graph processing are gaining traction due to their simple distributed programming abstraction. However, pure vertex centric algorithms under-perform due to large communication overheads and slow iterative convergence. We introduce GoFFish a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters, offering the added natural flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model and empirically analyze them for several real world graphs, demonstrating orders of magnitude improvements, in some cases, compared to Apache Giraph’s vertex centric framework.",
"To meet the challenge of processing rapidly growing graph and network data created by modern applications, a number of distributed graph processing systems have emerged, such as Pregel and GraphLab. All these systems divide input graphs into partitions, and employ a \"think like a vertex\" programming model to support iterative graph computation. This vertex-centric model is easy to program and has been proved useful for many graph algorithms. However, this model hides the partitioning information from the users, thus prevents many algorithm-specific optimizations. This often results in longer execution time due to excessive network messages (e.g. in Pregel) or heavy scheduling overhead to ensure data consistency (e.g. in GraphLab). To address this limitation, we propose a new \"think like a graph\" programming paradigm. Under this graph-centric model, the partition structure is opened up to the users, and can be utilized so that communication within a partition can bypass the heavy message passing or scheduling machinery. We implemented this model in a new system, called Giraph++, based on Apache Giraph, an open source implementation of Pregel. We explore the applicability of the graph-centric model to three categories of graph algorithms, and demonstrate its flexibility and superior performance, especially on well-partitioned data. For example, on a web graph with 118 million vertices and 855 million edges, the graph-centric version of connected component detection algorithm runs 63X faster and uses 204X fewer network messages than its vertex-centric counterpart.",
"The rapid growth in the volume of many real-world graphs (e.g., social networks, web graphs, and spatial networks) has led to the development of various vertex-centric distributed graph computing systems in recent years. However, real-world graphs from different domains have very different characteristics, which often create bottlenecks in vertex-centric parallel graph computation. We identify three such important characteristics from a wide spectrum of real-world graphs, namely (1)skewed degree distribution, (2)large diameter, and (3)(relatively) high density. Among them, only (1) has been studied by existing systems, but many real-world power-law graphs also exhibit the characteristics of (2) and (3). In this paper, we propose a block-centric framework, called Blogel, which naturally handles all the three adverse graph characteristics. Blogel programmers may think like a block and develop efficient algorithms for various graph problems. We propose parallel algorithms to partition an arbitrary graph into blocks efficiently, and block-centric programs are then run over these blocks. Our experiments on large real-world graphs verified that Blogel is able to achieve orders of magnitude performance improvements over the state-of-the-art distributed graph computing systems.",
""
]
} |
1510.03145 | 2110806612 | Distributed graph platforms like Pregel have used vertex-centric programming models to process the growing corpus of graph datasets using commodity clusters. However, the irregular structure of graphs causes load imbalances across machines, and this is exacerbated for non-stationary graph algorithms where not all parts of the graph are active at the same time. As a result, such graph platforms do not make efficient use of distributed resources. In this paper, we decouple graph partitioning from placement on hosts, and introduce strategies for elastic placement of graph partitions on Cloud VMs to reduce the cost of execution compared to a static placement, even as we minimize the increase in makespan. These strategies are innovative in modeling the graph algorithm's non-stationary behavior a priori using a metagraph sketch. We validate our strategies for several real-world graphs, using runtime traces for approximate Betweenness Centrality (BC) algorithm on our subgraph-centric GoFFish graph platform. Our strategies are able to reduce the cost of execution by up to 54%, compared to a static placement, while achieving a makespan that is within 25% of the optimal. | Distributed graph processing systems divide the graph into a number of partitions which are placed across machines for execution. The quality of partitioning impacts the load on a machine, cost of communication between vertices, and the iterations required to converge. A variety of partitioning techniques have been tried. Giraph's default partitioner hashes vertex IDs to machines, to balance the number of vertices per machine. Other approaches that balance the number of edges per partition, for algorithms that are edge-bound, have been tried @cite_15 . GoFFish tries to balance the number of vertices per partition while also minimizing the edge cuts between partitions. This gives partitions with well-connected components that suit its subgraph-centric model. 
Multi-level partitioning schemes have also been shown to improve the CPU utilization @cite_12 . Blogel further uses special 2D partitioners for spatial graphs to improve the convergence time for reachability algorithms. | {
"cite_N": [
"@cite_15",
"@cite_12"
],
"mid": [
"1463623805",
"2221846841"
],
"abstract": [
"The availability of larger and larger graph datasets, growing exponentially over the years, has created several new algorithmic challenges to be addressed. Sequential approaches have become unfeasible, while interest on parallel and distributed algorithms has greatly increased. Appropriately partitioning the graph as a preprocessing step can improve the degree of parallelism of its analysis. A number of heuristic algorithms have been developed to solve this problem, but many of them subdivide the graph on its vertex set, thus obtaining a vertex-partitioned graph. Aim of this paper is to explore a completely different approach based on edge partitioning, in which edges, rather than vertices, are partitioned into disjoint subsets. Contribution of this paper is twofold: first, we introduce a graph processing framework based on edge partitioning, that is flexible enough to be applied to several different graph problems. Second, we show the feasibility of these ideas by presenting a distributed edge partitioning algorithm called d-fep. Our framework is thoroughly evaluated, using both simulations and an Hadoop implementation running on the Amazon EC2 cloud. The experiments show that d-fep is efficient, scalable and obtains consistently good partitions. The resulting edge-partitioned graph can be exploited to obtain more efficient implementations of graph analysis algorithms.",
"Distributed graph processing platforms have helped emerging application domains use commodity clusters and Clouds to analyze large graphs. Vertex-centric programming models like Google Pregel, and their subgraph-centric variants, specify data-parallel application logic for a single vertex or component that execute iteratively. The locality and balancing of components within partitions affects the performance of such platforms. We propose three partitioning strategies for a subgraph-centric model, and analyze their impact on CPU utilization, communication, iterations, and makespan. We analyze these using Breadth First Search and PageRank algorithms on powerlaw and spatio-planar graphs. They are validated on a commodity cluster using our GoFFish subgraph-centric platform, and compared against Apache Giraph vertex-centric platform. Our experiments show upto 8 times improvement in utilization resulting to upto 5 times improvement of overall makespan for flat and hierarchical partitioning over the default strategy due to improved machine utilization. Further, these also exhibit better horizontal scalability relative to Giraph."
]
} |
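A minimal sketch of the hash partitioning described in the row above (Giraph's default strategy); the modulo assignment is an assumed implementation detail for illustration. It balances vertex counts across workers but is oblivious to edge locality, which is why edge cuts between partitions can be high.

```python
def hash_partition(vertex_ids, num_workers):
    """Assign each vertex to a worker by hashing its ID, mimicking
    Giraph's default partitioner. Vertex counts balance out, but no
    attempt is made to keep connected vertices on the same worker."""
    parts = [[] for _ in range(num_workers)]
    for v in vertex_ids:
        parts[hash(v) % num_workers].append(v)
    return parts

parts = hash_partition(range(1000), 4)
sizes = [len(p) for p in parts]   # balanced, structure-agnostic
```

GoFFish's partitioner, by contrast, would also minimize edges crossing these partition boundaries so that each partition holds well-connected subgraphs.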
1510.03145 | 2110806612 | Distributed graph platforms like Pregel have used vertex-centric programming models to process the growing corpus of graph datasets using commodity clusters. However, the irregular structure of graphs causes load imbalances across machines, and this is exacerbated for non-stationary graph algorithms where not all parts of the graph are active at the same time. As a result, such graph platforms do not make efficient use of distributed resources. In this paper, we decouple graph partitioning from placement on hosts, and introduce strategies for elastic placement of graph partitions on Cloud VMs to reduce the cost of execution compared to a static placement, even as we minimize the increase in makespan. These strategies are innovative in modeling the graph algorithm's non-stationary behavior a priori using a metagraph sketch. We validate our strategies for several real-world graphs, using runtime traces for approximate Betweenness Centrality (BC) algorithm on our subgraph-centric GoFFish graph platform. Our strategies are able to reduce the cost of execution by up to 54%, compared to a static placement, while achieving a makespan that is within 25% of the optimal. | However, unlike stationary algorithms @cite_2 like PageRank where all vertices are active in all supersteps, non-stationary traversal algorithms like BFS have a varying frontier set of vertices that are active in each iteration. This results in an uneven workload across different machines. We address the lack of compute balancing for non-stationary algorithms here, and the associated suboptimal Cloud costs. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2064635301"
],
"abstract": [
"Pregel [23] was recently introduced as a scalable graph mining system that can provide significant performance improvements over traditional MapReduce implementations. Existing implementations focus primarily on graph partitioning as a preprocessing step to balance computation across compute nodes. In this paper, we examine the runtime characteristics of a Pregel system. We show that graph partitioning alone is insufficient for minimizing end-to-end computation. Especially where data is very large or the runtime behavior of the algorithm is unknown, an adaptive approach is needed. To this end, we introduce Mizan, a Pregel system that achieves efficient load balancing to better adapt to changes in computing needs. Unlike known implementations of Pregel, Mizan does not assume any a priori knowledge of the structure of the graph or behavior of the algorithm. Instead, it monitors the runtime characteristics of the system. Mizan then performs efficient fine-grained vertex migration to balance computation and communication. We have fully implemented Mizan; using extensive evaluation we show that---especially for highly-dynamic workloads---Mizan provides up to 84 improvement over techniques leveraging static graph pre-partitioning."
]
} |
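The varying frontier of a non-stationary traversal, contrasted in the row above with stationary algorithms like PageRank, can be seen in a minimal level-synchronous BFS. The loop below is a toy stand-in for a Pregel superstep loop, not any platform's actual API.

```python
def bfs_frontier_sizes(adj, source):
    """Level-synchronous BFS: each loop iteration plays the role of a
    Pregel superstep, and only frontier vertices do any work."""
    visited = {source}
    frontier = [source]
    sizes = []
    while frontier:
        sizes.append(len(frontier))
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in visited:
                    visited.add(v)
                    nxt.append(v)
        frontier = nxt
    return sizes

# Complete binary tree with 127 vertices (internal vertices 0..62):
# the active set grows, then drops to zero -- unlike PageRank, where
# every vertex is active in every superstep.
adj = {i: [2 * i + 1, 2 * i + 2] for i in range(63)}
sizes = bfs_frontier_sizes(adj, 0)   # [1, 2, 4, 8, 16, 32, 64]
```

A static placement sized for the peak superstep (64 active vertices) sits idle for the early and late supersteps, which is the workload imbalance the elastic placement strategies target.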
1510.02906 | 2252862078 | We propose a temporal dynamic appearance model for online multi-person tracking. We describe a feature selection method to capture appearance changes of persons. We present an incremental learning approach to learn the model online. We propose an online multi-person tracking method which incorporates the model. We conduct comprehensive experiments to demonstrate the superiority of the model. Robust online multi-person tracking requires the correct associations of online detection responses with existing trajectories. We address this problem by developing a novel appearance modeling approach to provide accurate appearance affinities to guide data association. In contrast to most existing algorithms that only consider the spatial structure of human appearances, we exploit the temporal dynamic characteristics within temporal appearance sequences to discriminate different persons. The temporal dynamic makes a sufficient complement to the spatial structure of varying appearances in the feature space, which significantly improves the affinity measurement between trajectories and detections. We propose a feature selection algorithm to describe the appearance variations with mid-level semantic features, and demonstrate its usefulness in terms of temporal dynamic appearance modeling. Moreover, the appearance model is learned incrementally by alternately evaluating newly-observed appearances and adjusting the model parameters to be suitable for online tracking. Reliable tracking of multiple persons in complex scenes is achieved by incorporating the learned model into an online tracking-by-detection framework. Our experiments on the challenging benchmark MOTChallenge 2015 [L. Leal-Taixé, A. Milan, I. Reid, S. Roth, K. Schindler, MOTChallenge 2015: Towards a benchmark for multi-target tracking, arXiv preprint arXiv:1504.01942.] demonstrate that our method outperforms the state-of-the-art multi-person tracking algorithms. 
| Appearance modeling has gained increasing attention in the literature of multi-person tracking. Many algorithms solve the problem in a large temporal window, and perform global optimization (e.g., linear programming @cite_38 , conditional random field @cite_39 @cite_28 , or continuous energy minimization @cite_34 ) of multiple trajectories, where appearance information is formulated as constraints in the cost function. Kuo et al. @cite_22 use online-trained classifiers to estimate the appearance affinity between two short-term trajectories (tracklets), and obtain multi-person tracking using a hierarchical association framework. This work is extended in @cite_36 @cite_39 to utilize both target-specific and global appearance models for tracklet association. Brendel et al. @cite_40 and Wang et al. @cite_15 employ distance metric learning to find appropriate matches between the appearances of detections or tracklets. Due to the significant temporal delay and time-consuming iterative process, it is difficult to apply these methods to time-critical applications. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_28",
"@cite_36",
"@cite_39",
"@cite_40",
"@cite_15",
"@cite_34"
],
"mid": [
"2163385949",
"2132200263",
"1531192956",
"2082716591",
"2056250745",
"1966136723",
"2053744956",
"2083049794"
],
"abstract": [
"In this paper, we show that tracking multiple people whose paths may intersect can be formulated as a convex global optimization problem. Our proposed framework is designed to exploit image appearance cues to prevent identity switches. Our method is effective even when such cues are only available at distant time intervals. This is unlike many current approaches that depend on appearance being exploitable from frame to frame. We validate our approach on three multi-camera sport and pedestrian datasets that contain long and complex sequences. Our algorithm perseveres identities better than state-of-the-art algorithms while keeping similar MOTA scores.",
"We present an approach for online learning of discriminative appearance models for robust multi-target tracking in a crowded scene from a single camera. Although much progress has been made in developing methods for optimal data association, there has been comparatively less work on the appearance models, which are key elements for good performance. Many previous methods either use simple features such as color histograms, or focus on the discriminability between a target and the background which does not resolve ambiguities between the different targets. We propose an algorithm for learning a discriminative appearance model for different targets. Training samples are collected online from tracklets within a time sliding window based on some spatial-temporal constraints; this allows the models to adapt to target instances. Learning uses an Ad-aBoost algorithm that combines effective image descriptors and their corresponding similarity measurements. We term the learned models as OLDAMs. Our evaluations indicate that OLDAMs have significantly higher discrimination between different targets than conventional holistic color histograms, and when integrated into a hierarchical association framework, they help improve the tracking accuracy, particularly reducing the false alarms and identity switches.",
"In this paper, we tackle two key aspects of multiple target tracking problem: 1) designing an accurate affinity measure to associate detections and 2) implementing an efficient and accurate (near) online multiple target tracking algorithm. As for the first contribution, we introduce a novel Aggregated Local Flow Descriptor (ALFD) that encodes the relative motion pattern between a pair of temporally distant detections using long term interest point trajectories (IPTs). Leveraging on the IPTs, the ALFD provides a robust affinity measure for estimating the likelihood of matching detections regardless of the application scenarios. As for another contribution, we present a Near-Online Multi-target Tracking (NOMT) algorithm. The tracking problem is formulated as a data-association between targets and detections in a temporal window, that is performed repeatedly at every frame. While being efficient, NOMT achieves robustness via integrating multiple cues including ALFD metric, target dynamics, appearance similarity, and long term trajectory regularization into the model. Our ablative analysis verifies the superiority of the ALFD metric over the other conventional affinity metrics. We run a comprehensive experimental evaluation on two challenging tracking datasets, KITTI [16] and MOT [2] datasets. The NOMT method combined with ALFD metric achieves the best accuracy in both datasets with significant margins (about 10 higher MOTA) over the state-of-the-art.",
"We address the problem of multi-person tracking in a complex scene from a single camera. Although tracklet-association methods have shown impressive results in several challenging datasets, discriminability of the appearance model remains a limitation. Inspired by the work of person identity recognition, we obtain discriminative appearance-based affinity models by a novel framework to incorporate the merits of person identity recognition, which help multi-person tracking performance. During off-line learning, a small set of local image descriptors is selected to be used in on-line learned appearances-based affinity models effectively and efficiently. Given short but reliable track-lets generated by frame-to-frame association of detection responses, we identify them as query tracklets and gallery tracklets. For each gallery tracklet, a target-specific appearance model is learned from the on-line training samples collected by spatio-temporal constraints. Both gallery tracklets and query tracklets are fed into hierarchical association framework to obtain final tracking results. We evaluate our proposed system on several public datasets and show significant improvements in terms of tracking evaluation metrics.",
"We describe an online approach to learn non-linear motion patterns and robust appearance models for multi-target tracking in a tracklet association framework. Unlike most previous approaches that use linear motion methods only, we online build a non-linear motion map to better explain direction changes and produce more robust motion affinities between tracklets. Moreover, based on the incremental learned entry exit map, a multiple instance learning method is devised to produce strong appearance models for tracking; positive sample pairs are collected from different track-lets so that training samples have high diversity. Finally, using online learned moving groups, a tracklet completion process is introduced to deal with tracklets not reaching entry exit points. We evaluate our approach on three public data sets, and show significant improvements compared with state-of-art methods.",
"This paper addresses the problem of simultaneous tracking of multiple targets in a video. We first apply object detectors to every video frame. Pairs of detection responses from every two consecutive frames are then used to build a graph of tracklets. The graph helps transitively link the best matching tracklets that do not violate hard and soft contextual constraints between the resulting tracks. We prove that this data association problem can be formulated as finding the maximum-weight independent set (MWIS) of the graph. We present a new, polynomial-time MWIS algorithm, and prove that it converges to an optimum. Similarity and contextual constraints between object detections, used for data association, are learned online from object appearance and motion properties. Long-term occlusions are addressed by iteratively repeating MWIS to hierarchically merge smaller tracks into longer ones. Our results demonstrate advantages of simultaneously accounting for soft and hard contextual constraints in multitarget tracking. We outperform the state of the art on the benchmark datasets.",
"This paper presents a novel introduction of online target-specific metric learning in track fragment (tracklet) association by network flow optimization for long-term multi-person tracking. Different from other network flow formulation, each node in our network represents a tracklet, and each edge represents the likelihood of neighboring tracklets belonging to the same trajectory as measured by our proposed affinity score. In our method, target-specific similarity metrics are learned, which give rise to the appearance-based models used in the tracklet affinity estimation. Trajectory-based tracklets are refined by using the learned metrics to account for appearance consistency and to identify reliable tracklets. The metrics are then re-learned using reliable tracklets for computing tracklet affinity scores. Long-term trajectories are then obtained through network flow optimization. Occlusions and missed detections are handled by a trajectory completion step. Our method is effective for long-term tracking even when the targets are spatially close or completely occluded by others. We validate our proposed framework on several public datasets and show that it outperforms several state of art methods.",
"Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets."
]
} |
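A minimal sketch of the conventional color-histogram appearance affinity that several of the surveyed trackers build on. The histogram-intersection measure and the flat per-channel histogram here are one common, simplified choice, used for illustration only; real trackers use finer, spatially structured descriptors.

```python
import numpy as np

def color_hist(pixels, bins=8):
    """Per-channel color histogram over an (N, 3) pixel array,
    normalized to sum to 1 -- a toy appearance descriptor."""
    h = np.concatenate([
        np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return h / h.sum()

def affinity(h1, h2):
    """Histogram-intersection similarity in [0, 1]: maximal for
    identical descriptors, lower for dissimilar appearances."""
    return float(np.minimum(h1, h2).sum())

# Two hypothetical detections' pixel sets:
rng = np.random.default_rng(1)
det_a = rng.integers(0, 256, size=(500, 3))
det_b = rng.integers(0, 256, size=(500, 3))
same = affinity(color_hist(det_a), color_hist(det_a))  # maximal
diff = affinity(color_hist(det_a), color_hist(det_b))  # smaller
```

Such affinities feed the data-association step; the paper's point is that this purely spatial descriptor ignores how an appearance evolves over time.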
1510.02906 | 2252862078 | We propose a temporal dynamic appearance model for online multi-person tracking. We describe a feature selection method to capture appearance changes of persons. We present an incremental learning approach to learn the model online. We propose an online multi-person tracking method which incorporates the model. We conduct comprehensive experiments to demonstrate the superiority of the model. Robust online multi-person tracking requires the correct associations of online detection responses with existing trajectories. We address this problem by developing a novel appearance modeling approach to provide accurate appearance affinities to guide data association. In contrast to most existing algorithms that only consider the spatial structure of human appearances, we exploit the temporal dynamic characteristics within temporal appearance sequences to discriminate different persons. The temporal dynamic makes a sufficient complement to the spatial structure of varying appearances in the feature space, which significantly improves the affinity measurement between trajectories and detections. We propose a feature selection algorithm to describe the appearance variations with mid-level semantic features, and demonstrate its usefulness in terms of temporal dynamic appearance modeling. Moreover, the appearance model is learned incrementally by alternately evaluating newly-observed appearances and adjusting the model parameters to be suitable for online tracking. Reliable tracking of multiple persons in complex scenes is achieved by incorporating the learned model into an online tracking-by-detection framework. Our experiments on the challenging benchmark MOTChallenge 2015 [L. Leal-Taixé, A. Milan, I. Reid, S. Roth, K. Schindler, MOTChallenge 2015: Towards a benchmark for multi-target tracking, arXiv preprint arXiv:1504.01942.] demonstrate that our method outperforms the state-of-the-art multi-person tracking algorithms. 
| Our work focuses on building an effective appearance model for online multi-person tracking, which only considers observations up to the current frame and outputs trajectories without temporal delay. The conventional approach of appearance modeling is using descriptors, such as color histograms @cite_31 @cite_21 @cite_10 @cite_12 , to represent the targets, and computing the similarities between descriptors to indicate appearance affinities. Alternatively, Yang et al. @cite_37 use multi-cue integration to fuse color, shape and texture features to build a more sophisticated appearance model. Breitenstein et al. @cite_17 and Shu et al. @cite_26 integrate person-specific classifiers to obtain discriminative appearance models that are able to distinguish the tracked person from the background and other targets. Bae and Yoon @cite_24 use online-trained classifiers to evaluate the observation likelihood in terms of appearance information, and perform multi-person tracking within a Bayesian framework. Kim et al. @cite_19 use structured SVM to predict the appearance similarity between all pairs of targets in two consecutive frames. Bae and Yoon @cite_14 employ incremental linear discriminant analysis to find a projection that gathers the appearances from the same target and simultaneously separates the appearances from different targets. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_14",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_31",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2534411223",
"2121091546",
"2055022211",
"2171932356",
"2158734944",
"115730240",
"1987118352",
"1980675325",
"2022515186",
"2148958980"
],
"abstract": [
"In video surveillance scenarios, appearances of both humans and their nearby scenes may experience large variations due to scale and view angle changes, partial occlusions, or interactions of a crowd. These challenges may weaken the effectiveness of a dedicated target observation model even based on multiple cues, which demands an agile framework to adjust target observation models dynamically to maintain their discriminative power. Towards this end, we propose a new adaptive way to integrate multiple cues in tracking multiple humans, driven by human detections. Given that a human detection can be reliably associated with an existing trajectory, we adapt how the specifically devised models based on different cues are combined in this tracker, so as to enhance the discriminative power of the integrated observation model in its local neighborhood. This is achieved by solving a regression problem efficiently. Specifically, we employ 3 observation models for a single person tracker based on color models of part of torso regions, an elliptical head model, and bags of local features, respectively. Extensive experiments on 3 challenging surveillance datasets demonstrate long-term reliable tracking performance of this method.",
"Single camera-based multiple-person tracking is often hindered by difficulties such as occlusion and changes in appearance. In this paper, we address such problems by proposing a robust part-based tracking-by-detection framework. Human detection using part models has become quite popular, yet its extension in tracking has not been fully explored. Our approach learns part-based person-specific SVM classifiers which capture the articulations of the human bodies in dynamically changing appearance and background. With the part-based model, our approach is able to handle partial occlusions in both the detection and the tracking stages. In the detection stage, we select the subset of parts which maximizes the probability of detection, which significantly improves the detection performance in crowded scenes. In the tracking stage, we dynamically handle occlusions by distributing the score of the learned person classifier among its corresponding parts, which allows us to detect and predict partial occlusions, and prevent the performance of the classifiers from being degraded. Extensive experiments using the proposed method on several challenging sequences demonstrate state-of-the-art performance in multiple-people tracking.",
"Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.",
"This paper presents an online detection-based two-stage multi-object tracking method in dense visual surveillance scenarios with a single camera. In the local stage, a particle filter with observer selection that could deal with partial object occlusion is used to generate a set of reliable tracklets. In the global stage, the detection responses are collected from a temporal sliding window to deal with ambiguity caused by full object occlusion to generate a set of potential tracklets. The reliable tracklets generated in the local stage and the potential tracklets generated within the temporal sliding window are associated by the Hungarian algorithm on a modified pairwise tracklet association cost matrix to get the globally optimal association. This method is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results prove the effectiveness of our method.",
"In this paper, we consider a multiobject tracking problem in complex scenes. Unlike batch tracking systems using detections of the entire sequence, we propose a novel online multiobject tracking system in order to build tracks sequentially using online provided detections. To track objects robustly even under frequent occlusions, the proposed system consists of three main parts: 1) visual tracking with a novel data association with a track existence probability by associating online detections with the corresponding tracks under partial occlusions; 2) track management to associate terminated tracks for linking tracks fragmented by long-term occlusions; and 3) online model learning to generate discriminative appearance models for successful associations in the other two parts. Experimental results using challenging public data sets show the obvious performance improvement of the proposed system, compared with other state-of-the-art tracking systems. Furthermore, extensive performance analysis of the three main parts demonstrates the effects and usefulness of each component for multiobject tracking.",
"We present an online data association algorithm for multi-object tracking using structured prediction. This problem is formulated as a bipartite matching and solved by a generalized classification, specifically, Structural Support Vector Machines (S-SVM). Our structural classifier is trained based on matching results given the similarities between all pairs of objects identified in two consecutive frames, where the similarity can be defined by various features such as appearance, location, motion, etc. With an appropriate joint feature map and loss function in the S-SVM, finding the most violated constraint in training and predicting structured labels in testing are modeled by the simple and efficient Kuhn-Munkres (Hungarian) algorithm in a bipartite graph. The proposed structural classifier can be generalized effectively for many sequences without re-training. Our algorithm also provides a method to handle entering/leaving objects, short-term occlusions, and misdetections by introducing virtual agents--additional nodes in a bipartite graph. We tested our algorithm on multiple datasets and obtained comparable results to the state-of-the-art methods with great efficiency and simplicity.",
"Detection and tracking of humans in video streams is important for many applications. We present an approach to automatically detect and track multiple, possibly partially occluded humans in a walking or standing pose from a single camera, which may be stationary or moving. A human body is represented as an assembly of body parts. Part detectors are learned by boosting a number of weak classifiers which are based on edgelet features. Responses of part detectors are combined to form a joint likelihood model that includes an analysis of possible occlusions. The combined detection responses and the part detection responses provide the observations used for tracking. Trajectory initialization and termination are both automatic and rely on the confidences computed from the detection responses. An object is tracked by data association and meanshift methods. Our system can track humans with both inter-object and scene occlusions with static or non-static backgrounds. Evaluation results on a number of images and videos and comparisons with some previous methods are given.",
"We propose a generic online multi-target track-before-detect (MT-TBD) that is applicable on confidence maps used as observations. The proposed tracker is based on particle filtering and automatically initializes tracks. The main novelty is the inclusion of the target ID in the particle state, enabling the algorithm to deal with unknown and large number of targets. To overcome the problem of mixing IDs of targets close to each other, we propose a probabilistic model of target birth and death based on a Markov Random Field (MRF) applied to the particle IDs. Each particle ID is managed using the information carried by neighboring particles. The assignment of the IDs to the targets is performed using Mean-Shift clustering and supported by a Gaussian Mixture Model. We also show that the computational complexity of MT-TBD is proportional only to the number of particles. To compare our method with recent state-of-the-art works, we include a postprocessing stage suited for multi-person tracking. We validate the method on real-world and crowded scenarios, and demonstrate its robustness in scenes presenting different perspective views and targets very close to each other.",
"Online multi-object tracking with a single moving camera is a challenging problem as the assumptions of 2D conventional motion models (e.g., first or second order models) in the image coordinate no longer hold because of global camera motion. In this paper, we consider motion context from multiple objects which describes the relative movement between objects and construct a Relative Motion Network (RMN) to factor out the effects of unexpected camera motion for robust tracking. The RMN consists of multiple relative motion models that describe spatial relations between objects, thereby facilitating robust prediction and data association for accurate tracking under arbitrary camera movements. The RMN can be incorporated into various multi-object tracking frameworks and we demonstrate its effectiveness with one tracking framework based on a Bayesian filter. Experiments on benchmark datasets show that online multi-object tracking performance can be better achieved by the proposed method.",
"In this paper, we address the problem of automatically detecting and tracking a variable number of persons in complex scenes using a monocular, potentially moving, uncalibrated camera. We propose a novel approach for multiperson tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online-trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. The main contribution of this paper is to explore how these unreliable information sources can be used for robust multiperson tracking. The algorithm detects and tracks a large number of dynamically moving people in complex scenes with occlusions, does not rely on background modeling, requires no camera or ground plane calibration, and only makes use of information from the past. Hence, it imposes very few restrictions and is suitable for online applications. Our experiments show that the method yields good tracking performance in a large variety of highly dynamic scenarios, such as typical surveillance videos, webcam footage, or sports sequences. We demonstrate that our algorithm outperforms other methods that rely on additional information. Furthermore, we analyze the influence of different algorithm components on the robustness."
]
} |
1510.02906 | 2252862078 | We propose a temporal dynamic appearance model for online multi-person tracking. We describe a feature selection method to capture appearance changes of persons. We present an incremental learning approach to learn the model online. We propose an online multi-person tracking method which incorporates the model. We conduct comprehensive experiments to demonstrate the superiority of the model. Robust online multi-person tracking requires the correct associations of online detection responses with existing trajectories. We address this problem by developing a novel appearance modeling approach to provide accurate appearance affinities to guide data association. In contrast to most existing algorithms that only consider the spatial structure of human appearances, we exploit the temporal dynamic characteristics within temporal appearance sequences to discriminate different persons. The temporal dynamic makes a sufficient complement to the spatial structure of varying appearances in the feature space, which significantly improves the affinity measurement between trajectories and detections. We propose a feature selection algorithm to describe the appearance variations with mid-level semantic features, and demonstrate its usefulness in terms of temporal dynamic appearance modeling. Moreover, the appearance model is learned incrementally by alternately evaluating newly-observed appearances and adjusting the model parameters to be suitable for online tracking. Reliable tracking of multiple persons in complex scenes is achieved by incorporating the learned model into an online tracking-by-detection framework. Our experiments on the challenging benchmark MOTChallenge 2015 [L. Leal-Taixé, A. Milan, I. Reid, S. Roth, K. Schindler, MOTChallenge 2015: Towards a benchmark for multi-target tracking, arXiv preprint arXiv:1504.01942.] demonstrate that our method outperforms the state-of-the-art multi-person tracking algorithms. 
| The concept of temporal dynamic is not new in the community of computer vision. It has been introduced into action event recognition @cite_3 @cite_0 , dynamic appearance prediction @cite_5 @cite_16 , and video-based face recognition @cite_18 @cite_32 . To the best of our knowledge, ours is the first work to model temporal dynamic characteristics of dynamic appearances for multi-person tracking. We demonstrate its significant importance and practical usefulness in online multi-person tracking by solving crucial issues including feature selection, appearance matching, and model parameter estimation. | {
"cite_N": [
"@cite_18",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_5",
"@cite_16"
],
"mid": [
"2141425367",
"2134849720",
"2032304035",
"2169992457",
"2149627238",
"2153026700"
],
"abstract": [
"While traditional face recognition is typically based on still images, face recognition from video sequences has become popular. In this paper, we propose to use adaptive hidden Markov models (HMM) to perform video-based face recognition. During the training process, the statistics of training video sequences of each subject, and the temporal dynamics, are learned by an HMM. During the recognition process, the temporal characteristics of the test video sequence are analyzed over time by the HMM corresponding to each subject. The likelihood scores provided by the HMMs are compared, and the highest score provides the identity of the test video sequence. Furthermore, with unsupervised learning, each HMM is adapted with the test video sequence, which results in better modeling over time. Based on extensive experiments with various databases, we show that the proposed algorithm results in better performance than using majority voting of image-based recognition results.",
"This paper presents a method to model and recognize human faces in video sequences. Each registered person is represented by a low-dimensional appearance manifold in the ambient image space, the complex nonlinear appearance manifold expressed as a collection of subsets (named pose manifolds), and the connectivity among them. Each pose manifold is approximated by an affine plane. To construct this representation, exemplars are sampled from videos, and these exemplars are clustered with a K-means algorithm; each cluster is represented as a plane computed through principal component analysis (PCA). The connectivity between the pose manifolds encodes the transition probability between images in each of the pose manifolds and is learned from training video sequences. A maximum a posteriori formulation is presented for face recognition in test video sequences by integrating the likelihood that the input image comes from a particular pose manifold and the transition probability to this pose manifold from the previous frame. To recognize faces with partial occlusion, we introduce a weight mask into the process. Extensive experiments demonstrate that the proposed algorithm outperforms existing frame-based face recognition methods with temporal voting schemes.",
"In this work, we propose a novel video representation for activity recognition that models video dynamics with attributes of activities. A video sequence is decomposed into short-term segments, which are characterized by the dynamics of their attributes. These segments are modeled by a dictionary of attribute dynamics templates, which are implemented by a recently introduced generative model, the binary dynamic system (BDS). We propose methods for learning a dictionary of BDSs from a training corpus, and for quantizing attribute sequences extracted from videos into these BDS code words. This procedure produces a representation of the video as a histogram of BDS code words, which is denoted the bag-of-words for attribute dynamics (BoWAD). An extensive experimental evaluation reveals that this representation outperforms other state-of-the-art approaches in temporal structure modeling for complex activity recognition.",
"While approaches based on bags of features excel at low-level action classification, they are ill-suited for recognizing complex events in video, where concept-based temporal representations currently dominate. This paper proposes a novel representation that captures the temporal dynamics of windowed mid-level concept detectors in order to improve complex event recognition. We first express each video as an ordered vector time series, where each time step consists of the vector formed from the concatenated confidences of the pre-trained concept detectors. We hypothesize that the dynamics of time series for different instances from the same event class, as captured by simple linear dynamical system (LDS) models, are likely to be similar even if the instances differ in terms of low-level visual features. We propose a two-part representation composed of fusing: (1) a singular value decomposition of block Hankel matrices (SSID-S) and (2) a harmonic signature (HS) computed from the corresponding eigen-dynamics matrix. The proposed method offers several benefits over alternate approaches: our approach is straightforward to implement, directly employs existing concept detectors and can be plugged into linear classification frameworks. Results on standard datasets such as NIST's TRECVID Multimedia Event Detection task demonstrate the improved accuracy of the proposed method.",
"This paper presents a technique to learn dynamic appearance models from a small number of training frames. Under this framework, dynamic appearance is modelled as an unknown operator that satisfies certain interpolation conditions and that can be efficiently identified using very little a priori information with off-the-shelf software. The advantages of the proposed method are illustrated with several examples where the learned dynamics accurately predict the appearance of the targets preventing tracking failures due to occlusion or clutter.",
"Dynamic appearance is one of the most important cues for tracking and identifying moving people. However, direct modeling spatio-temporal variations of such appearance is often a difficult problem due to their high dimensionality and nonlinearities. In this paper we present a human tracking system that uses a dynamic appearance and motion modeling framework based on the use of robust system dynamics identification and nonlinear dimensionality reduction techniques. The proposed system learns dynamic appearance and motion models from a small set of initial frames and does not require prior knowledge such as gender or type of activity. The advantages of the proposed tracking system are illustrated with several examples where the learned dynamics accurately predict the location and appearance of the targets in future frames, preventing tracking failures due to model drifting, target occlusion and scene clutter."
]
} |
1510.02131 | 2219856611 | Recently, there has been a flurry of industrial activity around logo recognition, such as Ditto's service for marketers to track their brands in user-generated images, and LogoGrab's mobile app platform for logo recognition. However, relatively little academic or open-source logo recognition progress has been made in the last four years. Meanwhile, deep convolutional neural networks (DCNNs) have revolutionized a broad range of object recognition applications. In this work, we apply DCNNs to logo recognition. We propose several DCNN architectures, with which we surpass published state-of-the-art accuracy on a popular logo recognition dataset. | Neural networks have played a part in logo detection as well. The vehicle model recognition system of @cite_4 applied a probabilistic neural network on top of SIFT features. Francesconi @cite_13 used recursive neural networks to classify black-and-white logos. Duffner & Garcia @cite_9 fed pixel values directly into a convolutional neural network with two convolution layers to detect watermarks on television. | {
"cite_N": [
"@cite_13",
"@cite_9",
"@cite_4"
],
"mid": [
"1597757028",
"1955131295",
"2069428064"
],
"abstract": [
"In this paper we propose recognizing logo images by using an adaptive model referred to as a recursive artificial neural network. At first, logo images are converted into a structured representation based on contour trees. Recursive neural networks are then learnt using the contour trees as inputs to the neural nets. On the other hand, the contour-tree is constructed by associating a node with each exterior or interior contour extracted from the logo instance. Nodes in the tree are labeled by a feature vector, which describes the contour by means of its perimeter, surrounded area, and a synthetic representation of its curvature plot. The contour-tree representation contains the topological structured information of the logo and continuous values pertaining to each contour node. Hence symbolic and sub-symbolic information coexist in the contour-tree representation of the logo image. Experimental results are reported on 40 real logos distorted with artificial noise and the performance of the recursive neural network is compared with two other types of neural approaches.",
"In this paper, we present a connectionist approach for detecting and precisely localizing transparent logos in TV programs. Our system automatically synthesizes simple problem-specific feature extractors from a training set of logo images, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the logo pattern to analyze. We present in detail the design of our architecture, our learning strategy and the resulting process of logo detection. We also provide experimental results to illustrate the robustness of our approach, that does not require any local preprocessing and leads to a straightforward real time implementation.",
"This paper deals with a novel vehicle manufacturer and model recognition scheme, which is enhanced by color recognition for more robust results. A probabilistic neural network is assessed as a classifier and it is demonstrated that relatively simple image processing measurements can be used to obtain high performance vehicle authentication. The proposed system is assisted by previously developed modules for license plate recognition, symmetry axis detection, and image phase congruency calculation. The reported results indicate a high recognition rate and a fast processing time, making the system suitable for real-time applications."
]
} |
1510.02131 | 2219856611 | Recently, there has been a flurry of industrial activity around logo recognition, such as Ditto's service for marketers to track their brands in user-generated images, and LogoGrab's mobile app platform for logo recognition. However, relatively little academic or open-source logo recognition progress has been made in the last four years. Meanwhile, deep convolutional neural networks (DCNNs) have revolutionized a broad range of object recognition applications. In this work, we apply DCNNs to logo recognition. We propose several DCNN architectures, with which we surpass published state-of-the-art accuracy on a popular logo recognition dataset. | Used car ad verification. Logo recognition (e.g., of Ford or Volkswagen logos) has been used as one of several signals for classifying cars by make and model @cite_4 . The logo recognition component of that system consists of SIFT features classified with a Probabilistic Neural Network. This would be useful for identifying inaccurate listings on eBay or Autotrader used car classified ads -- e.g., flagging an ad labeled as a Honda CR-V that contains photos of a Volkswagen Passat. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2069428064"
],
"abstract": [
"This paper deals with a novel vehicle manufacturer and model recognition scheme, which is enhanced by color recognition for more robust results. A probabilistic neural network is assessed as a classifier and it is demonstrated that relatively simple image processing measurements can be used to obtain high performance vehicle authentication. The proposed system is assisted by previously developed modules for license plate recognition, symmetry axis detection, and image phase congruency calculation. The reported results indicate a high recognition rate and a fast processing time, making the system suitable for real-time applications."
]
} |
1510.02131 | 2219856611 | Recently, there has been a flurry of industrial activity around logo recognition, such as Ditto's service for marketers to track their brands in user-generated images, and LogoGrab's mobile app platform for logo recognition. However, relatively little academic or open-source logo recognition progress has been made in the last four years. Meanwhile, deep convolutional neural networks (DCNNs) have revolutionized a broad range of object recognition applications. In this work, we apply DCNNs to logo recognition. We propose several DCNN architectures, with which we surpass published state-of-the-art accuracy on a popular logo recognition dataset. | Watermark removal. @cite_8 and @cite_9 used logo recognition as part of a system for removing logos and watermarks from television content. Note that this application does not necessarily need to identify the type of logo (e.g. Apple or Adidas); it simply needs to localize logos and remove them. | {
"cite_N": [
"@cite_9",
"@cite_8"
],
"mid": [
"1955131295",
"1985026962"
],
"abstract": [
"In this paper, we present a connectionist approach for detecting and precisely localizing transparent logos in TV programs. Our system automatically synthesizes simple problem-specific feature extractors from a training set of logo images, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the logo pattern to analyze. We present in detail the design of our architecture, our learning strategy and the resulting process of logo detection. We also provide experimental results to illustrate the robustness of our approach, that does not require any local preprocessing and leads to a straightforward real time implementation.",
"Most commercial television channels use video logos, which can be considered a form of visible watermark, as a declaration of intellectual property ownership. They are also used as a symbol of authorization to rebroadcast when original logos are used in conjunction with newer logos. An unfortunate side effect of such logos is the concomitant decrease in viewing pleasure. In this paper, we use the temporal correlation of video frames to detect and remove video logos. In the video-logo-detection part, as an initial step, the logo boundary box is first located by using a distance threshold of video frames and is further refined by employing a comparison of edge lengths. Second, our proposed Bayesian classifier framework locates fragments of logos called logo-lets. In this framework, we systematically integrate the prior knowledge about the location of the video logos and their intrinsic local features to achieve a robust detection result. In our logo-removal part, after the logo region is marked, a matching technique is used to find the best replacement patch for the marked region within that video shot. This technique is found to be useful for small logos. Furthermore, we extend the image inpainting technique to videos. Unlike the use of 2D gradients in the image inpainting technique, we inpaint the logo region of video frames by using 3D gradients exploiting the temporal correlations in video. The advantage of this algorithm is that the inpainted regions are consistent with the surrounding texture and hence the result is perceptually pleasing. We present the results of our implementation and demonstrate the utility of our method for logo removal."
]
} |
1510.02131 | 2219856611 | Recently, there has been a flurry of industrial activity around logo recognition, such as Ditto's service for marketers to track their brands in user-generated images, and LogoGrab's mobile app platform for logo recognition. However, relatively little academic or open-source logo recognition progress has been made in the last four years. Meanwhile, deep convolutional neural networks (DCNNs) have revolutionized a broad range of object recognition applications. In this work, we apply DCNNs to logo recognition. We propose several DCNN architectures, with which we surpass published state-of-the-art accuracy on a popular logo recognition dataset. | In our view, logo recognition is an instantiation of the broader problem of object recognition. Recently, Deep Convolutional Neural Networks (DCNNs) have unleashed a torrent of progress on object recognition. In the ImageNet object classification challenge, DCNNs have posted accuracy improvements of several percentage points per year. Using DCNN features, state-of-the-art accuracy has been surpassed on scene classification and fine-grained bird classification @cite_3 . DCNN features have also enabled outperforming state-of-the-art accuracy on human attribute detection and visual instance retrieval @cite_11 . | {
"cite_N": [
"@cite_3",
"@cite_11"
],
"mid": [
"2953360861",
"2953391683"
],
"abstract": [
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks."
]
} |
1510.02125 | 2271548840 | A common use of language is to refer to visually present objects. Modelling it in computers requires modelling the link between language and perception. The "words as classifiers" model of grounded semantics views words as classifiers of perceptual contexts, and composes the meaning of a phrase through composition of the denotations of its component words. It was recently shown to perform well in a game-playing scenario with a small number of object types. We apply it to two large sets of real-world photographs that contain a much larger variety of types and for which referring expressions are available. Using a pre-trained convolutional neural network to extract image features, and augmenting these with in-picture positional information, we show that the model achieves performance competitive with the state of the art in a reference resolution task (given expression, find bounding box of its referent), while, as we argue, being conceptually simpler and more flexible. | This suggestion has variously been taken up in computational work. An early example is Deb Roy's work from the early 2000s @cite_0 @cite_9 @cite_14 . In @cite_0 , computer vision techniques are used to detect object boundaries in a video feed, and to compute colour features (mean colour pixel value), positional features, and features encoding the relative spatial configuration of objects. These features are then associated in a learning process with certain words, resulting in an association of colour features with colour words, spatial features with prepositions, etc., and based on this, these words can be interpreted with reference to the scene currently presented to the video feed. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_14"
],
"mid": [
"1598442403",
"2156050092",
"2101317480"
],
"abstract": [
"We present a trainable, visually-grounded, spoken language understanding system. The system acquires a grammar and vocabulary from a “show-and-tell” procedure in which visual scenes are paired with verbal descriptions. The system is embodied in a table-top mounted active vision platform. During training, a set of objects is placed in front of the vision system. Using a laser pointer, the system points to objects in random sequence, prompting a human teacher to provide spoken descriptions of the selected objects. The descriptions are transcribed and used to automatically acquire a visually-grounded vocabulary and grammar. Once trained, a person can interact with the system by verbally describing objects placed in front of the system. The system recognizes and robustly parses the speech and points, in real-time, to the object which best fits the visual semantics of the spoken description.",
"A spoken language generation system has been developed that learns to describe objects in computer-generated visual scenes. The system is trained by a "show-and-tell" procedure in which visual scenes are paired with natural language descriptions. Learning algorithms acquire probabilistic structures which encode the visual semantics of phrase structure, word classes, and individual words. Using these structures, a planning algorithm integrates syntactic, semantic, and contextual constraints to generate natural and unambiguous descriptions of objects in novel scenes. The system generates syntactically well-formed compound adjective noun phrases, as well as relative spatial clauses. The acquired linguistic structures generalize from training data, enabling the production of novel word sequences which were never observed during training. The output of the generation system is synthesized using word-based concatenative synthesis drawing from the original training speech corpus. In evaluations of semantic comprehension by human judges, the performance of automatically generated spoken descriptions was comparable to human-generated descriptions. This work is motivated by our long-term goal of developing spoken language processing systems which grounds semantics in machine perception and action. © 2002 Elsevier Science Ltd. All rights reserved.",
"We use words to communicate about things and kinds of things, their properties, relations and actions. Researchers are now creating robotic and simulated systems that ground language in machine perception and action, mirroring human abilities. A new kind of computational model is emerging from this work that bridges the symbolic realm of language with the physical realm of real-world referents. It explains aspects of context-dependent shifts of word meaning that cannot easily be explained by purely symbolic models. An exciting implication for cognitive modeling is the use of grounded systems to ‘step into the shoes’ of humans by directly processing first-personperspective sensory data, providing a new methodology for testing various hypotheses of situated communication and learning."
]
} |
1510.02125 | 2271548840 | A common use of language is to refer to visually present objects. Modelling it in computers requires modelling the link between language and perception. The "words as classifiers" model of grounded semantics views words as classifiers of perceptual contexts, and composes the meaning of a phrase through composition of the denotations of its component words. It was recently shown to perform well in a game-playing scenario with a small number of object types. We apply it to two large sets of real-world photographs that contain a much larger variety of types and for which referring expressions are available. Using a pre-trained convolutional neural network to extract image features, and augmenting these with in-picture positional information, we show that the model achieves performance competitive with the state of the art in a reference resolution task (given expression, find bounding box of its referent), while, as we argue, being conceptually simpler and more flexible. | The second area to mention here is the recently very active one of image-to-text generation, which has been spurred on by the availability of large datasets and competitions structured around them. The task here typically is to generate a description (a caption) for a given image. A frequently taken approach is to use a convolutional neural network (CNN) to map the image to a dense vector (which we do as well, as we will describe below), and then condition a neural language model (typically, an LSTM) on this to produce an output string @cite_4 @cite_2 . Other work modifies this approach somewhat, by first using what its authors call "word detectors" to specifically propose words for image regions, out of which the caption is then generated. This has some similarity to our word models as described below, but again is tailored more towards generation. | {
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2951912364",
"2952782394"
],
"abstract": [
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments."
]
} |
1510.02125 | 2271548840 | A common use of language is to refer to visually present objects. Modelling it in computers requires modelling the link between language and perception. The "words as classifiers" model of grounded semantics views words as classifiers of perceptual contexts, and composes the meaning of a phrase through composition of the denotations of its component words. It was recently shown to perform well in a game-playing scenario with a small number of object types. We apply it to two large sets of real-world photographs that contain a much larger variety of types and for which referring expressions are available. Using a pre-trained convolutional neural network to extract image features, and augmenting these with in-picture positional information, we show that the model achieves performance competitive with the state of the art in a reference resolution task (given expression, find bounding box of its referent), while, as we argue, being conceptually simpler and more flexible. | Two very recent papers carry this type of approach over to the problem of resolving references to objects in images. Both @cite_3 and @cite_8 use CNNs to encode image information (and interestingly, both combine, in different ways, information from the candidate region with more global information about the image as a whole), on which they condition an RNN to get a prediction score for fit of candidate region and referring expression. As we will discuss below, our approach has some similarities, but can be seen as being more compositional, as the expression score is more clearly composed out of individual word scores (with rule-driven composition, however). We will directly compare our results to those reported in these papers, as we were able to use the same datasets. | {
"cite_N": [
"@cite_3",
"@cite_8"
],
"mid": [
"2963735856",
"2144960104"
],
"abstract": [
"In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.",
"We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https: github.com mjhucla Google_Refexp_toolbox."
]
} |
1510.02086 | 2184993403 | In this paper, we report our ongoing investigations of the inherent non-determinism in contemporary execution environments that can potentially lead to divergence in state of a multi-channel hardware/software system. Our approach involved setting up of experiments to study execution path variability of a simple program by tracing its execution at the kernel level. In the first of the two experiments, we analyzed the execution path by repeated execution of the program. In the second, we executed in parallel two instances of the same program, each pinned to a separate processor core. Our results show that for a program executing in a contemporary hardware/software platform, there is sufficient path non-determinism in kernel space that can potentially lead to diversity in replicated architectures. We believe the execution non-determinism can impact the activation of residual systematic faults in software. If this is true, then the inherent diversity can be used together with architectural means to protect safety related systems against residual systematic faults in the operating systems. | With respect to the variability of systems, @cite_9 reports on an experimental study of the variation of execution times of a program, particularly the influence of operating system jitter on its variability, while @cite_3 performed a study on the variability of execution time in multicore architectures. These two related works focus on the execution time of user-space applications, in contrast to our work on kernel execution paths. | {
"cite_N": [
"@cite_9",
"@cite_3"
],
"mid": [
"2154735742",
"1037740039"
],
"abstract": [
"In computer experiments, many research works rely on the accuracy of measured programs' execution time. We observe that not all studies consider that repeated executions of the same program, under the same experimental conditions, may produce statistically significant different completion times. In this work, we experimentally demonstrate that several sources of OS Jitter affect the execution time of computer programs. We compare various execution time samples using three test protocols, which apply different statistical techniques. The results show that significant differences are detected in all evaluated scenarios.",
"The recent growth in the number of processing units in today's multicore processor architectures enables multiple threads to execute simultaneously, achieving better performance by exploiting thread-level parallelism. With the architectural complexity of these new state-of-the-art designs comes a need to better understand the interactions between the operating system layers, the applications and the underlying hardware platforms. The ability to characterise and to quantify those interactions can be useful in the processes of performance evaluation and analysis, compiler optimisations and operating system job scheduling, allowing to achieve better performance stability, reproducibility and predictability. We consider in our study performance instability as variations in program execution times. While these variations are statistically insignificant for large sequential applications, we observe that parallel native OpenMP programs have less performance stability. Understanding the performance instability in current multicore architectures is even more complicated by the variety of factors and sources influencing the applications' performance."
]
} |
1510.02086 | 2184993403 | In this paper, we report our ongoing investigations of the inherent non-determinism in contemporary execution environments that can potentially lead to divergence in state of a multi-channel hardware/software system. Our approach involved setting up of experiments to study execution path variability of a simple program by tracing its execution at the kernel level. In the first of the two experiments, we analyzed the execution path by repeated execution of the program. In the second, we executed in parallel two instances of the same program, each pinned to a separate processor core. Our results show that for a program executing in a contemporary hardware/software platform, there is sufficient path non-determinism in kernel space that can potentially lead to diversity in replicated architectures. We believe the execution non-determinism can impact the activation of residual systematic faults in software. If this is true, then the inherent diversity can be used together with architectural means to protect safety related systems against residual systematic faults in the operating systems. | Related to our work, on the basis of using the non-deterministic properties of systems for protection against faults, is the INDEXYS project @cite_13 , whose stated objective is to investigate how the intrinsic diversity of complex operating systems helps in detecting faults in computing platforms. The project proposers state their intention of employing architectural protection schemes to mask and/or detect random faults through temporal relaxation. We are, however, not aware of the current state of the project. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1543608709"
],
"abstract": [
"Embedded computing systems have become a pervasive aspect in virtually all application domains, such as industrial, mobile communication, transportation and medical. Due to increasing computational capabilities of microcomputers and their decreasing cost, new functionality has been enabled (e.g., driver assistance systems) and cost savings have become possible, e.g., by the replacement of mechanical components by embedded computers. Conventionally, each application domain tends to develop customized solutions, often re-inventing concepts that are already applied in other domains. It is therefore expedient to invest into a generic embedded system architecture that supports the development of dependable embedded applications in many different application domains, using the same hardware devices and software modules. INDEXYS targets to pave the way from the European Commission Framework 7 GENESYS Project reference computing architecture approach towards pilot applications in the automotive-, railway- and aerospace industrial domains. INDEXYS will follow-up GENESYS project results and will implement selected industrial-grade services of GENESYS architectural concepts. The results of laying together GENESYS, INDEXYS and the new ARTEMIS project ACROSS, which will develop multi processor systems on a chip (MPSoC) using GENESYS reference architecture and services, will provide integral cross-domain architecture and platform, design- and verification- tools, middleware and flexible FPGA- or chip- based devices lowering OEM cost of development and production at faster time-to-market."
]
} |
1510.02078 | 2094931013 | The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant, available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai). | Various sensor-based methods for automated dietary monitoring have been proposed over the years. Amft and Troster @cite_13 explored sensors on the wrists, head and neck to automatically detect food intake gestures, chewing, and swallowing from accelerometer and acoustic sensor data. The authors of @cite_9 built a system for monitoring swallowing and chewing using a piezoelectric strain gauge positioned below the ear and a small microphone located over the laryngopharynx. Yatani and Truong presented a wearable acoustic sensor attached to the user's neck @cite_7 , while the authors of @cite_20 explored the use of a neckband for nutrition monitoring. | {
"cite_N": [
"@cite_9",
"@cite_20",
"@cite_13",
"@cite_7"
],
"mid": [
"2151703098",
"2068367136",
"2050282683",
"2144685889"
],
"abstract": [
"A methodology of studying of ingestive behavior by non-invasive monitoring of swallowing (deglutition) and chewing (mastication) has been developed. The target application for the developed methodology is to study the behavioral patterns of food consumption and producing volumetric and weight estimates of energy intake. Monitoring is non-invasive based on detecting swallowing by a sound sensor located over laryngopharynx or by a bone conduction microphone and detecting chewing through a below-the-ear strain sensor. Proposed sensors may be implemented in a wearable monitoring device, thus enabling monitoring of ingestive behavior in free living individuals. In this paper, the goals in the development of this methodology are two-fold. First, a system comprised of sensors, related hardware and software for multimodal data capture is designed for data collection in a controlled environment. Second, a protocol is developed for manual scoring of chewing and swallowing for use as a gold standard. The multi-modal data capture was tested by measuring chewing and swallowing in twenty one volunteers during periods of food intake and quiet sitting (no food intake). Video footage and sensor signals were manually scored by trained raters. Inter-rater reliability study for three raters conducted on the sample set of 5 subjects resulted in high average intra-class correlation coefficients of 0.996 for bites, 0.988 for chews, and 0.98 for swallows. The collected sensor signals and the resulting manual scores will be used in future research as a gold standard for further assessment of sensor design, development of automatic pattern recognition routines, and study of the relationship between swallowing chewing and ingestive behavior.",
"We build on previous work [5] that demonstrated, in simple isolated experiments, how head and neck related events (e.g. swallowing, head motion) can be detected using an unobtrusive, textile capacitive sensor integrated in a collar like neckband. We have now developed a 2nd generation that allows long term recording in real life environments in conjunction with a low power Bluetooth enabled smart phone. It allows the system to move from the detection of individual swallows which is too unreliable for practical applications to an analysis of the statistical distribution of swallow frequency. Such an analysis allows the detection of \"nutrition events\" such as having lunch or breakfast. It also allows us to see the general level of activity and distinguish between just being absolutely quiet (no motion) and sleeping. The neckband can be useful in a variety of applications such as cognitive disease monitoring and elderly care.",
"Objective: An imbalanced diet elevates health risks for many chronic diseases including obesity. Dietary monitoring could contribute vital information to lifestyle coaching and diet management, however, current monitoring solutions are not feasible for a long-term implementation. Towards automatic dietary monitoring, this work targets the continuous recognition of dietary activities using on-body sensors. Methods: An on-body sensing approach was chosen, based on three core activities during intake: arm movements, chewing and swallowing. In three independent evaluation studies the continuous recognition of activity events was investigated and the precision-recall performance analysed. An event recognition procedure was deployed, that addresses multiple challenges of continuous activity recognition, including the dynamic adaptability for variable-length activities and flexible deployment by supporting one to many independent classes. The approach uses a sensitive activity event search followed by a selective refinement of the detection using different information fusion schemes. The method is simple and modular in design and implementation. Results: The recognition procedure was successfully adapted to the investigated dietary activities. Four intake gesture categories from arm movements and two food groups from chewing cycle sounds were detected and identified with a recall of 80-90% and a precision of 50-64%. The detection of individual swallows resulted in 68% recall and 20% precision. Sample-accurate recognition rates were 79% for movements, 86% for chewing and 70% for swallowing. Conclusions: Body movements and chewing sounds can be accurately identified using on-body sensors, demonstrating the feasibility of on-body dietary monitoring. Further investigations are needed to improve the swallowing spotting performance.",
"Accurate activity recognition enables the development of a variety of ubiquitous computing applications, such as context-aware systems, lifelogging, and personal health systems. Wearable sensing technologies can be used to gather data for activity recognition without requiring sensors to be installed in the infrastructure. However, the user may need to wear multiple sensors for accurate recognition of a larger number of different activities. We developed a wearable acoustic sensor, called BodyScope, to record the sounds produced in the user's throat area and classify them into user activities, such as eating, drinking, speaking, laughing, and coughing. The F-measure of the Support Vector Machine classification of 12 activities using only our BodyScope sensor was 79.5%. We also conducted a small-scale in-the-wild study, and found that BodyScope was able to identify four activities (eating, drinking, speaking, and laughing) at 71.5% accuracy."
]
} |
1510.02078 | 2094931013 | The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant, available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai). | With the emergence of low-cost, high-resolution wearable cameras, recording individuals as they perform everyday activities such as eating has been gaining appeal @cite_3 . In this approach, individuals wear cameras that take first-person point-of-view photographs periodically throughout the day. Although first-person point-of-view images offer a viable alternative to direct observation, one of the fundamental problems is image analysis. All captured images must be manually coded for salient content (e.g., evidence of eating activity), a process that tends to be tedious and time-consuming. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2088214743"
],
"abstract": [
"Background Objectives: The accuracy of dietary recalls might be enhanced by providing participants with photo images of foods they consumed during the test period. Subjects Methods: We examined the feasibility of a system (Image-Diet Day) that is a user-initiated camera-equipped mobile phone that is programmed to automatically capture and transmit images to a secure website in conjunction with computerassisted, multipass, 24-h dietary recalls in 14 participants during 2007. Participants used the device during eating periods on each of the three independent days. Image processing filters successfully eliminated underexposed, overexposed and blurry images. The captured images were accessed by the participants using the ImageViewer software while completing the 24-h dietary recall on the following day. Results: None of the participants reported difficulty using the ImageViewer. Images were deemed ‘helpful’ or ‘sort of helpful’ by 93 of participants. A majority (79 ) of users reported having no technical problems, but 71 rated the burden of wearing the device as somewhat to very difficult, owing to issues such as limited battery life, self-consciousness about wearing the device in public and concerns about the field of view of the camera. Conclusion: Overall, these findings suggest that automated imaging is a promising technology to facilitate dietary recall. The challenge of managing the thousands of images generated can be met. Smaller devices with a broader field of view may aid in overcoming self-consciousness of the user with using or wearing the device. European Journal of Clinical Nutrition advance online publication, 18 May 2011; doi:10.1038 ejcn.2011.75"
]
} |
1510.02078 | 2094931013 | The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant, available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai). | Over the past decade, research in computer vision has been moving towards "in the wild" approaches. Recent research has focussed on recognizing realistic actions in videos @cite_22 , unconstrained face verification and labeling @cite_17 , and object detection and recognition in natural images @cite_28 . Food recognition in the wild using vision-based methods is growing as a topic of interest, with @cite_14 showing promise. | {
"cite_N": [
"@cite_28",
"@cite_14",
"@cite_22",
"@cite_17"
],
"mid": [
"2031489346",
"2127467614",
"2100916003",
"2536626143"
],
"abstract": [
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"Food images have been receiving increased attention in recent dietary control methods. We present the current status of our web-based system that can be used as a dietary management support system by ordinary Internet users. The system analyzes image archives of the user to identify images of meals. Further image analysis determines the nutritional composition of these meals and stores the data to form a Foodlog. The user can view the data in different formats, and also edit the data to correct any mistakes that occurred during image analysis. This paper presents detailed analysis of the performance of the current system and proposes an improvement of analysis by pre-classification and personalization. As a result, the accuracy of food balance estimation is significantly improved.",
"In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92 and 26.34 , respectively, and 31.68 when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance."
]
} |
1510.02078 | 2094931013 | The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant, available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai). | Finally, human computation lies in-between completely manual and fully-automated vision-based image analysis. PlateMate @cite_2 crowdsources nutritional analysis from food photographs using Amazon Mechanical Turk, and @cite_5 investigated the use of crowdsourcing to detect eating moments from first-person point-of-view images. Despite the promise of these crowdsourcing-based approaches, there are clear benefits to a fully automated method in economic terms, and possibly with regard to privacy as well. | {
"cite_N": [
"@cite_5",
"@cite_2"
],
"mid": [
"1996752302",
"2111298664"
],
"abstract": [
"There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual's eating behavior. Currently used dietary assessment and journaling approaches have several limitations; they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach where we leverage human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both in terms of automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, where 17,575 images were collected in total, our method was able to recognize eating moments with 89.68 accuracy.",
"We introduce PlateMate, a system that allows users to take photos of their meals and receive estimates of food intake and composition. Accurate awareness of this information can help people monitor their progress towards dieting goals, but current methods for food logging via self-reporting, expert observation, or algorithmic analysis are time-consuming, expensive, or inaccurate. PlateMate crowdsources nutritional analysis from photographs using Amazon Mechanical Turk, automatically coordinating untrained workers to estimate a meal's calories, fat, carbohydrates, and protein. We present the Management framework for crowdsourcing complex tasks, which supports PlateMate's nutrition analysis workflow. Results of our evaluations show that PlateMate is nearly as accurate as a trained dietitian and easier to use for most users than traditional self-reporting."
]
} |
1510.02073 | 2015023291 | We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information from a person's field-of-view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to. Our method matches images and video taken from the first-person perspective with the reference corpus and refines the results using the first-person's head orientation information obtained using the device sensors. We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions. | Accurate indoor localization has been an area of active research @cite_19 . Indoor localization can leverage GSM @cite_29 , active badges @cite_10 , 802.11b wireless ethernet @cite_2 , bluetooth and WAP @cite_11 , listeners and beacons @cite_22 , radiofrequency @cite_0 technologies and SLAM @cite_17 . | {
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_17",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_10",
"@cite_11"
],
"mid": [
"2112737587",
"2132110947",
"2109749443",
"2170102584",
"169989506",
"2115513410",
"2094204865",
"1996660370"
],
"abstract": [
"This paper presents the design, implementation, and evaluation of Cricket , a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration.",
"Accurate indoor localization has long been an objective of the ubiquitous computing research community, and numerous indoor localization solutions based on 802.11, Bluetooth, ultrasound and infrared technologies have been proposed. This paper presents the first accurate GSM indoor localization system that achieves median accuracy of 5 meters in large multi-floor buildings. The key idea that makes accurate GSM-based indoor localization possible is the use of wide signal-strength fingerprints. In addition to the 6-strongest cells traditionally used in the GSM standard, the wide fingerprint includes readings from additional cells that are strong enough to be detected, but too weak to be used for efficient communication. Experiments conducted on three multi-floor buildings show that our system achieves accuracy comparable to an 802.11-based implementation, and can accurately differentiate between floors in both wooden and steel-reinforced concrete structures.",
"The application of the extended Kaman filter to the problem of mobile robot navigation in a known environment is presented. An algorithm for, model-based localization that relies on the concept of a geometric beacon, a naturally occurring environment feature that can be reliably observed in successive sensor measurements and can be accurately described in terms of a concise geometric parameterization, is developed. The algorithm is based on an extended Kalman filter that utilizes matches between observed geometric beacons and an a priori map of beacon locations. Two implementations of this navigation algorithm, both of which use sonar, are described. The first implementation uses a simple vehicle with point kinematics equipped with a single rotating sonar. The second implementation uses a 'Robuter' mobile robot and six static sonar transducers to provide localization information while the vehicle moves at typical speeds of 30 cm s. >",
"The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF)-based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location-aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy.",
"",
"A key subproblem in the construction of location-aware systems is the determination of the position of a mobile device. This article describes the design, implementation and analysis of a system for determining position inside a building from measured RF signal strengths of packets on an IEEE 802.11b wireless Ethernet network. Previous approaches to location-awareness with RF signals have been severely hampered by non-Gaussian signals, noise, and complex correlations due to multi-path effects, interference and absorption. The design of our system begins with the observation that determining position from complex, noisy and non-Gaussian signals is a well-studied problem in the field of robotics. Using only off-the-shelf hardware, we achieve robust position estimation to within a meter in our experimental context and after adequate training of our system. We can also coarsely determine our orientation and can track our position as we move. Our results show that we can localize a stationary device to within 1.5 meters over 80 of the time and track a moving device to within 1 meter over 50 of the time. Both localization and tracking run in real-time. By applying recent advances in probabilistic inference of position and sensor fusion from noisy signals, we show that the RF emissions from base stations as measured by off-the-shelf wireless Ethernet cards are sufficiently rich in information to permit a mobile device to reliably track its location.",
"A novel system for the location of people in an office environment is described. Members of staff wear badges that transmit signals providing information about their location to a centralized location service, through a network of sensors. The paper also examines alternative location techniques, system design issues and applications, particularly relating to telephone call routing. Location systems raise concerns about the privacy of an individual and these issues are also addressed.",
"Advertising on mobile devices has large potential due to the very personal and intimate nature of the devices and high targeting possibilities. We introduce a novel B-MAD system for delivering permission-based location-aware mobile advertisements to mobile phones using Bluetooth positioning and Wireless Application Protocol (WAP) Push. We present a thorough quantitative evaluation of the system in a laboratory environment and qualitative user evaluation in form of a field trial in the real environment of use. Experimental results show that the system provides a viable solution for realizing permission-based mobile advertising."
]
} |
1510.02073 | 2015023291 | We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information from a person's field-of-view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to. Our method matches images and video taken from the first-person perspective with the reference corpus and refines the results using the first-person's head orientation information obtained using the device sensors. We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions. | Outdoor localization from images or video has also been explored, including methods to match new images to street-side images @cite_18 @cite_16 @cite_20 . Other techniques include urban navigation using a camera mobile phone @cite_9 , image geo-tagging based on travel priors @cite_13 and the IM2GPS system @cite_12 . | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"1992191556",
"1883248133",
"2537480791",
"2103163130",
"2087463677"
],
"abstract": [
"",
"We describe the prototype of a system intended to allow a userto navigate in an urban environment using a mobile telephone equipped wi th a camera. The system uses a database of views of building facades to det ermine the pose of a query view provided by the user. Our method is based o n a novel wide-baseline matching algorithm that can identify corres ponding building facades in two views despite significant changes of viewpoin t and lighting. We show that our system is capable of localising query views r eliably in a large part of Cambridge city centre.",
"Finding an image's exact GPS location is a challenging computer vision problem that has many real-world applications. In this paper, we address the problem of finding the GPS location of images with an accuracy which is comparable to hand-held GPS devices. We leverage a structured data set of about 100,000 images build from Google Maps Street View as the reference images. We propose a localization method in which the SIFT descriptors of the detected SIFT interest points in the reference images are indexed using a tree. In order to localize a query image, the tree is queried using the detected SIFT descriptors in the query image. A novel GPS-tag-based pruning method removes the less reliable descriptors. Then, a smoothing step with an associated voting scheme is utilized; this allows each query descriptor to vote for the location its nearest neighbor belongs to, in order to accurately localize the query image. A parameter called Confidence of Localization which is based on the Kurtosis of the distribution of votes is defined to determine how reliable the localization of a particular image is. In addition, we propose a novel approach to localize groups of images accurately in a hierarchical manner. First, each image is localized individually; then, the rest of the images in the group are matched against images in the neighboring area of the found first match. The final location is determined based on the Confidence of Localization parameter. The proposed image group localization method can deal with very unclear queries which are not capable of being geolocated individually.",
"This paper presents a method for estimating geographic location for sequences of time-stamped photographs. A prior distribution over travel describes the likelihood of traveling from one location to another during a given time interval. This distribution is based on a training database of 6 million photographs from Flickr.com. An image likelihood for each location is defined by matching a test photograph against the training database. Inferring location for images in a test sequence is then performed using the Forward-Backward algorithm, and the model can be adapted to individual users as well. Using temporal constraints allows our method to geolocate images without recognizable landmarks, and images with no geographic cues whatsoever. This method achieves a substantial performance improvement over the best-available baseline, and geolocates some users' images with near-perfect accuracy.",
"Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earthpsilas surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.",
"With recent advances in CBIR, mobile visual location recognition becomes feasible. Using video recordings of a mobile device as a visual fingerprint of the environment and matching them to a georeferenced database provides pose information in a very natural way. Hence, LBSs can be provided without complex infrastructure in areas where the accuracy and availability of GPS is limited. This includes indoor environments where georeferenced data are just about to become publicly available."
]
} |
1510.02073 | 2015023291 | We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information from a person's field-of-view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to. Our method matches images and video taken from the first-person perspective with the reference corpus and refines the results using the first-person's head orientation information obtained using the device sensors. We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions. | Detecting and understanding the salient regions in images and videos has been an active area of research for over three decades. Seminal efforts in the 80s and 90s focused on understanding saliency and attention from a neuroscience and cognitive psychology perspective @cite_23 . In the late 90s, Itti @cite_27 built a visual attention model using a bottom-up model of the human visual system. Other approaches used graph-based techniques @cite_15 , information-theoretic methods @cite_26 , frequency domain analysis @cite_4 and the use of higher-level cues like face detection @cite_1 to build attention maps and detect objects and regions of interest in images and video. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_1",
"@cite_27",
"@cite_23",
"@cite_15"
],
"mid": [
"2139047169",
"2037328649",
"2169561119",
"2128272608",
"2149095485",
""
],
"abstract": [
"A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in die primate visual cortex. It is further shown that the proposed salicney measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.",
"We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods.",
"Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80 within their first two fixations; furthermore, they exhibit more similar scan-paths when faces are present. Remarkably, our model's predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.",
"A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
""
]
} |
1510.02073 | 2015023291 | We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information from a person's field-of-view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to. Our method matches images and video taken from the first-person perspective with the reference corpus and refines the results using the first-person's head orientation information obtained using the device sensors. We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions. | In the last few years, focus has shifted to applications which incorporate attention and egocentric vision. These include gaze prediction @cite_21 , image quality assessment @cite_30 , action localization and recognition @cite_5 @cite_24 , understanding social interactions @cite_6 and video summarization @cite_28 . Our goal in this work is to leverage image and sensor matching between the reference set and POV sensors to extract and localize the egocentric FOV. | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_5"
],
"mid": [
"2142841657",
"2106229755",
"2136668269",
"2147806277",
"2212494831",
"2096037448"
],
"abstract": [
"The aim of an objective image quality assessment is to find an automatic algorithm that evaluates the quality of pictures or video as a human observer would do. To reach this goal, researchers try to simulate the Human Visual System (HVS). Visual attention is a main feature of the HVS, but few studies have been done on using it in image quality assessment. In this work, we investigate the use of the visual attention information in their final pooling step. The rationale of this choice is that an artefact is likely more annoying in a salient region than in other areas. To shed light on this point, a quality assessment campaign has been conducted during which eye movements have been recorded. The results show that applying the visual attention to image quality assessment is not trivial, even with the ground truth.",
"We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.",
"We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods.",
"This paper presents a method for the detection and recognition of social interactions in a day-long first-person video of u social event, like a trip to an amusement park. The location and orientation of faces are estimated and used to compute the line of sight for each face. The context provided by all the faces in a frame is used to convert the lines of sight into locations in space to which individuals attend. Further, individuals are assigned roles based on their patterns of attention. The rotes and locations of individuals are analyzed over time to detect and recognize the types of social interactions. In addition to patterns of face locations and attention, the head movements of the first-person can provide additional useful cues as to their attentional focus. We demonstrate encouraging results on detection and recognition of social interactions in first-person videos captured from multiple days of experience in amusement parks.",
"We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.",
"We propose a weakly-supervised structured learning approach for recognition and spatio-temporal localization of actions in video. As part of the proposed approach, we develop a generalization of the Max-Path search algorithm which allows us to efficiently search over a structured space of multiple spatio-temporal paths while also incorporating context information into the model. Instead of using spatial annotations in the form of bounding boxes to guide the latent model during training, we utilize human gaze data in the form of a weak supervisory signal. This is achieved by incorporating eye gaze, along with the classification, into the structured loss within the latent SVM learning framework. Experiments on a challenging benchmark dataset, UCF-Sports, show that our model is more accurate, in terms of classification, and achieves state-of-the-art results in localization. In addition, our model can produce top-down saliency maps conditioned on the classification label and localized latent paths."
]
} |
1510.01997 | 1633988844 | For expertise retrieval purposes we can rely on the endorsements received by members of a social network, with respect to some skill. Skills are correlated, which may affect individual rankings. An endorsement deduction method is proposed, which improves data consistency and completeness. Endorsement deduction makes use of the known correlation among skills. The method is validated in a synthetic network resembling LinkedIn. Some social networks, such as LinkedIn and ResearchGate, allow user endorsements for specific skills. In this way, for each skill we get a directed graph where the nodes correspond to users' profiles and the arcs represent endorsement relations. From the number and quality of the endorsements received, an authority score can be assigned to each profile. In this paper we propose an authority score computation method that takes into account the relations existing among different skills. Our method is based on enriching the information contained in the digraph of endorsements corresponding to a specific skill, and then applying a ranking method admitting weighted digraphs, such as PageRank. We describe the method, and test it on a synthetic network of 1493 nodes, fitted with endorsements. | This research falls within the discipline of , a sub-field of information retrieval @cite_0 . There are two main problems in expertise retrieval: | {
"cite_N": [
"@cite_0"
],
"mid": [
"2108671511"
],
"abstract": [
"Link spam is used to increase the ranking of certain target web pages by misleading the connectivity-based ranking algorithms in search engines. In this paper we study how web pages can be interconnected in a spam farm in order to optimize rankings. We also study alliances, that is, interconnections of spam farms. Our results identify the optimal structures and quantify the potential gains. In particular, we show that alliances can be synergistic and improve the rankings of all participants. We believe that the insights we gain will be useful in identifying and combating link spam."
]
} |
1510.01997 | 1633988844 | For expertise retrieval purposes we can rely on the endorsements received by members of a social network, with respect to some skill. Skills are correlated, which may affect individual rankings. An endorsement deduction method is proposed, which improves data consistency and completeness. Endorsement deduction makes use of the known correlation among skills. The method is validated in a synthetic network resembling LinkedIn. Some social networks, such as LinkedIn and ResearchGate, allow user endorsements for specific skills. In this way, for each skill we get a directed graph where the nodes correspond to users' profiles and the arcs represent endorsement relations. From the number and quality of the endorsements received, an authority score can be assigned to each profile. In this paper we propose an authority score computation method that takes into account the relations existing among different skills. Our method is based on enriching the information contained in the digraph of endorsements corresponding to a specific skill, and then applying a ranking method admitting weighted digraphs, such as PageRank. We describe the method, and test it on a synthetic network of 1493 nodes, fitted with endorsements. | Traditionally, the problems above have been solved via document mining, i.e. by looking for the papers on topic @math written by person @math , combined with centrality or bibliographic measures, such as the H-index and the G-index, in order to assess the expert's relative influence (e.g. @cite_45 ). This is also the approach followed by http://www.arnetminer.org , a popular web-based platform for expertise retrieval @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_45"
],
"mid": [
"2022322548",
"1981432861"
],
"abstract": [
"This paper addresses several key issues in the ArnetMiner system, which aims at extracting and mining academic social networks. Specifically, the system focuses on: 1) Extracting researcher profiles automatically from the Web; 2) Integrating the publication data into the network from existing digital libraries; 3) Modeling the entire academic network; and 4) Providing search services for the academic network. So far, 448,470 researcher profiles have been extracted using a unified tagging approach. We integrate publications from online Web databases and propose a probabilistic framework to deal with the name ambiguity problem. Furthermore, we propose a unified modeling approach to simultaneously model topical aspects of papers, authors, and publication venues. Search services such as expertise search and people association search have been provided based on the modeling results. In this paper, we describe the architecture and main features of the system. We also present the empirical evaluation of the proposed methods.",
"The emergence of social media has created new ways to publish scientific work, foster collaboration, and build professional connections in the research community. The rich data collected in social media platforms has provided new opportunities for assessing scholars' impact other than the traditional citation-based approach. In this paper, we investigate the measures of scholars' influence in academic social media platforms, taking both academic and social impact into account. A real-life dataset collected from Mendeley is used to apply different influence metrics. We first assess the academic influence of scholars based on the scientific impact of their publications using three different measures. Then we investigate their social influence using network centrality metrics. The experiments show that top influencers with high academic impact tend to be senior scholars with many coauthors. Furthermore, academic influence and social influence measures do not strongly correlate with each other, and thus scholars with high academic impact are not necessarily influential from a social point of view. Adding the social dimension could enhance the traditional impact metrics that only take academic influence into account."
]
} |
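The bibliographic measures named in the row above (H-index, G-index) are simple to compute from a list of per-paper citation counts. As a quick illustration, here is a minimal H-index function; the citation counts in the example are invented for demonstration.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i      # the i-th most cited paper still has >= i citations
        else:
            break
    return h

# Five papers with citation counts 10, 8, 5, 4, 3: four papers have
# at least 4 citations, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The G-index follows the same sorted-counts pattern, using cumulative citations instead of per-paper counts.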
1510.01997 | 1633988844 | For expertise retrieval purposes we can rely on the endorsements received by members of a social network, with respect to some skill. Skills are correlated, which may affect individual rankings. An endorsement deduction method is proposed, which improves data consistency and completeness. Endorsement deduction makes use of the known correlation among skills. The method is validated in a synthetic network resembling LinkedIn. Some social networks, such as LinkedIn and ResearchGate, allow user endorsements for specific skills. In this way, for each skill we get a directed graph where the nodes correspond to users' profiles and the arcs represent endorsement relations. From the number and quality of the endorsements received, an authority score can be assigned to each profile. In this paper we propose an authority score computation method that takes into account the relations existing among different skills. Our method is based on enriching the information contained in the digraph of endorsements corresponding to a specific skill, and then applying a ranking method admitting weighted digraphs, such as PageRank. We describe the method, and test it on a synthetic network of 1493 nodes, fitted with endorsements. | Despite their unquestionable usefulness, systems based on document mining, like , face formidable challenges that limit their effectiveness. In addition to the specific challenges mentioned by @cite_15 , we could add several problems common to all data mining applications (e.g. name disambiguation). As a small experiment, we searched for some known names in , and got several profiles corresponding to the same person, one for each different spelling. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1996203480"
],
"abstract": [
"Expert finding in bibliographic networks has received increased interests in recent years. This task concerns with finding relevant researchers for a given topic. Motivated by the observation that rarely do all coauthors contribute to a paper equally, in this paper, we propose a discriminative method to realize leading authors contributing in a scientific publication. Specifically, we cast the problem of expert finding in a bibliographic network to find leading experts in a research group, which is easier to solve. According to some observations, we recognize three feature groups that can discriminate relevant and irrelevant experts. Experimental results on a real dataset, and an automatically generated one that is gathered from Microsoft academic search show that the proposed model significantly improves the performance of expert finding in terms of all common Information Retrieval evaluation metrics."
]
} |
1510.01997 | 1633988844 | For expertise retrieval purposes we can rely on the endorsements received by members of a social network, with respect to some skill. Skills are correlated, which may affect individual rankings. An endorsement deduction method is proposed, which improves data consistency and completeness. Endorsement deduction makes use of the known correlation among skills. The method is validated in a synthetic network resembling LinkedIn. Some social networks, such as LinkedIn and ResearchGate, allow user endorsements for specific skills. In this way, for each skill we get a directed graph where the nodes correspond to users' profiles and the arcs represent endorsement relations. From the number and quality of the endorsements received, an authority score can be assigned to each profile. In this paper we propose an authority score computation method that takes into account the relations existing among different skills. Our method is based on enriching the information contained in the digraph of endorsements corresponding to a specific skill, and then applying a ranking method admitting weighted digraphs, such as PageRank. We describe the method, and test it on a synthetic network of 1493 nodes, fitted with endorsements. | That is one of the reasons why other expertise retrieval models resort to the power of in certain social networks, such as the well-studied scientific citation and scientific collaboration networks (e.g. @cite_48 @cite_15 ). Another interesting example related to and social networks is @cite_42 , an extension of that measures the relative influence of users on a certain topic. Like our own extension, is topic-specific: the random surfer jumps from one user to an acquaintance following topic-dependent probabilities. However, does not consider any relationships among the different topics. | {
"cite_N": [
"@cite_48",
"@cite_15",
"@cite_42"
],
"mid": [
"2091565080",
"1996203480",
"2076219102"
],
"abstract": [
"Expertise retrieval, whose task is to suggest people with relevant expertise on the topic of interest, has received increasing interest in recent years. One of the issues is that previous algorithms mainly consider the documents associated with the experts while ignoring the community information that is affiliated with the documents and the experts. Motivated by the observation that communities could provide valuable insight and distinctive information, we investigate and develop two community-aware strategies to enhance expertise retrieval. We first propose a new smoothing method using the community context for statistical language modeling, which is employed to identify the most relevant documents so as to boost the performance of expertise retrieval in the document-based model. Furthermore, we propose a query-sensitive AuthorRank to model the authors' authorities based on the community coauthorship networks and develop an adaptive ranking refinement method to enhance expertise retrieval. Experimental results demonstrate the effectiveness and robustness of both community-aware strategies. Moreover, the improvements made in the enhanced models are significant and consistent.",
"Expert finding in bibliographic networks has received increased interests in recent years. This task concerns with finding relevant researchers for a given topic. Motivated by the observation that rarely do all coauthors contribute to a paper equally, in this paper, we propose a discriminative method to realize leading authors contributing in a scientific publication. Specifically, we cast the problem of expert finding in a bibliographic network to find leading experts in a research group, which is easier to solve. According to some observations, we recognize three feature groups that can discriminate relevant and irrelevant experts. Experimental results on a real dataset, and an automatically generated one that is gathered from Microsoft academic search show that the proposed model significantly improves the performance of expert finding in terms of all common Information Retrieval evaluation metrics.",
"This paper focuses on the problem of identifying influential users of micro-blogging services. Twitter, one of the most notable micro-blogging services, employs a social-networking model called \"following\", in which each user can choose who she wants to \"follow\" to receive tweets from without requiring the latter to give permission first. In a dataset prepared for this study, it is observed that (1) 72.4 of the users in Twitter follow more than 80 of their followers, and (2) 80.5 of the users have 80 of users they are following follow them back. Our study reveals that the presence of \"reciprocity\" can be explained by phenomenon of homophily. Based on this finding, TwitterRank, an extension of PageRank algorithm, is proposed to measure the influence of users in Twitter. TwitterRank measures the influence taking both the topical similarity between users and the link structure into account. Experimental results show that TwitterRank outperforms the one Twitter currently uses and other related algorithms, including the original PageRank and Topic-sensitive PageRank."
]
} |
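The rankings discussed in the row above (PageRank, Topic-sensitive PageRank, TwitterRank, and the paper's own ranking of weighted endorsement digraphs) all reduce to power iteration over a weighted adjacency structure. The following is a minimal sketch, not any of those papers' implementations; the toy endorsement graph, damping factor, and iteration count are illustrative assumptions.

```python
# Sketch: PageRank on a weighted endorsement digraph. Nodes are user
# profiles; a weighted arc (u, v) means u endorses v with some strength.
def pagerank(edges, nodes, damping=0.85, iters=100):
    """Power iteration on a weighted digraph given as {(u, v): weight}."""
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    out_weight = {v: 0.0 for v in nodes}
    for (u, v), w in edges.items():
        out_weight[u] += w
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for (u, v), w in edges.items():
            new[v] += damping * rank[u] * w / out_weight[u]
        # Dangling nodes (no outgoing endorsements) spread mass uniformly.
        dangling = sum(rank[u] for u in nodes if out_weight[u] == 0.0)
        for v in nodes:
            new[v] += damping * dangling / n
        rank = new
    return rank

nodes = ["alice", "bob", "carol", "dave"]
edges = {("alice", "bob"): 1.0, ("carol", "bob"): 2.0,
         ("dave", "bob"): 1.0, ("bob", "carol"): 1.0}
scores = pagerank(edges, nodes)
print(max(scores, key=scores.get))  # → bob (most endorsement mass flows to him)
```

A topic-sensitive variant would replace the uniform `(1 - damping) / n` teleport term with a topic-dependent distribution, which is essentially the modification TwitterRank makes.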
1510.01997 | 1633988844 | For expertise retrieval purposes we can rely on the endorsements received by members of a social network, with respect to some skill. Skills are correlated, which may affect individual rankings. An endorsement deduction method is proposed, which improves data consistency and completeness. Endorsement deduction makes use of the known correlation among skills. The method is validated in a synthetic network resembling LinkedIn. Some social networks, such as LinkedIn and ResearchGate, allow user endorsements for specific skills. In this way, for each skill we get a directed graph where the nodes correspond to users' profiles and the arcs represent endorsement relations. From the number and quality of the endorsements received, an authority score can be assigned to each profile. In this paper we propose an authority score computation method that takes into account the relations existing among different skills. Our method is based on enriching the information contained in the digraph of endorsements corresponding to a specific skill, and then applying a ranking method admitting weighted digraphs, such as PageRank. We describe the method, and test it on a synthetic network of 1493 nodes, fitted with endorsements. | To the best of our knowledge, there are no precedents for the use of endorsements in social networks, nor for the use of known relationships among different skills, in the context of expertise retrieval. The closest approach is perhaps the one in @cite_20 , which uses the ACM classification system as an ontology that guides the mining process and expert profiling. Another (very recent) model that uses semantic relationships to increase the effectiveness and efficiency of the search is given in @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_20"
],
"mid": [
"2078652018",
"1486353998"
],
"abstract": [
"On the Semantic Web, the types of resources and the semantic relationships between resources are defined in an ontology. By using that information, the accuracy of information retrieval can be improved. In this paper, we present effective ranking and search techniques considering the semantic relationships in an ontology. Our technique retrieves top-k resources which are the most relevant to query keywords through the semantic relationships. To do this, we propose a weighting measure for the semantic relationship. Based on this measure, we propose a novel ranking method which considers the number of meaningful semantic relationships between a resource and keywords as well as the coverage and discriminating power of keywords. In order to improve the efficiency of the search, we prune the unnecessary search space using the length and weight thresholds of the semantic relationship path. In addition, we exploit Threshold Algorithm based on an extended inverted index to answer top-k results efficiently. The experimental results using real data sets demonstrate that our retrieval method using the semantic information generates accurate results efficiently compared to the traditional methods.",
"This paper proposes an approach to discover the expertise of researchers using data mining with skill classification ontology. The skill classification ontology is an information model containing skills of doing research in the area of computer and information science. A methodology to build the ontology is presented. The expertise search system is developed, which uses the skill classification ontology, researcher profiles and research profiles in the retrieving process. These profiles and ontology are expressed by OWL. Also, the matching and ranking processes are proposed and these follow semantic-based matching. We explored the evaluation of the retrieving process and the result shows that the proposed approach enables the expertise search system to be efficient regarding accuracy."
]
} |
1510.01997 | 1633988844 | For expertise retrieval purposes we can rely on the endorsements received by members of a social network, with respect to some skill. Skills are correlated, which may affect individual rankings. An endorsement deduction method is proposed, which improves data consistency and completeness. Endorsement deduction makes use of the known correlation among skills. The method is validated in a synthetic network resembling LinkedIn. Some social networks, such as LinkedIn and ResearchGate, allow user endorsements for specific skills. In this way, for each skill we get a directed graph where the nodes correspond to users' profiles and the arcs represent endorsement relations. From the number and quality of the endorsements received, an authority score can be assigned to each profile. In this paper we propose an authority score computation method that takes into account the relations existing among different skills. Our method is based on enriching the information contained in the digraph of endorsements corresponding to a specific skill, and then applying a ranking method admitting weighted digraphs, such as PageRank. We describe the method, and test it on a synthetic network of 1493 nodes, fitted with endorsements. | Another related field which has attracted growing interest in the last few years is that of reputation systems, that is, systems intended to rank the agents of a domain based on other agents' reports. Strategies for ranking agents in a reputation system range from direct ranking by agents (as used in eBay) to more sophisticated approaches (see @cite_36 for a survey). One particularly important family of reputation system strategies is that of -based algorithms. There are many such approaches.
For instance, @cite_21 provides an algorithm based on the so-called Dirichlet , which addresses problems such as: (1) Some links in the network may indicate distrust rather than trust, and (2) How to infer a ranking for a node based on the ranking stated for a well-known subnetwork. | {
"cite_N": [
"@cite_36",
"@cite_21"
],
"mid": [
"2001652",
"1719306472"
],
"abstract": [
"Artificial Intelligence has become one of the most fundamental areas of computer science research. One line of research in artificial intelligence is associated with coordination of intelligent agents. Coordination takes place in multi-agent environments where distributed agents have limited knowledge about their surrounding environment; therefore, they continuously ask other agents to obtain required information. In general, the growing popularity of agents requires systematic coordination management and reputation system that enable agents to decide about their interacting partner and overall acting attitude. Using the reputation system, agents choose reliable agents to interact with. Here, the question arises that how the reputation mechanism helps agents to make the most prudent decisions that yield the best outcome. More specifically, agents need to use a decision-theoretic reasoning algorithm to optimize their decisions under uncertainty. In multi-agent systems, the reputation value is highly competitive because it is used as the beliefs of agents about one another. Thus, a carefully designed mechanism is required to maintain accuracy of this parameter.",
"Motivated by numerous models of representing trust and distrust within a graph ranking system, we examine a quantitative vertex ranking with consideration of the influence of a subset of nodes. An efficient algorithm is given for computing Dirichlet PageRank vectors subject to Dirichlet boundary conditions on a subset of nodes. We then give several algorithms for various trust-based ranking problems using Dirichlet PageRank with boundary conditions, showing several applications of our algorithms."
]
} |
1510.01576 | 1986136466 | We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6-month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data. | In contrast to state-of-the-art approaches that use hand-crafted features with traditional classification approaches on egocentric images and videos, our approach is based on Convolutional Neural Networks (CNNs) combining image pixel data, contextual metadata (time) and global image features. Convolutional Neural Networks have recently been used with success on single image classification with a vast number of classes @cite_0 and have been effective at learning hierarchies of features @cite_10 . However, little work has been done on classifying activities on single images from an egocentric device over extended periods of time. This work aims to explore that area. | {
"cite_N": [
"@cite_0",
"@cite_10"
],
"mid": [
"2618530766",
"1849277567"
],
"abstract": [
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
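The late-fusion ensemble named in the row above combines per-class probabilities from an image model (e.g. a CNN softmax) with probabilities derived from contextual metadata such as time of day. A minimal sketch of the fusion step follows; the class names, probability values, and fixed fusion weights are invented for illustration, whereas the paper's actual ensemble is learned rather than hand-weighted.

```python
# Sketch: late fusion of an image classifier's per-class probabilities
# with a context (time-of-day) classifier's probabilities.
def late_fusion(prob_image, prob_context, w_image=0.7, w_context=0.3):
    """Weighted average of two per-class probability vectors."""
    fused = [w_image * pi + w_context * pc
             for pi, pc in zip(prob_image, prob_context)]
    total = sum(fused)                  # renormalize to a distribution
    return [p / total for p in fused]

classes = ["working", "eating", "commuting"]
p_img = [0.50, 0.30, 0.20]   # the image model leans toward "working"
p_ctx = [0.10, 0.70, 0.20]   # but the time context strongly favors "eating"
fused = late_fusion(p_img, p_ctx)
print(classes[max(range(len(fused)), key=fused.__getitem__)])  # → eating
```

The point of the example is that context can flip a decision the image model alone would get wrong, which is the intuition behind fusing metadata into the ensemble.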
1510.01576 | 1986136466 | We present a method to analyze images taken from a passive egocentric wearable camera along with the contextual information, such as time and day of week, to learn and predict everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6-month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data. | One of the challenges of continuous and automatic capture of first-person point-of-view images is that these images may, in some circumstances, pose a privacy concern. Privacy is an area that deserves special attention when dealing with wearable cameras, particularly in public settings. proposed an ethical framework to formalize privacy protection when wearable cameras are used in health behavior research and beyond @cite_22 while proposed a framework for understanding the balance between saliency and privacy when examining images, with a particular focus on photos taken with wearable cameras @cite_15 . People's perceptions of wearable cameras are also very relevant. examined how individuals perceive and react to being recorded by a wearable camera in real-life situations @cite_25 , and studied how individuals manage privacy while capturing lifelogging photos with wearable cameras @cite_4 . | {
"cite_N": [
"@cite_15",
"@cite_25",
"@cite_22",
"@cite_4"
],
"mid": [
"2096975613",
"2152244690",
"1991731112",
"2108627835"
],
"abstract": [
"First-person point-of-view (FPPOV) images taken by wearable cameras can be used to better understand people's eating habits. Human computation is a way to provide effective analysis of FPPOV images in cases where algorithmic approaches currently fail. However, privacy is a serious concern. We provide a framework, the privacy-saliency matrix, for understanding the balance between the eating information in an image and its potential privacy concerns. Using data gathered by 5 participants wearing a lanyard-mounted smartphone, we show how the framework can be used to quantitatively assess the effectiveness of four automated techniques (face detection, image cropping, location filtering and motion filtering) at reducing the privacy-infringing content of images while still maintaining evidence of eating behaviors throughout the day.",
"In this paper, we present a study of responses to the idea of being recorded by a ubicomp recording technology called SenseCam. This study focused on real-life situations in two North American and two European locations. We present the findings of this study and their implications, specifically how those who might be recorded perceive and react to SenseCam. We describe what system parameters, social processes, and policies are required to meet the needs of both the primary users and these secondary stakeholders and how being situated within a particular locale can influence responses. Our results indicate that people would tolerate potential incursions from SenseCam for particular purposes. Furthermore, they would typically prefer to be informed about and to consent to recording as well as to grant permission before any data is shared. These preferences, however, are unlikely to instigate a request for deletion or other action on their part. These results inform future design of recording technologies like SenseCam and provide a broader understanding of how ubicomp technologies might be taken up across different cultural and political regions.",
"Technologic advances mean automated, wearable cameras are now feasible for investigating health behaviors in a public health context. This paper attempts to identify and discuss the ethical implications of such research, in relation to existing guidelines for ethical research in traditional visual methodologies. Research using automated, wearable cameras can be very intrusive, generating unprecedented levels of image data, some of it potentially unflattering or unwanted. Participants and third parties they encounter may feel uncomfortable or that their privacy has been affected negatively. This paper attempts to formalize the protection of all according to best ethical principles through the development of an ethical framework. Respect for autonomy, through appropriate approaches to informed consent and adequate privacy and confidentiality controls, allows for ethical research, which has the potential to confer substantial benefits on the field of health behavior research.",
"A number of wearable 'lifelogging' camera devices have been released recently, allowing consumers to capture images and other sensor data continuously from a first-person perspective. Unlike traditional cameras that are used deliberately and sporadically, lifelogging devices are always 'on' and automatically capturing images. Such features may challenge users' (and bystanders') expectations about privacy and control of image gathering and dissemination. While lifelogging cameras are growing in popularity, little is known about privacy perceptions of these devices or what kinds of privacy challenges they are likely to create. To explore how people manage privacy in the context of lifelogging cameras, as well as which kinds of first-person images people consider 'sensitive,' we conducted an in situ user study (N = 36) in which participants wore a lifelogging device for a week, answered questionnaires about the collected images, and participated in an exit interview. Our findings indicate that: 1) some people may prefer to manage privacy through in situ physical control of image collection in order to avoid later burdensome review of all collected images; 2) a combination of factors including time, location, and the objects and people appearing in the photo determines its 'sensitivity;' and 3) people are concerned about the privacy of bystanders, despite reporting almost no opposition or concerns expressed by bystanders over the course of the study."
]
} |
1510.01392 | 2278683827 | We leverage stochastic geometry to characterize key performance metrics for neighboring Wi-Fi and LTE networks in unlicensed spectrum. Our analysis focuses on a single unlicensed frequency band, where the locations for the Wi-Fi access points and LTE eNodeBs are modeled as two independent homogeneous Poisson point processes. Three LTE coexistence mechanisms are investigated: 1) LTE with continuous transmission and no protocol modifications; 2) LTE with discontinuous transmission; and 3) LTE with listen-before-talk and random back-off. For each scenario, we derive the medium access probability, the signal-to-interference-plus-noise ratio coverage probability, the density of successful transmissions (DST), and the rate coverage probability for both Wi-Fi and LTE. Compared with the baseline scenario where one Wi-Fi network coexists with an additional Wi-Fi network, our results show that Wi-Fi performance is severely degraded when LTE transmits continuously. However, LTE is able to improve the DST and rate coverage probability of Wi-Fi while maintaining acceptable data rate performance when it adopts one or more of the following coexistence features: a shorter transmission duty cycle, lower channel access priority, or more sensitive clear channel assessment thresholds. | All the aforementioned works are based on extensive system level simulations, which is usually very time-consuming due to the complicated dynamics of the overlaid LTE and Wi-Fi networks. Therefore, a mathematical approach would be helpful for more efficient performance evaluation and transparent comparisons of various techniques. A fluid network model is used in @cite_33 to analyze the coexistence performance when LTE has no protocol modifications. However, the fluid network model is limited to the analysis of deterministic networks, which do not capture the multi-path fading effects and random backoff mechanism of Wi-Fi. 
A centralized optimization framework is proposed in @cite_26 to optimize the aggregate throughput of LTE and Wi-Fi. However, the analysis of @cite_26 is based on Bianchi's model for CSMA/CA @cite_24 , which relies on the idealized assumption that the collision probability of the contending APs is "constant and independent". | {
"cite_N": [
"@cite_24",
"@cite_26",
"@cite_33"
],
"mid": [
"2162598825",
"1008745564",
"2048839029"
],
"abstract": [
"The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA/CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS/CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS/CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.",
"This paper investigates the co-existence of Wi-Fi and LTE networks in emerging unlicensed frequency bands which are intended to accommodate multiple radio access technologies. Wi-Fi and LTE are the two most prominent wireless access technologies being deployed today, motivating further study of the inter-system interference arising in such shared spectrum scenarios as well as possible techniques for enabling improved co-existence. An analytical model for evaluating the baseline performance of co-existing Wi-Fi and LTE networks is developed and used to obtain baseline performance measures. The results show that both Wi-Fi and LTE networks cause significant interference to each other and that the degradation is dependent on a number of factors such as power levels and physical topology. The model-based results are partially validated via experimental evaluations using USRP-based SDR platforms on the ORBIT testbed. Further, inter-network coordination with logically centralized radio resource management across Wi-Fi and LTE systems is proposed as a possible solution for improved co-existence. Numerical results are presented showing significant gains in both Wi-Fi and LTE performance with the proposed inter-network coordination approach.",
"In this paper, we consider the operation of the Long Term Evolution (LTE) systems in the unlicensed spectrum. Given the lack of the exclusive use of the spectrum, the operation in the unlicensed band is fundamentally limited by interference from other technologies using the same frequency band. By noting that the unlicensed spectrum is heavily used by Wireless Local Area Networks (WLAN), here we focus on the mutual effect between LTE and WLAN when they coexist. For this, we developed a novel inter-system interference analysis technique based on the continuum field approximation and spiral representation, which improves the accuracy of the previous approaches and is applicable to both large-scale and small-scale networks. To this end, we quantify the effect of inter-system interference when the LTE and WLAN systems operate in the same unlicensed spectrum."
]
} |
1510.01392 | 2278683827 | We leverage stochastic geometry to characterize key performance metrics for neighboring Wi-Fi and LTE networks in unlicensed spectrum. Our analysis focuses on a single unlicensed frequency band, where the locations for the Wi-Fi access points and LTE eNodeBs are modeled as two independent homogeneous Poisson point processes. Three LTE coexistence mechanisms are investigated: 1) LTE with continuous transmission and no protocol modifications; 2) LTE with discontinuous transmission; and 3) LTE with listen-before-talk and random back-off. For each scenario, we derive the medium access probability, the signal-to-interference-plus-noise ratio coverage probability, the density of successful transmissions (DST), and the rate coverage probability for both Wi-Fi and LTE. Compared with the baseline scenario where one Wi-Fi network coexists with an additional Wi-Fi network, our results show that Wi-Fi performance is severely degraded when LTE transmits continuously. However, LTE is able to improve the DST and rate coverage probability of Wi-Fi while maintaining acceptable data rate performance when it adopts one or more of the following coexistence features: a shorter transmission duty cycle, lower channel access priority, or more sensitive clear channel assessment thresholds. | In recent years, stochastic geometry has become a popular and powerful mathematical tool to analyze cellular and Wi-Fi systems. Specifically, key performance metrics can be derived by modeling the locations of base stations (BSs)/access points (APs) as a realization of certain spatial random point processes. In @cite_0 , the coverage probability and average Shannon rate were derived for macro cellular networks with BSs distributed according to the complete spatial random Poisson point process (PPP).
The analysis has been extended to several other cellular network scenarios, including heterogeneous cellular networks (HetNets) @cite_27 @cite_40 @cite_3 , MIMO @cite_12 @cite_22 , and carrier aggregation @cite_37 @cite_11 . More realistic macro BS location models than PPP are investigated in @cite_19 @cite_8 @cite_35 . Stochastic geometry can also model CSMA/CA-based Wi-Fi networks. A modified Matérn hard-core point process, which gives a snapshot view of the simultaneously transmitting CSMA/CA nodes, has been proposed and validated in @cite_30 for dense 802.11 networks. This Matérn CSMA model is also used for analyzing other CSMA/CA-based networks, such as ad-hoc networks with channel-aware CSMA/CA protocols @cite_41 , and cognitive radio networks @cite_4 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_41",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_40",
"@cite_27",
"@cite_12",
"@cite_11"
],
"mid": [
"2146115135",
"2114207517",
"2067630960",
"2102295378",
"2155860576",
"1659111644",
"1975670011",
"2059973889",
"2150166076",
"2065735671",
"2057540419",
"2149170915",
"2081450055",
"1986224302"
],
"abstract": [
"This paper presents a stochastic geometry model for the performance analysis and the planning of dense IEEE 802.11 networks. This model allows one to propose heuristic formulas for various properties of such networks like the probability for users to be covered, the probability for access points to be granted access to the channel or the average long term throughput provided to end-users. The main merit of this model is to take the effect of interferences and that of CSMA into account within this dense network context. This analytic model, which is based on Matérn point processes, is partly validated against simulation. It is then used to assess various properties of such networks. We show for instance how the long term throughput obtained by end-users behaves when the access point density increases. We also briefly show how to use this model for the planning of managed networks and for the economic modeling of unplanned networks.",
"In cellular network models, the base stations are usually assumed to form a lattice or a Poisson point process (PPP). In reality, however, they are deployed neither fully regularly nor completely randomly. Accordingly, in this paper, we consider the very general class of motion-invariant models and analyze the behavior of the outage probability (the probability that the signal-to-interference-plus-noise-ratio (SINR) is smaller than a threshold) as the threshold goes to zero. We show that, remarkably, the slope of the outage probability (in dB) as a function of the threshold (also in dB) is the same for essentially all motion-invariant point processes. The slope merely depends on the fading statistics. Using this result, we introduce the notion of the asymptotic deployment gain (ADG), which characterizes the horizontal gap between the success probabilities of the PPP and another point process in the high-reliability regime (where the success probability is near 1). To demonstrate the usefulness of the ADG for the characterization of the SINR distribution, we investigate the outage probabilities and the ADGs for different point processes and fading statistics by simulations.",
"Carrier aggregation (CA) and small cells are two distinct features of next-generation cellular networks. Cellular networks with different types of small cells are often referred to as HetNets. In this paper, we introduce a load-aware model for CA-enabled multi-band HetNets. Under this model, the impact of biasing can be more appropriately characterized; for example, it is observed that with large enough biasing, the spectral efficiency of small cells may increase while its counterpart in a fully-loaded model always decreases. Further, our analysis reveals that the peak data rate does not depend on the base station density and transmit powers; this strongly motivates other approaches e.g. CA to increase the peak data rate. Last but not least, different band deployment configurations are studied and compared. We find that with large enough small cell density, spatial reuse with small cells outperforms adding more spectrum for increasing user rate. More generally, universal cochannel deployment typically yields the largest rate; and thus a capacity loss exists in orthogonal deployment. This performance gap can be reduced by appropriately tuning the HetNet coverage distribution (e.g. by optimizing biasing factors).",
"We propose a probabilistic model based on stochastic geometry to analyze cognitive radio in a large wireless network with randomly located users sharing the medium with carrier sensing multiple access. Analytical results are derived on the impact of the interaction between primary and secondary users, on their medium access probability, coverage probability and throughput. These results can be seen as the continuation of the theory of priorities in queueing theory to spatial processes. They give insight into the guarantees that can be offered to primary users and more generally on the possibilities offered by cognitive radio to improve the effectiveness of spectrum utilization in such networks.",
"We develop a general downlink model for multi-antenna heterogeneous cellular networks (HetNets), where base stations (BSs) across tiers may differ in terms of transmit power, target signal-to-interference-ratio (SIR), deployment density, number of transmit antennas and the type of multi-antenna transmission. In particular, we consider and compare space division multiple access (SDMA), single user beamforming (SU-BF), and baseline single-input single-output (SISO) transmission. For this general model, the main contributions are: (i) ordering results for both coverage probability and per user rate in closed form for any BS distribution for the three considered techniques, using novel tools from stochastic orders, (ii) upper bounds on the coverage probability assuming a Poisson BS distribution, and (iii) a comparison of the area spectral efficiency (ASE). The analysis concretely demonstrates, for example, that for a given total number of transmit antennas in the network, it is preferable to spread them across many single-antenna BSs vs. fewer multi-antenna BSs. Another observation is that SU-BF provides higher coverage and per user data rate than SDMA, but SDMA is in some cases better in terms of ASE.",
"Although the Poisson point process (PPP) has been widely used to model base station (BS) locations in cellular networks, it is an idealized model that neglects the spatial correlation among BSs. This paper proposes the use of the determinantal point process (DPP) to take into account these correlations, in particular the repulsiveness among macro BS locations. DPPs are demonstrated to be analytically tractable by leveraging several unique computational properties. Specifically, we show that the empty space function, the nearest neighbor function, the mean interference, and the signal-to-interference ratio (SIR) distribution have explicit analytical representations and can be numerically evaluated for cellular networks with DPP-configured BSs. In addition, the modeling accuracy of DPPs is investigated by fitting three DPP models to real BS location data sets from two major U.S. cities. Using hypothesis testing for various performance metrics of interest, we show that these fitted DPPs are significantly more accurate than popular choices such as the PPP and the perturbed hexagonal grid model.",
"We investigate the benefits of channel-aware (opportunistic) scheduling of transmissions in ad hoc networks. The key challenge in optimizing the performance of such systems is finding a good compromise among three interdependent quantities: 1) the density of scheduled transmitters; 2) the quality of transmissions; and 3) the long term fairness among nodes. We propose two new channel-aware slotted CSMA protocols opportunistic CSMA and quantile-based CSMA (QT-CSMA) and develop new stochastic geometric models to quantify their performance in terms of spatial reuse and spatial fairness. When properly optimized, these protocols offer substantial improvements in performance relative to CSMA—particularly, when the density of nodes is moderate to high. In addition, we show that a simple version of QT-CSMA can achieve robust performance gains without requiring careful parameter optimization. The quantitative results in this paper suggest that channel-aware scheduling in ad hoc networks can provide substantial benefits which might far outweigh the associated implementation overheads.",
"The Signal to Interference Plus Noise Ratio (SINR) on a wireless link is an important basis for consideration of outage, capacity, and throughput in a cellular network. It is therefore important to understand the SINR distribution within such networks, and in particular heterogeneous cellular networks, since these are expected to dominate future network deployments . Until recently the distribution of SINR in heterogeneous networks was studied almost exclusively via simulation, for selected scenarios representing pre-defined arrangements of users and the elements of the heterogeneous network such as macro-cells, femto-cells, etc. However, the dynamic nature of heterogeneous networks makes it difficult to design a few representative simulation scenarios from which general inferences can be drawn that apply to all deployments. In this paper, we examine the downlink of a heterogeneous cellular network made up of multiple tiers of transmitters (e.g., macro-, micro-, pico-, and femto-cells) and provide a general theoretical analysis of the distribution of the SINR at an arbitrarily-located user. Using physically realistic stochastic models for the locations of the base stations (BSs) in the tiers, we can compute the general SINR distribution in closed form. We illustrate a use of this approach for a three-tier network by calculating the probability of the user being able to camp on a macro-cell or an open-access (OA) femto-cell in the presence of Closed Subscriber Group (CSG) femto-cells. We show that this probability depends only on the relative densities and transmit powers of the macro- and femto-cells, the fraction of femto-cells operating in OA vs. Closed Subscriber Group (CSG) mode, and on the parameters of the wireless channel model. For an operator considering a femto overlay on a macro network, the parameters of the femto deployment can be selected from a set of universal curves.",
"Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage/outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.",
"Stochastic geometry models for wireless communication networks have recently attracted much attention. This is because the performance of such networks critically depends on the spatial configuration of wireless nodes and the irregularity of the node configuration in a real network can be captured by a spatial point process. However, most analysis of such stochastic geometry models for wireless networks assumes, owing to its tractability, that the wireless nodes are deployed according to homogeneous Poisson point processes. This means that the wireless nodes are located independently of each other and their spatial correlation is ignored. In this work we propose a stochastic geometry model of cellular networks such that the wireless base stations are deployed according to the Ginibre point process. The Ginibre point process is one of the determinantal point processes and accounts for the repulsion between the base stations. For the proposed model, we derive a computable representation for the coverage probability—the probability that the signal-to-interference-plus-noise ratio (SINR) for a mobile user achieves a target threshold. To capture its qualitative property, we further investigate the asymptotics of the coverage probability as the SINR threshold becomes large in a special case. We also present the results of some numerical experiments.",
"Random spatial models are attractive for modeling heterogeneous cellular networks (HCNs) due to their realism, tractability, and scalability. A major limitation of such models to date in the context of HCNs is the neglect of network traffic and load: all base stations (BSs) have typically been assumed to always be transmitting. Small cells in particular will have a lighter load than macrocells, and so their contribution to the network interference may be significantly overstated in a fully loaded model. This paper incorporates a flexible notion of BS load by introducing a new idea of conditionally thinning the interference field. For a K-tier HCN where BSs across tiers differ in terms of transmit power, supported data rate, deployment density, and now load, we derive the coverage probability for a typical mobile, which connects to the strongest BS signal. Conditioned on this connection, the interfering BSs of the i^ th tier are assumed to transmit independently with probability p_i, which models the load. Assuming — reasonably — that smaller cells are more lightly loaded than macrocells, the analysis shows that adding such access points to the network always increases the coverage probability. We also observe that fully loaded models are quite pessimistic in terms of coverage.",
"Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, and Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given SINR, adding more tiers and/or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.",
"Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.",
""
]
} |
1510.01392 | 2278683827 | We leverage stochastic geometry to characterize key performance metrics for neighboring Wi-Fi and LTE networks in unlicensed spectrum. Our analysis focuses on a single unlicensed frequency band, where the locations for the Wi-Fi access points and LTE eNodeBs are modeled as two independent homogeneous Poisson point processes. Three LTE coexistence mechanisms are investigated: 1) LTE with continuous transmission and no protocol modifications; 2) LTE with discontinuous transmission; and 3) LTE with listen-before-talk and random back-off. For each scenario, we derive the medium access probability, the signal-to-interference-plus-noise ratio coverage probability, the density of successful transmissions (DST), and the rate coverage probability for both Wi-Fi and LTE. Compared with the baseline scenario where one Wi-Fi network coexists with an additional Wi-Fi network, our results show that Wi-Fi performance is severely degraded when LTE transmits continuously. However, LTE is able to improve the DST and rate coverage probability of Wi-Fi while maintaining acceptable data rate performance when it adopts one or more of the following coexistence features: a shorter transmission duty cycle, lower channel access priority, or more sensitive clear channel assessment thresholds. | Due to its tractability for cellular and Wi-Fi networks, stochastic geometry is a natural candidate for analyzing LTE and Wi-Fi coexistence performance. In @cite_36 , the coverage and throughput performance of LTE and Wi-Fi were derived using stochastic geometry. However, the analytical Wi-Fi throughput in @cite_36 does not closely match the simulation results. Also, the effects of possible LTE coexistence methods, including discontinuous transmission and LBT with random backoff, were not investigated in @cite_36 . These shortcomings are addressed in this paper. | {
"cite_N": [
"@cite_36"
],
"mid": [
"2068492802"
],
"abstract": [
"In this paper, the co-channel performance of large scale deployment of LTE in Unlicensed (LTE-U) band and WiFi is studied using stochastic geometry. Analytical expressions of LTE-U throughput in presence of WiFi are presented and are partly validated by the simulation results. The LTE-U Low Power Nodes (LPNs) are deployed as a Poisson Point Process (PPP), and the WiFi transmissions are modeled as a hard-core Matérn point process. Using this analytical approach the impact of various parameters such as sensing threshold and transmission power on the co-existence of LTE-U and WiFi is studied."
]
} |
1510.01257 | 2245047273 | Efficient generation of high-quality object proposals is an essential step in state-of-the-art object detection systems based on deep convolutional neural networks (DCNN) features. Current object proposal algorithms are computationally inefficient in processing high resolution images containing small objects, which makes them the bottleneck in object detection systems. In this paper we present effective methods to detect objects for high resolution images. We combine two complementary strategies. The first approach is to predict bounding boxes based on adjacent visual features. The second approach uses high level image features to guide a two-step search process that adaptively focuses on regions that are likely to contain small objects. We extract features required for the two strategies by utilizing a pre-trained DCNN model known as AlexNet. We demonstrate the effectiveness of our algorithm by showing its performance on a high-resolution image subset of the SUN 2012 object detection dataset. | Compared to object proposal algorithms based on low-level bottom-up processing, such as segmentation @cite_7 and edge detection @cite_9 , our algorithm utilizes redundancy in the images by modeling the high-level visual concepts explicitly. This strategy seems to be complementary to the low-level approach, which as we will demonstrate does not scale well in high resolution settings. We note that while in our implementation we have chosen specific algorithms, our proposed design can work in conjunction with a traditional object proposal algorithm to improve its scalability. | {
"cite_N": [
"@cite_9",
"@cite_7"
],
"mid": [
"7746136",
"2088049833"
],
"abstract": [
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html )."
]
} |
1510.01257 | 2245047273 | Efficient generation of high-quality object proposals is an essential step in state-of-the-art object detection systems based on deep convolutional neural networks (DCNN) features. Current object proposal algorithms are computationally inefficient in processing high resolution images containing small objects, which makes them the bottleneck in object detection systems. In this paper we present effective methods to detect objects for high resolution images. We combine two complementary strategies. The first approach is to predict bounding boxes based on adjacent visual features. The second approach uses high level image features to guide a two-step search process that adaptively focuses on regions that are likely to contain small objects. We extract features required for the two strategies by utilizing a pre-trained DCNN model known as AlexNet. We demonstrate the effectiveness of our algorithm by showing its performance on a high-resolution image subset of the SUN 2012 object detection dataset. | Some recent object proposal algorithms are based on a neural net model. For example, the Multi-box algorithm @cite_13 uses a single evaluation of a deep network to predict a fixed-number of object proposals. This algorithm similarly models high-level visual concepts and benefits from GPU acceleration. However, we note that one crucial detail that prevents an excessive growth in complexity of Multi-box is the use of a carefully designed set of anchor regions. The robustness of this technique in high resolution images containing small objects is unclear. In this light, our algorithm offers to provide a framework that could boost the performance of Multi-box in high resolution setting without significant efforts in domain adaptation. This is an area of future exploration. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2068730032"
],
"abstract": [
"Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations."
]
} |
1510.01257 | 2245047273 | Efficient generation of high-quality object proposals is an essential step in state-of-the-art object detection systems based on deep convolutional neural networks (DCNN) features. Current object proposal algorithms are computationally inefficient in processing high resolution images containing small objects, which makes them the bottleneck in object detection systems. In this paper we present effective methods to detect objects for high resolution images. We combine two complementary strategies. The first approach is to predict bounding boxes based on adjacent visual features. The second approach uses high level image features to guide a two-step search process that adaptively focuses on regions that are likely to contain small objects. We extract features required for the two strategies by utilizing a pre-trained DCNN model known as AlexNet. We demonstrate the effectiveness of our algorithm by showing its performance on a high-resolution image subset of the SUN 2012 object detection dataset. | The bounding box prediction method we propose is related to the bounding box regression approach introduced in @cite_11 . The traditional bounding-box regression used in fast R-CNN predicts one bounding box for each class. The assumption is that the spatial support of the RoI overlaps with the underlying object well enough for accurate object category prediction. The regression serves to provide a small correction based on the typical shapes of objects of the given category to get an even better overlapping. In our application a typical input region is assumed to have a small partial overlapping with the object. Our strategy is to focus on the spatial correlation preserved by the geometry of overlapping. We will discuss more about these in the next section. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2102605133"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn."
]
} |
1510.01292 | 1972932498 | Distributed storage plays a crucial role in the current cloud computing framework. After the theoretical bound for distributed storage was derived by the pioneer work of the regenerating code, Reed-Solomon code based regenerating codes were developed. The RS code based minimum storage regeneration code (RS-MSR) and the minimum bandwidth regeneration code (RS-MBR) can achieve theoretical bounds on the MSR point and the MBR point respectively in code regeneration. They can also maintain the MDS property in code reconstruction. However, in the hostile network where the storage nodes can be compromised and the packets can be tampered with, the storage capacity of the network can be significantly affected. In this paper, we propose a Hermitian code based minimum storage regenerating (H-MSR) code and a minimum bandwidth regenerating (H-MBR) code. We first prove that our proposed Hermitian code based regenerating codes can achieve the theoretical bounds for MSR point and MBR point respectively. We then propose data regeneration and reconstruction algorithms for the H-MSR code and the H-MBR code in both error-free network and hostile network. Theoretical evaluation shows that our proposed schemes can detect the erroneous decodings and correct more errors in hostile network than the RS-MSR code and the RS-MBR code with the same code rate. Our analysis also demonstrates that the proposed H-MSR and H-MBR codes have lower computational complexity than the RS-MSR RS-MBR codes in both code regeneration and code reconstruction. | When a storage node in the distributed storage network that employs the conventional @math RS code (such as OceanStore @cite_0 and Total Recall @cite_2 ) fails, the replacement node connects to @math nodes and downloads the whole file to recover the symbols stored in the failed node. This approach is a waste of bandwidth because the whole file has to be downloaded to recover a fraction of it.
To overcome this drawback, Dimakis @cite_3 introduced the concept of @math regenerating code. In the context of regenerating code, the replacement node can regenerate the contents stored in a failed node by downloading @math help symbols from @math helper nodes. The bandwidth consumption to regenerate a failed node could be far less than the whole file. A data collector (DC) can reconstruct the original file stored in the network by downloading @math symbols from each of the @math storage nodes. In @cite_3 , the authors proved that there is a tradeoff between bandwidth @math and per node storage @math . They found two optimal points: the minimum storage regeneration (MSR) point and the minimum bandwidth regeneration (MBR) point. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_2"
],
"mid": [
"2017643135",
"2105185344",
""
],
"abstract": [
"Explores mechanisms for storage-level management in OceanStore, a global-scale distributed storage utility infrastructure, designed to scale to billions of users and exabytes of data. OceanStore automatically recovers from server and network failures, incorporates new resources and adjusts to usage patterns. It provides its storage platform through adaptation, fault tolerance and repair. The only role of human administrators in the system is to physically attach or remove server hardware. Of course, an open question is how to scale a research prototype in such a way to demonstrate the basic thesis of this article - that OceanStore is self-maintaining. The allure of connecting millions or billions of components together is the hope that aggregate systems can provide scalability and predictable behavior under a wide variety of failures. The OceanStore architecture is a step towards this goal.",
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.",
""
]
} |
1510.01292 | 1972932498 | Distributed storage plays a crucial role in the current cloud computing framework. After the theoretical bound for distributed storage was derived by the pioneer work of the regenerating code, Reed-Solomon code based regenerating codes were developed. The RS code based minimum storage regeneration code (RS-MSR) and the minimum bandwidth regeneration code (RS-MBR) can achieve theoretical bounds on the MSR point and the MBR point respectively in code regeneration. They can also maintain the MDS property in code reconstruction. However, in the hostile network where the storage nodes can be compromised and the packets can be tampered with, the storage capacity of the network can be significantly affected. In this paper, we propose a Hermitian code based minimum storage regenerating (H-MSR) code and a minimum bandwidth regenerating (H-MBR) code. We first prove that our proposed Hermitian code based regenerating codes can achieve the theoretical bounds for MSR point and MBR point respectively. We then propose data regeneration and reconstruction algorithms for the H-MSR code and the H-MBR code in both error-free network and hostile network. Theoretical evaluation shows that our proposed schemes can detect the erroneous decodings and correct more errors in hostile network than the RS-MSR code and the RS-MBR code with the same code rate. Our analysis also demonstrates that the proposed H-MSR and H-MBR codes have lower computational complexity than the RS-MSR RS-MBR codes in both code regeneration and code reconstruction. | In @cite_16 , the authors discussed the amount of information that can be safely stored against passive eavesdropping and active adversarial attacks based on the regeneration structure. In @cite_8 , the authors proposed to add CRC codes in the regenerating code to check the integrity of the data in hostile network.
Unfortunately, the CRC checks can also be manipulated by the malicious nodes, resulting in the failure of the regeneration and reconstruction. In @cite_6 , the authors analyzed the error resilience of the RS code based regenerating code in the network with errors and erasures. They provided the theoretical error correction capability. Their result is an extension of the MDS code to the regenerating code and their scheme is unable to determine whether the errors in the network are successfully corrected. | {
"cite_N": [
"@cite_16",
"@cite_6",
"@cite_8"
],
"mid": [
"2118925326",
"2949371638",
"2950631985"
],
"abstract": [
"We address the problem of securing distributed storage systems against eavesdropping and adversarial attacks. An important aspect of these systems is node failures over time, necessitating, thus, a repair mechanism in order to maintain a desired high system reliability. In such dynamic settings, an important security problem is to safeguard the system from an intruder who may come at different time instances during the lifetime of the storage system to observe and possibly alter the data stored on some nodes. In this scenario, we give upper bounds on the maximum amount of information that can be stored safely on the system. For an important operating regime of the distributed storage system, which we call the bandwidth-limited regime, we show that our upper bounds are tight and provide explicit code constructions. Moreover, we provide a way to short list the malicious nodes and expurgate the system.",
"Regenerating codes are a class of codes proposed for providing reliability of data and efficient repair of failed nodes in distributed storage systems. In this paper, we address the fundamental problem of handling errors and erasures during the data-reconstruction and node-repair operations. We provide explicit regenerating codes that are resilient to errors and erasures, and show that these codes are optimal with respect to storage and bandwidth requirements. As a special case, we also establish the capacity of a class of distributed storage systems in the presence of malicious adversaries. While our code constructions are based on previously constructed Product-Matrix codes, we also provide necessary and sufficient conditions for introducing resilience in any regenerating code.",
"Due to the use of commodity software and hardware, crash-stop and Byzantine failures are likely to be more prevalent in today's large-scale distributed storage systems. Regenerating codes have been shown to be a more efficient way to disperse information across multiple nodes and recover crash-stop failures in the literature. In this paper, we present the design of regeneration codes in conjunction with integrity check that allows exact regeneration of failed nodes and data reconstruction in presence of Byzantine failures. A progressive decoding mechanism is incorporated in both procedures to leverage computation performed thus far. The fault-tolerance and security properties of the schemes are also analyzed."
]
} |
1510.01292 | 1972932498 | Distributed storage plays a crucial role in the current cloud computing framework. After the theoretical bound for distributed storage was derived by the pioneer work of the regenerating code, Reed-Solomon code based regenerating codes were developed. The RS code based minimum storage regeneration code (RS-MSR) and the minimum bandwidth regeneration code (RS-MBR) can achieve theoretical bounds on the MSR point and the MBR point respectively in code regeneration. They can also maintain the MDS property in code reconstruction. However, in the hostile network where the storage nodes can be compromised and the packets can be tampered with, the storage capacity of the network can be significantly affected. In this paper, we propose a Hermitian code based minimum storage regenerating (H-MSR) code and a minimum bandwidth regenerating (H-MBR) code. We first prove that our proposed Hermitian code based regenerating codes can achieve the theoretical bounds for MSR point and MBR point respectively. We then propose data regeneration and reconstruction algorithms for the H-MSR code and the H-MBR code in both error-free network and hostile network. Theoretical evaluation shows that our proposed schemes can detect the erroneous decodings and correct more errors in hostile network than the RS-MSR code and the RS-MBR code with the same code rate. Our analysis also demonstrates that the proposed H-MSR and H-MBR codes have lower computational complexity than the RS-MSR RS-MBR codes in both code regeneration and code reconstruction. | In this paper, we propose a Hermitian code based minimum storage regeneration (H-MSR) code and a Hermitian code based minimum bandwidth regeneration (H-MBR) code. The proposed H-MSR H-MBR codes can correct more errors than the RS-MSR RS-MBR codes and can always determine whether the error correction is successful. 
Our design is based on the structural analysis of the Hermitian code and the efficient decoding algorithm proposed in @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2020590697"
],
"abstract": [
"In this paper, it is proved that Hermitian code is a direct sum of concatenated Reed-Solomon codes over GF(q^2). Based on this discovery, first, a new method for computing the dimension and tightly estimating the minimum distance of the Hermitian code is derived. Secondly, a new decoding algorithm, which is especially effective in dealing with burst errors with complexity O(n^(5/3)), is described. Finally, some possible approaches for optimization of Hermitian codes are discussed."
]
} |
1510.01374 | 2951793495 | Our goal is to determine the structural differences between different categories of networks and to use these differences to predict the network category. Existing work on this topic has looked at social networks such as Facebook, Twitter, co-author networks etc. We, instead, focus on a novel data set that we have assembled from a variety of sources, including law-enforcement agencies, financial institutions, commercial database providers and other similar organizations. The data set comprises networks of "persons of interest" with each network belonging to different categories such as suspected terrorists, convicted individuals etc. We demonstrate that such "anti-social" networks are qualitatively different from the usual social networks and that new techniques are required to identify and learn features of such networks for the purposes of prediction and classification. We propose Cliqster, a new generative Bernoulli process-based model for unweighted networks. The generating probabilities are the result of a decomposition which reflects a network's community structure. Using a maximum likelihood solution for the network inference leads to a least-squares problem. By solving this problem, we are able to present an efficient algorithm for transforming the network to a new space which is both concise and discriminative. This new space preserves the identity of the network as much as possible. Our algorithm is interpretable and intuitive. Finally, by comparing our research against the baseline method (SVD) and against a state-of-the-art Graphlet algorithm, we show the strength of our algorithm in discriminating between different categories of networks. | Significant attention has been given to the approach of studying criminal activity through an analysis of social networks @cite_36 , @cite_30 , and @cite_26 . @cite_36 discovered that two-thirds of criminals commit crimes alongside another person.
@cite_30 demonstrated that charting social interactions can facilitate an understanding of criminal activity. @cite_26 investigated the importance of weak ties to interpret criminal activity. | {
"cite_N": [
"@cite_36",
"@cite_26",
"@cite_30"
],
"mid": [
"",
"2151486346",
"2140200133"
],
"abstract": [
"",
"The aim of this paper is to investigate whether weak ties play an important role in explaining criminal activities. We first develop a model where individuals learn about crime opportunities by interacting with other peers. These interactions can take the form of either strong or weak ties. We find that increasing the percentage of weak ties induces more transitions from non-crime to crime and thus the crime rate in the economy increases. This is because, when the percentage of weak ties is high, delinquents and non-delinquents are in close contact with each other. We then test these predictions using the U.S. National Longitudinal Survey of Adolescent Health (AddHealth), which contains unique detailed informations on friendship relationships among teenagers. The theoretical predictions of our model are confirmed by the empirical analysis since we find that weak ties, as measured by friends of friends, have a positive impact on criminal activities.",
"The high degree of variance of crime rates across space (and across time) is one of the oldest puzzles in the social sciences (see Quetelet (1835)). Our empirical work strongly suggests that this variance is not the result of observed or unobserved geographic attributes. This paper presents a model where social interactions create enough covariance across individuals to explain the high cross-city variance of crime rates. This model provides a natural index of social interactions which can compare the degree of social interaction across crimes, across geographic units and across time. Our index gives similar results for different data samples and suggests that the amount of social interactions are highest in petty crimes (such as larceny and auto theft), moderate in more serious crimes (assault, burglary and robbery) and almost negligible in murder and rape. The index of social interactions is also applied to non-criminal choices and we find that there is substantial interaction in schooling choice."
]
} |
1510.01344 | 1914116674 | Purpose: In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem of interactive brain tumor segmentation as a machine learning problem. | For instance, used a series of intensity and texture based features to make a feature space of over 300 dimensions, on which a random forest classifier was trained. and also used random forests . constructed a multi-dimensional feature space by incorporating first order neighborhood statistical images, GMM and Markov Random Field (MRF) posteriors, and template differences. @cite_11 performed binary segmentation (tumor vs. non-tumor) using T1, T2, T1C in an SVM framework followed by a variation of conditional random fields to account for neighborhood relationships. @cite_1 used a kernel SVM for multiclass segmentation of brain tumors, where a CRF is used to regularize the results. | {
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"1865761",
"1763905908"
],
"abstract": [
"Delineating brain tumor boundaries from magnetic resonance images is an essential task for the analysis of brain cancer. We propose a fully automatic method for brain tissue segmentation, which combines Support Vector Machine classification using multispectral intensities and textures with subsequent hierarchical regularization based on Conditional Random Fields. The CRF regularization introduces spatial constraints to the powerful SVM classification, which assumes voxels to be independent from their neighbors. The approach first separates healthy and tumor tissue before both regions are subclassified into cerebrospinal fluid, white matter, gray matter and necrotic, active, edema region respectively in a novel hierarchical way. The hierarchical approach adds robustness and speed by allowing to apply different levels of regularization at different stages. The method is fast and tailored to standard clinical acquisition protocols. It was assessed on 10 multispectral patient datasets with results outperforming previous methods in terms of segmentation detail and computation times.",
"Locating Brain tumor segmentation within MR (magnetic resonance) images is integral to the treatment of brain cancer. This segmentation task requires classifying each voxel as either tumor or non-tumor, based on a description of that voxel. Unfortunately, standard classifiers, such as Logistic Regression (LR) and Support Vector Machines (SVM), typically have limited accuracy as they treat voxels as independent and identically distributed (iid). Approaches based on random fields, which are able to incorporate spatial constraints, have recently been applied to brain tumor segmentation with notable performance improvement over iid classifiers. However, previous random field systems involved computationally intractable formulations, which are typically solved using some approximation. Here, we present pseudo-conditional random fields (PCRFs), which achieve accuracy similar to other random fields variants, but are significantly more efficient. We formulate a PCRF as a regularized discriminative classifier that relaxes the classification decision for each voxel by considering the labels and features of neighboring voxels."
]
} |
1510.01344 | 1914116674 | Purpose: In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem of interactive brain tumor segmentation as a machine learning problem. | Although our method is a semi-automatic method, it shares with automatic methods the use of a machine learning classification algorithm, run on a feature representation of voxels and improved by a spatial dependency model. The main difference is that generalization is performed within each brain, based on the training data provided by the user's interaction. This simplified generalization problem allows us to use a very simple feature space, yielding an interactive segmentation method that is fast and effective. @cite_7 used a similar, semi-automatic, kNN classification method, applied to proton density, T1 and T2 modalities. @cite_8 also proposed a semi-automatic segmentation method that uses instead Quadratic Discriminative Analysis to perform multi-class segmentation. However, they did not use the @math voxel positions as features (see ) nor did they deal with label spatial dependency modeling (see ), which we found to play a crucial role in obtaining competitive performances. | {
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2043254649",
"2145183917"
],
"abstract": [
"Abstract Two different multispectral pattern recognition methods are used to segment magnetic resonance images (MRI) of the brain for quantitative estimation of tumor volume and volume changes with therapy. A supervised k-nearest neighbor (kNN) rule and a semi-supervised fuzzy c-means (SFCM) method are used to segment MRI slice data. Tumor volumes as determined by the kNN and SFCM segmentation methods are compared with two reference methods, based on image grey scale, as a basis for an estimation of ground truth, namely: (a) a commonly used seed growing method that is applied to the contrast enhanced T1-weighted image, and (b) a manual segmentation method using a custom-designed graphical user interface applied to the same raw image (T1-weighted) dataset. Emphasis is placed on measurement of intra and inter observer reproducibility using the proposed methods. Intra- and interobserver variation for the kNN method was 9% and 5%, respectively. The results for the SFCM method was a little better at 6% and 4%, respectively. For the seed growing method, the intra-observer variation was 6% and the interobserver variation was 17%, significantly larger when compared with the multispectral methods. The absolute tumor volume determined by the multispectral segmentation methods was consistently smaller than that observed for the reference methods. The results of this study are found to be very patient case-dependent. The results for SFCM suggest that it should be useful for relative measurements of tumor volume during therapy, but further studies are required. This work demonstrates the need for minimally supervised or unsupervised methods for tumor volume measurements.",
"In this paper, multi-modal magnetic resonance (MR) images are integrated into a tissue profile that aims at differentiating tumor components, edema and normal tissue. This is achieved by a tissue classification technique that learns the appearance models of different tissue types based on training samples identified by an expert and assigns tissue labels to each voxel. These tissue classifiers produce probabilistic tissue maps reflecting imaging characteristics of tumors and surrounding tissues that may be employed to aid in diagnosis, tumor boundary delineation, surgery and treatment planning. The main contributions of this work are: 1) conventional structural MR modalities are combined with diffusion tensor imaging data to create an integrated multimodality profile for brain tumors, and 2) in addition to the tumor components of enhancing and non-enhancing tumor types, edema is also characterized as a separate class in our framework. Classification performance is tested on 22 diverse tumor cases using cross-validation."
]
} |
1510.00598 | 2950652725 | In this paper we consider the Maximum Independent Set problem (MIS) on @math -EPG graphs. EPG (for Edge intersection graphs of Paths on a Grid) was introduced in edgeintersinglebend as the class of graphs whose vertices can be represented as simple paths on a rectangular grid so that two vertices are adjacent if and only if the corresponding paths share at least one edge of the underlying grid. The restricted class @math -EPG denotes EPG-graphs where every path has at most @math bends. The study of MIS on @math -EPG graphs has been initiated in wadsMIS where authors prove that MIS is NP-complete on @math -EPG graphs, and provide a polynomial @math -approximation. In this article we study the approximability and the fixed parameter tractability of MIS on @math -EPG. We show that there is no PTAS for MIS on @math -EPG unless P @math NP, even if there is only one shape of path, and even if each path has its vertical part or its horizontal part of length at most @math . This is optimal, as we show that if all paths have their horizontal part bounded by a constant, then MIS admits a PTAS. Finally, we show that MIS is FPT in the standard parameterization on @math -EPG restricted to only three shapes of path, and @math -hard on @math -EPG. The status for general @math -EPG (with the four shapes) is left open. | Let us now consider graphs with small number of bends. Notice first that @math -EPG graphs coincide with interval graphs. Several recent papers started the study EPG graphs with small number of bends. For example, it has been proved that @math -EPG contains trees @cite_5 , and that @math -EPG and @math -EPG respectively contain outerplanar graphs and planar graphs @cite_11 . We can also mention that the recognition of @math -EPG graphs is NP-hard, even when only one shape of path is allowed @cite_6 . | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_11"
],
"mid": [
"2023033525",
"2569784685",
"1992898832"
],
"abstract": [
"We combine the known notion of the edge intersection graphs of paths in a tree with a VLSI grid layout model to introduce the edge intersection graphs of paths on a grid. Let P be a collection of nontrivial simple paths on a grid G. We define the edge intersection graph EPG(P) of P to have vertices which correspond to the members of P, such that two vertices are adjacent in EPG(P) if the corresponding paths in P share an edge in G. An undirected graph G is called an edge intersection graph of paths on a grid (EPG) if G = EPG(P) for some P and G, and 〈P,G〉 is an EPG representation of G. We prove that every graph is an EPG graph. A turn of a path at a grid point is called a bend. We consider here EPG representations in which every path has at most a single bend, called B1-EPG representations and the corresponding graphs are called B1-EPG graphs. We prove that any tree is a B1-EPG graph. Moreover, we give a structural property that enables one to generate non B1-EPG graphs. Furthermore, we characterize the representation of cliques and chordless 4-cycles in B1-EPG graphs. We also prove that single bend paths on a grid have Strong Helly number 3. © 2009 Wiley Periodicals, Inc. NETWORKS, 2009",
"Abstract In this paper we continue the study of the edge intersection graphs of single bend paths on a rectangular grid (i.e., the edge intersection graphs where each vertex is represented by one of the following shapes: ⌞, ⌜, ⌟, ⌝). These graphs, called B1-EPG graphs, were first introduced by Golumbic et al. (2009) [Golumbic, M. C., M. Lipshteyn and M. Stern, Edge intersection graphs of single bend paths on a grid, Networks 54:3 (2009), 130–138]. We focus on the class [⌞] (the edge intersection graphs of ⌞-shapes) and show that testing for membership in [⌞] is NP-complete. We then give a characterization and polytime recognition algorithm for special subclasses of Split ∩ [⌞]. We also consider the natural subclasses of B1-EPG formed by the subsets of the four single bend shapes (i.e., {⌞}, {⌞, ⌜}, {⌞, ⌝}, {⌞, ⌜, ⌝} – note: all other subsets are isomorphic to these up to 90 degree rotation). We observe the expected strict inclusions and incomparability (i.e., [⌞] ⊊ [⌞, ⌜], [⌞, ⌝] ⊊ [⌞, ⌜, ⌝] ⊊ B1-EPG and [⌞, ⌜] is incomparable with [⌞, ⌝]).",
"The bend-number b(G) of a graph G is the minimum k such that G may be represented as the edge intersection graph of a set of grid paths with at most k bends. We confirm a conjecture of Biedl and Stern showing that the maximum bend-number of outerplanar graphs is 2. Moreover we improve the formerly known lower and upper bounds for the maximum bend-number of planar graphs from 2 and 5 to 3 and 4, respectively."
]
} |
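The EPG construction in the abstracts above is easy to make concrete: represent each path as a sequence of grid points, and join two vertices exactly when their paths share a grid edge (sharing only a grid point does not count). A minimal sketch; the point-list representation and all names are our own assumptions, not from the cited papers:

```python
from itertools import combinations

def grid_edges(path):
    """Set of grid edges traversed by a path given as a list of grid points."""
    return {frozenset(pair) for pair in zip(path, path[1:])}

def bends(path):
    """Number of turns: consecutive unit steps that change direction."""
    steps = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(path, path[1:])]
    return sum(1 for s, t in zip(steps, steps[1:]) if s != t)

def epg_graph(paths):
    """Edge intersection graph: vertices are path names; two are adjacent
    iff the paths share at least one grid edge."""
    edges = {name: grid_edges(p) for name, p in paths.items()}
    return {frozenset((a, b))
            for a, b in combinations(paths, 2) if edges[a] & edges[b]}
```

With `bends(p) <= k` for every path, the resulting graph is Bk-EPG by definition; `bends(p) <= 1` gives the B1-EPG case studied above.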
1510.00598 | 2950652725 | In this paper we consider the Maximum Independent Set problem (MIS) on @math -EPG graphs. EPG (for Edge intersection graphs of Paths on a Grid) was introduced in edgeintersinglebend as the class of graphs whose vertices can be represented as simple paths on a rectangular grid so that two vertices are adjacent if and only if the corresponding paths share at least one edge of the underlying grid. The restricted class @math -EPG denotes EPG-graphs where every path has at most @math bends. The study of MIS on @math -EPG graphs has been initiated in wadsMIS where authors prove that MIS is NP-complete on @math -EPG graphs, and provide a polynomial @math -approximation. In this article we study the approximability and the fixed parameter tractability of MIS on @math -EPG. We show that there is no PTAS for MIS on @math -EPG unless P @math NP, even if there is only one shape of path, and even if each path has its vertical part or its horizontal part of length at most @math . This is optimal, as we show that if all paths have their horizontal part bounded by a constant, then MIS admits a PTAS. Finally, we show that MIS is FPT in the standard parameterization on @math -EPG restricted to only three shapes of path, and @math -hard on @math -EPG. The status for general @math -EPG (with the four shapes) is left open. | In terms of forbidden induced subgraphs, it is also known that @math -EPG graphs exclude induced suns @math with @math , @math , and @math @cite_5 . | {
"cite_N": [
"@cite_5"
],
"mid": [
"2023033525"
],
"abstract": [
"We combine the known notion of the edge intersection graphs of paths in a tree with a VLSI grid layout model to introduce the edge intersection graphs of paths on a grid. Let P be a collection of nontrivial simple paths on a grid G. We define the edge intersection graph EPG(P) of P to have vertices which correspond to the members of P, such that two vertices are adjacent in EPG(P) if the corresponding paths in P share an edge in G. An undirected graph G is called an edge intersection graph of paths on a grid (EPG) if G = EPG(P) for some P and G, and 〈P,G〉 is an EPG representation of G. We prove that every graph is an EPG graph. A turn of a path at a grid point is called a bend. We consider here EPG representations in which every path has at most a single bend, called B1-EPG representations and the corresponding graphs are called B1-EPG graphs. We prove that any tree is a B1-EPG graph. Moreover, we give a structural property that enables one to generate non B1-EPG graphs. Furthermore, we characterize the representation of cliques and chordless 4-cycles in B1-EPG graphs. We also prove that single bend paths on a grid have Strong Helly number 3. © 2009 Wiley Periodicals, Inc. NETWORKS, 2009"
]
} |
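Since B0-EPG graphs are exactly the interval graphs (paths with no bends are horizontal segments), MIS is polynomial there via the classical greedy rule: repeatedly keep the interval with the smallest right endpoint among those disjoint from everything already chosen. A small sketch of that textbook algorithm, not of anything in the cited paper (which concerns the B1 case):

```python
def interval_mis(intervals):
    """Maximum independent set of an interval graph (= B0-EPG graph).

    Greedy by earliest right endpoint; intervals are closed, so intervals
    that merely touch are treated as overlapping.
    """
    chosen = []
    last_end = float('-inf')
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_end:
            chosen.append((left, right))
            last_end = right
    return chosen
```

The NP-completeness results above show that no such simple rule survives even one bend per path.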
1510.00844 | 2235832317 | Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdos--Renyi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrencies. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research. | There has been a flurry of activity in developing algorithms and implementations of SpGEMM for Graphics Processing Units (GPUs). Among those, the algorithm of @cite_31 uses the row-wise formulation of SpGEMM. By contrast, @cite_12 uses the data-parallel ESC (expansion, sorting, and contraction) formulation, which is based on outer products. One downside of the ESC formulation is that expansion might create @math intermediate storage in the worst case, depending on the number of additions performed immediately in shared memory when possible, which might be asymptotically larger than the sizes of the inputs and outputs combined. The recent work of Liu and Vinter is currently the fastest implementation on GPUs and it also addresses heterogeneous CPU-GPU processors @cite_42 . | {
"cite_N": [
"@cite_31",
"@cite_42",
"@cite_12"
],
"mid": [
"1973918431",
"2168931017",
"1980282429"
],
"abstract": [
"We present an algorithm for general sparse matrix-matrix multiplication (SpGEMM) on many-core architectures, such as GPUs. SpGEMM is implemented by iterative row merging, similar to merge sort, except that elements with duplicate column indices are aggregated on the fly. The main kernel merges small numbers of sparse rows at once using subwarps of threads to realize an early compression effect which reduces the overhead of global memory accesses. The performance is compared with a parallel CPU implementation as well as with three GPU-based implementations. Measurements performed for computing the matrix square for 21 sparse matrices show that the proposed method consistently outperforms the other methods. Analysis showed that the performance is achieved by utilizing the compression effect and the GPU caching architecture. An improved performance was also found for computing Galerkin products which are required by algebraic multigrid solvers. The performance was particularly good for seven-point stencil ma...",
"General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method (AMG), breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM implementation has to handle extra irregularity from three aspects: (1) the number of nonzero entries in the resulting sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the resulting sparse matrix dominate the execution time, and (3) load balancing must account for sparse data in both input matrices. In this work we propose a framework for SpGEMM on GPUs and emerging CPU-GPU heterogeneous processors. This framework particularly focuses on the above three problems. Memory pre-allocation for the resulting matrix is organized by a hybrid method that saves a large amount of global memory space and efficiently utilizes the very limited on-chip scratchpad memory. Parallel insert operations of the nonzero entries are implemented through the GPU merge path algorithm that is experimentally found to be the fastest GPU merge approach. Load balancing builds on the number of necessary arithmetic operations on the nonzero entries and is guaranteed in all stages. Compared with the state-of-the-art CPU and GPU SpGEMM methods, our approach delivers excellent absolute performance and relative speedups on various benchmarks multiplying matrices with diverse sparsity structures. Furthermore, on heterogeneous processors, our SpGEMM approach achieves higher throughput by using re-allocatable shared virtual memory. 
We design a framework for SpGEMM on modern manycore processors using the CSR format.We present a hybrid method for pre-allocating the resulting sparse matrix.We propose an efficient parallel insert method for long rows of the resulting matrix.We develop a heuristic-based load balancing strategy.Our approach significantly outperforms other known CPU and GPU SpGEMM methods.",
"Sparse matrix--matrix multiplication (SpGEMM) is a key operation in numerous areas from information to the physical sciences. Implementing SpGEMM efficiently on throughput-oriented processors, such as the graphics processing unit (GPU), requires the programmer to expose substantial fine-grained parallelism while conserving the limited off-chip memory bandwidth. Balancing these concerns, we decompose the SpGEMM operation into three highly parallel phases: expansion, sorting, and contraction, and introduce a set of complementary bandwidth-saving performance optimizations. Our implementation is fully general and our optimization strategy adaptively processes the SpGEMM workload row-wise to substantially improve performance by decreasing the work complexity and utilizing the memory hierarchy more effectively."
]
} |
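The ESC (expansion, sorting, contraction) formulation discussed in the row above can be sketched in coordinate format. Note how the expanded list holds one triple per scalar multiply, which is exactly the flops-sized intermediate storage the related-work text warns can exceed nnz(A)+nnz(B)+nnz(C). A toy sequential sketch under our own naming, not the GPU implementation of the cited papers:

```python
from collections import defaultdict

def spgemm_esc(A, B):
    """ESC-style SpGEMM on dict-of-coordinates inputs {(i, k): value}.

    Returns (C, flops) where flops is the size of the expanded
    intermediate list (one entry per scalar multiplication).
    """
    b_by_row = defaultdict(list)
    for (k, j), v in B.items():
        b_by_row[k].append((j, v))

    # Expansion: emit one (i, j, a*b) triple per multiply.
    expanded = [(i, j, a * b)
                for (i, k), a in A.items()
                for j, b in b_by_row.get(k, [])]

    # Sorting: bring duplicate (i, j) coordinates together.
    expanded.sort(key=lambda t: (t[0], t[1]))

    # Contraction: sum runs of equal coordinates.
    C = {}
    for i, j, v in expanded:
        C[(i, j)] = C.get((i, j), 0) + v
    return {ij: v for ij, v in C.items() if v != 0}, len(expanded)
```

The row-wise (Gustavson) formulation avoids materializing `expanded` by accumulating each output row as it is produced, which is the main contrast drawn above.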
1510.00844 | 2235832317 | Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdos--Renyi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrencies. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research. | In distributed memory, under many definitions of scalability, all known parallel SpGEMM algorithms are unscalable due to increased communication costs relative to arithmetic operations. For instance, there is no way to keep the parallel efficiency ( @math ) fixed for any constant @math as we increase the number of processors @cite_16 . Recently, two attempts have been made to model the communication costs of SpGEMM in a more fine grained manner. Akbudak and Aykanat @cite_45 proposed the first hypergraph model for outer-product formulation of SpGEMM. Unfortunately, a symbolic SpGEMM computation has to be performed initially as the hypergraph model needs full access to the computational pattern that forms the output matrix. @cite_28 recently proposed hypergraph models for a class of SpGEMM algorithms more general than Akbudak and Aykanat considered. Their model also requires the sparsity structure of the output matrix and the number of vertices in the hypergraph is @math , making the approach impractical. | {
"cite_N": [
"@cite_28",
"@cite_16",
"@cite_45"
],
"mid": [
"1985181952",
"2093171965",
""
],
"abstract": [
"The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.",
"Abstract The scalability of a parallel algorithm on a parallel architecture is a measure of its capacity to effectively utilize an increasing number of processors. Scalability analysis may be used to select the best algorithm-architecture combination for a problem under different constraints on the growth of the problem size and the number of processors. It may be used to predict the performance of a parallel algorithm and a parallel architecture for a large number of processors from the known performance on fewer processors. For a fixed problem size, it may be used to determine the optimal number of processors to be used and the maximum possible speedup that can be obtained. The objectives of this paper are to critically assess the state of the art in the theory of scalability analysis, and to motivate further research on the development of new and more comprehensive analytical tools to study the scalability of parallel algorithms and architectures. We survey a number of techniques and formalisms that have been developed for studying scalability issues, and discuss their interrelationships. For example, we derive an important relationship between time-constrained scaling and the isoefficiency function. We point out some of the weaknesses of the existing schemes for measuring scalability, and discuss possible ways of extending them.",
""
]
} |
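The unscalability claim in the row above can be illustrated with a toy performance model: if per-processor communication shrinks only as 1/sqrt(P) while useful work shrinks as 1/P, the parallel efficiency E = T1/(P*TP) must decay with P, so no constant efficiency can be maintained. The 1/sqrt(P) shape and the constant below are illustrative assumptions, not measurements from the cited works:

```python
import math

def parallel_efficiency(t1, p, comm_per_proc):
    """E = T1 / (p * Tp), where Tp = useful work per processor + communication."""
    tp = t1 / p + comm_per_proc(p)
    return t1 / (p * tp)

# Hypothetical workload: communication per processor shrinks only as
# 1/sqrt(p) (the 0.01 constant is made up), while work shrinks as 1/p.
t1 = 1e6
comm = lambda p: 0.01 * t1 / math.sqrt(p)
effs = [parallel_efficiency(t1, p, comm) for p in (16, 256, 4096)]
# E(p) = 1 / (1 + 0.01 * sqrt(p)): efficiency decays as p grows.
```

In this model E cannot be held fixed by any constant, mirroring the isoefficiency argument cited above.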
1510.00844 | 2235832317 | Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdos--Renyi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrencies. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research. | We also mention that there has been significant research devoted to dense matrix multiplication in distributed-memory settings. In particular, the development of so-called 3D algorithms for dense matrix multiplication spans multiple decades; see @cite_35 @cite_36 @cite_2 @cite_0 and the references therein. Many aspects of our 3D algorithm for sparse matrix multiplication are derived from the dense case, though there are important differences as we detail below. | {
"cite_N": [
"@cite_36",
"@cite_35",
"@cite_0",
"@cite_2"
],
"mid": [
"2010747199",
"2029342163",
"201315547",
"2056999868"
],
"abstract": [
"We present lower bounds on the amount of communication that matrix multiplication algorithms must perform on a distributed-memory parallel computer. We denote the number of processors by P and the dimension of square matrices by n. We show that the most widely used class of algorithms, the so-called two-dimensional (2D) algorithms, are optimal, in the sense that in any algorithm that only uses O(n^2/P) words of memory per processor, at least one processor must send or receive Ω(n^2/P^(1/2)) words. We also show that algorithms from another class, the so-called three-dimensional (3D) algorithms, are also optimal. These algorithms use replication to reduce communication. We show that in any algorithm that uses O(n^2/P^(2/3)) words of memory per processor, at least one processor must send or receive Ω(n^2/P^(2/3)) words. Furthermore, we show a continuous tradeoff between the size of local memories and the amount of communication that must be performed. The 2D and 3D bounds are essentially instantiations of this tradeoff. We also show that if the input is distributed across the local memories of multiple nodes without replication, then Ω(n^2) words must cross any bisection cut of the machine. All our bounds apply only to conventional Θ(n^3) algorithms. They do not apply to Strassen's algorithm or other o(n^3) algorithms.",
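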
"Matrix multiplication algorithms for cube connected and perfect shuffle computers are presented. It is shown that in both these models two @math matrices can be multiplied in @math time when @math , @math , processing elements (PEs) are available. When only @math , @math , PEs are available, two @math matrices can be multiplied in @math time. It is shown that many graph problems can be solved efficiently using the matrix multiplication algorithms.",
"Extra memory allows parallel matrix multiplication to be done with asymptotically less communication than Cannon's algorithm and be faster in practice. \"3D\" algorithms arrange the p processors in a 3D array, and store redundant copies of the matrices on each of p^(1/3) layers. \"2D\" algorithms such as Cannon's algorithm store a single copy of the matrices on a 2D array of processors. We generalize these 2D and 3D algorithms by introducing a new class of \"2.5D algorithms\". For matrix multiplication, we can take advantage of any amount of extra memory to store c copies of the data, for any c ∈ {1, 2, ..., ⌊p^(1/3)⌋}, to reduce the bandwidth cost of Cannon's algorithm by a factor of c^(1/2) and the latency cost by a factor c^(3/2). We also show that these costs reach the lower bounds, modulo polylog(p) factors. We introduce a novel algorithm for 2.5D LU decomposition. To the best of our knowledge, this LU algorithm is the first to minimize communication along the critical path of execution in the 3D case. Our 2.5D LU algorithm uses communication-avoiding pivoting, a stable alternative to partial-pivoting. We prove a novel lower bound on the latency cost of 2.5D and 3D LU factorization, showing that while c copies of the data can also reduce the bandwidth by a factor of c^(1/2), the latency must increase by a factor of c^(1/2), so that the 2D LU algorithm (c = 1) in fact minimizes latency. We provide implementations and performance results for 2D and 2.5D versions of all the new algorithms. Our results demonstrate that 2.5D matrix multiplication and LU algorithms strongly scale more efficiently than 2D algorithms. Each of our 2.5D algorithms performs over 2X faster than the corresponding 2D algorithm for certain problem sizes on 65,536 cores of a BG/P supercomputer.",
"In this paper, we give a straight forward, highly efficient, scalable implementation of common matrix multiplication operations. The algorithms are much simpler than previously published methods, yield better performance, and require less work space. MPI implementations are given, as are performance results on the Intel Paragon system."
]
} |
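The 2D versus 2.5D/3D bandwidth bounds quoted in the abstracts above (O(n^2/sqrt(P)) versus O(n^2/sqrt(cP)), constants dropped) are easy to compare numerically. This is just the stated asymptotic formulas, not a model of any real machine:

```python
def words_2d(n, p):
    """Per-processor bandwidth cost of 2D algorithms: ~ n^2 / sqrt(P)."""
    return n * n / p ** 0.5

def words_25d(n, p, c):
    """2.5D cost with c replicas of the data: ~ n^2 / sqrt(c * P).

    c = 1 recovers the 2D bound; c = p**(1/3) gives the 3D bound
    n^2 / p**(2/3), a sqrt(c)-fold reduction over 2D.
    """
    return n * n / (c * p) ** 0.5
```

The same sqrt(c) reduction is what the 3D SpGEMM formulation in this row's paper seeks to realize in the sparse setting.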
1510.00249 | 2233019047 | Compounding of natural language units is a very common phenomenon. In this paper, we show, for the first time, that Twitter hashtags, which could be considered correlates of such linguistic units, undergo compounding. We identify reasons for this compounding and propose a prediction model that can identify, with 77.07% accuracy, whether a pair of compounding hashtags shall become popular in the near future (i.e., 2 months after compounding). At longer times T = 6, 10 months the accuracies are 77.52% and 79.13% respectively. This technique has strong implications for trending hashtag recommendation since newly formed hashtag compounds can be recommended early, even before the compounding has taken place. Further, humans can predict compounds with an overall accuracy of only 48.7% (treated as baseline). Notably, while humans can discriminate the relatively easier cases, the automatic framework is successful in classifying the relatively harder cases. | * Language use in social media There has been considerable work focusing on the content and its linguistic aspects in social media. Honeycutt and Herring @cite_50 analyzed conversational exchanges in Twitter focusing on mentions. @cite_52 developed an unsupervised learning approach to identify conversational structure from open-topic conversations. Danescu-Niculescu-Mizil @cite_6 studied how people adopt linguistic styles while in conversation on Twitter. @cite_3 studied the role of geography and demographics on the language in Twitter. @cite_31 investigated the cultural differences in Twitter's language. @cite_16 studied the characterization of linguistic and psycholinguistic aspects in Twitter. @cite_48 studied how people curse on Twitter. @cite_32 performed a large scale quantitative analysis on deleted tweets. | {
"cite_N": [
"@cite_48",
"@cite_52",
"@cite_3",
"@cite_6",
"@cite_32",
"@cite_50",
"@cite_31",
"@cite_16"
],
"mid": [
"1979296086",
"1654173042",
"2142889507",
"2160176417",
"2032695641",
"2140173168",
"2396764674",
"2189286218"
],
"abstract": [
"Cursing is not uncommon during conversations in the physical world: 0.5% to 0.7% of all the words we speak are curse words, given that 1% of all the words are first-person plural pronouns (e.g., we, us, our). On social media, people can instantly chat with friends without face-to-face interaction, usually in a more public fashion and broadly disseminated through highly connected social network. Will these distinctive features of social media lead to a change in people's cursing behavior? In this paper, we examine the characteristics of cursing activity on a popular social media platform - Twitter, involving the analysis of about 51 million tweets and about 14 million users. In particular, we explore a set of questions that have been recognized as crucial for understanding cursing in offline communications by prior studies, including the ubiquity, utility, and contextual dependencies of cursing.",
"We propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain. Trained on a corpus of noisy Twitter conversations, our method discovers dialogue acts by clustering raw utterances. Because it accounts for the sequential behaviour of these acts, the learned model can provide insight into the shape of communication in a new medium. We address the challenge of evaluating the emergent model with a qualitative visualization and an intrinsic conversation ordering task. This work is inspired by a corpus of 1.3 million Twitter conversations, which will be made publicly available. This huge amount of data, available only because Twitter blurs the line between chatting and publishing, highlights the need to be able to adapt quickly to a new medium.",
"The rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variation. In this paper, we present a multi-level generative model that reasons jointly about latent topics and geographical regions. High-level topics such as \"sports\" or \"entertainment\" are rendered differently in each geographic region, revealing topic-specific regional distinctions. Applied to a new dataset of geotagged microblogs, our model recovers coherent topics and their regional variants, while identifying geographic areas of linguistic consistency. The model also enables prediction of an author's geographic location from raw text, outperforming both text regression and supervised topic models.",
"The psycholinguistic theory of communication accommodation accounts for the general observation that participants in conversations tend to converge to one another's communicative behavior: they coordinate in a variety of dimensions including choice of words, syntax, utterance length, pitch and gestures. In its almost forty years of existence, this theory has been empirically supported exclusively through small-scale or controlled laboratory studies. Here we address this phenomenon in the context of Twitter conversations. Undoubtedly, this setting is unlike any other in which accommodation was observed and, thus, challenging to the theory. Its novelty comes not only from its size, but also from the non real-time nature of conversations, from the 140 character length restriction, from the wide variety of social relation types, and from a design that was initially not geared towards conversation at all. Given such constraints, it is not clear a priori whether accommodation is robust enough to occur given the constraints of this new environment. To investigate this, we develop a probabilistic framework that can model accommodation and measure its effects. We apply it to a large Twitter conversational dataset specifically developed for this task. This is the first time the hypothesis of linguistic style accommodation has been examined (and verified) in a large scale, real world setting. Furthermore, when investigating concepts such as stylistic influence and symmetry of accommodation, we discover a complexity of the phenomenon which was never observed before. We also explore the potential relation between stylistic influence and network features commonly associated with social status.",
"This paper describes an empirical study of 1.6M deleted tweets collected over a continuous one-week period from a set of 292K Twitter users. We examine several aggregate properties of deleted tweets, including their connections to other tweets (e.g., whether they are replies or retweets), the clients used to produce them, temporal aspects of deletion, and the presence of geotagging information. Some significant differences were discovered between the two collections, namely in the clients used to post them, their conversational aspects, the sentiment vocabulary present in them, and the days of the week they were posted. However, in other dimensions for which analysis was possible, no substantial differences were found. Finally, we discuss some ramifications of this work for understanding Twitter usage and management of one's privacy.",
"The microblogging service Twitter is in the process of being appropriated for conversational interaction and is starting to be used for collaboration, as well. In order to determine how well Twitter supports user-to-user exchanges, what people are using Twitter for, and what usage or design modifications would make it (more) usable as a tool for collaboration, this study analyzes a corpus of naturally-occurring public Twitter messages (tweets), focusing on the functions and uses of the @ sign and the coherence of exchanges. The findings reveal a surprising degree of conversationality, facilitated especially by the use of @ as a marker of addressivity, and shed light on the limitations of Twitter's current design for collaborative use.",
"Despite the widespread adoption of Twitter internationally, little research has investigated the differences among users of different languages. In prior research, the natural tendency has been to assume that the behaviors of English users generalize to other language users. We studied 62 million tweets collected over a four-week period and found that more than 100 languages were used. Only half of the tweets were in English (51%). Other popular languages including Japanese, Portuguese, Indonesian, and Spanish together accounted for 39% of the tweets. Examining users of the top 10 languages, we discovered cross-language differences in adoption of features such as URLs, hashtags, mentions, replies, and retweets. We discuss our work’s implications for research on large-scale social systems and design of cross-cultural communication tools.",
"Twitter has become the de facto information sharing and communication platform. Given the factors that influence language on Twitter ‐ size limitation as well as communication and content-sharing mechanisms ‐ there is a continuing debate about the position of Twitter’s language in the spectrum of language on various established mediums. These include SMS and chat on the one hand (size limitations) and email (communication), blogs and newspapers (content sharing) on the other. To provide a way of determining this, we propose a computational framework that offers insights into the linguistic style of all these mediums. Our framework consists of two parts. The first part builds upon a set of linguistic features to quantify the language of a given medium. The second part introduces a flexible factorization framework, SOCLIN, which conducts a psycholinguistic analysis of a given medium with the help of an external cognitive and affective knowledge base. Applying this analytical framework to various corpora from several major mediums, we gather statistics in order to compare the linguistics of Twitter with these other mediums via a quantitative comparative study. We present several key insights: (1) Twitter’s language is surprisingly more conservative, and less informal than SMS and online chat; (2) Twitter users appear to be developing linguistically unique styles; (3) Twitter’s usage of temporal references is similar to SMS and chat; and (4) Twitter has less variation of affect than other more formal mediums. The language of Twitter can thus be seen as a projection of a more formal register into a size-restricted space."
]
} |
1510.00249 | 2233019047 | Compounding of natural language units is a very common phenomenon. In this paper, we show, for the first time, that Twitter hashtags, which could be considered correlates of such linguistic units, undergo compounding. We identify reasons for this compounding and propose a prediction model that can identify, with 77.07% accuracy, whether a pair of compounding hashtags shall become popular in the near future (i.e., 2 months after compounding). At longer times T = 6, 10 months the accuracies are 77.52% and 79.13% respectively. This technique has strong implications for trending hashtag recommendation since newly formed hashtag compounds can be recommended early, even before the compounding has taken place. Further, humans can predict compounds with an overall accuracy of only 48.7% (treated as baseline). Notably, while humans can discriminate the relatively easier cases, the automatic framework is successful in classifying the relatively harder cases. | While retweets and followers support a hashtag's growth, they also paradoxically undermine its persistence. Various researchers have tried to systematically analyze the features that contribute to the growth and stabilization of hashtags. Yang and Scott @cite_2 examined the roles of "relevance" and "exposure" for hashtag adoption. @cite_8 studied the duality of hashtags as topical identifiers and a symbol of community membership. @cite_17 studied the growth, survival, and context of novel hashtags during the 2012 U.S. presidential debate. They proposed a framework to capture dynamics of hashtags based on their topicality, interactivity, diversity, and prominence. | {
"cite_N": [
"@cite_8",
"@cite_17",
"@cite_2"
],
"mid": [
"2104894372",
"1551048630",
"1499517307"
],
"abstract": [
"Researchers and social observers have both believed that hashtags, as a new type of organizational objects of information, play a dual role in online microblogging communities (e.g., Twitter). On one hand, a hashtag serves as a bookmark of content, which links tweets with similar topics; on the other hand, a hashtag serves as the symbol of a community membership, which bridges a virtual community of users. Are the real users aware of this dual role of hashtags? Is the dual role affecting their behavior of adopting a hashtag? Is hashtag adoption predictable? We take the initiative to investigate and quantify the effects of the dual role on hashtag adoption. We propose comprehensive measures to quantify the major factors of how a user selects content tags as well as joins communities. Experiments using large scale Twitter datasets prove the effectiveness of the dual role, where both the content measures and the community measures significantly correlate to hashtag adoption on Twitter. With these measures as features, a machine learning model can effectively predict the future adoption of hashtags that a user has never used before.",
"We examine the growth, survival, and context of 256 novel hashtags during the 2012 U.S. presidential debates. Our analysis reveals the trajectories of hashtag use fall into two distinct classes: \"winners\" that emerge more quickly and are sustained for longer periods of time than other \"also-rans\" hashtags. We propose a \"conversational vibrancy\" framework to capture dynamics of hashtags based on their topicality, interactivity, diversity, and prominence. Statistical analyses of the growth and persistence of hashtags reveal novel relationships between features of this framework and the relative success of hashtags. Specifically, retweets always contribute to faster hashtag adoption, replies extend the life of \"winners\" while having no effect on \"also-rans.\" This is the first study on the lifecycle of hashtag adoption and use in response to purely exogenous shocks. We draw on theories of uses and gratification, organizational ecology, and language evolution to discuss these findings and their implications for understanding social influence and collective action in social media more generally.",
"We present results of network analyses of information diffusion on Twitter, via users’ ongoing social interactions as denoted by “@username” mentions. Incorporating survival analysis, we constructed a novel model to capture the three major properties of information diffusion: speed, scale, and range. On the whole, we find that some properties of the tweets themselves predict greater information propagation but that properties of the users, the rate with which a user is mentioned historically in particular, are equal or stronger predictors. Implications for end users and system designers are discussed."
]
} |
1509.09308 | 2949245006 | Deep convolutional neural networks take GPU days of compute time to train on large data sets. Pedestrian detection for self driving cars requires very low latency. Image recognition for mobile phones is constrained by limited processing resources. The success of convolutional neural networks in these situations is limited by how fast we can compute them. Conventional FFT based convolution is fast for large filters, but state of the art convolutional neural networks use small, 3x3 filters. We introduce a new class of fast algorithms for convolutional neural networks using Winograd's minimal filtering algorithms. The algorithms compute minimal complexity convolution over small tiles, which makes them fast with small filters and small batch sizes. We benchmark a GPU implementation of our algorithm with the VGG network and show state of the art throughput at batch sizes from 1 to 64. | The Strassen algorithm for fast matrix multiplication was used by Cong and Xiao @cite_10 to reduce the number of convolutions in a convnet layer, thereby reducing its total arithmetic complexity. The authors also suggested that more techniques from arithmetic complexity theory might be applicable to convnets. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1005811612"
],
"abstract": [
"Convolutional Neural Networks (CNNs) have been successfully used for many computer vision applications. It would be beneficial to these applications if the computational workload of CNNs could be reduced. In this work we analyze the linear algebraic properties of CNNs and propose an algorithmic modification to reduce their computational workload. An up to a 47% reduction can be achieved without any change in the image recognition results or the addition of any hardware accelerators."
]
} |
1509.08955 | 2227670993 | The GLEON Research And PRAGMA Lake Expedition -- GRAPLE -- is a collaborative effort between computer science and lake ecology researchers. It aims to improve our understanding and predictive capacity of the threats to the water quality of our freshwater resources, including climate change. This paper presents GRAPLEr, a distributed computing system used to address the modeling needs of GRAPLE researchers. GRAPLEr integrates and applies overlay virtual network, high-throughput computing, and Web service technologies in a novel way. First, its user-level IP-over-P2P (IPOP) overlay network allows compute and storage resources distributed across independently-administered institutions (including private and public clouds) to be aggregated into a common virtual network, despite the presence of firewalls and network address translators. Second, resources aggregated by the IPOP virtual network run unmodified high-throughput computing middleware (HTCondor) to enable large numbers of model simulations to be executed concurrently across the distributed computing resources. Third, a Web service interface allows end users to submit job requests to the system using client libraries that integrate with the R statistical computing environment. The paper presents the GRAPLEr architecture, describes its implementation and reports on its performance for batches of General Lake Model (GLM) simulations across three cloud infrastructures (University of Florida, CloudLab, and Microsoft Azure). | Several HTCondor-based high-throughput computing systems have been deployed in support of scientific applications. One representative example is the Open Science Grid (OSG @cite_4 ), which features a distributed set of HTCondor clusters. In contrast to OSG, which expects each site to run and manage its own HTCondor pool, GRAPLEr allows sites to join a collaborative, distributed cluster by joining its virtual HTCondor pool via the IPOP virtual network overlay. 
This reduces the barrier to entry for participants to contribute nodes to the network -- e.g., by simply deploying one or more VMs on a private or public cloud. Furthermore, GRAPLEr exposes a domain-tailored Web service interface that lowers the barrier to entry for end users. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2124088880"
],
"abstract": [
"The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org."
]
} |
1509.08955 | 2227670993 | The GLEON Research And PRAGMA Lake Expedition -- GRAPLE -- is a collaborative effort between computer science and lake ecology researchers. It aims to improve our understanding and predictive capacity of the threats to the water quality of our freshwater resources, including climate change. This paper presents GRAPLEr, a distributed computing system used to address the modeling needs of GRAPLE researchers. GRAPLEr integrates and applies overlay virtual network, high-throughput computing, and Web service technologies in a novel way. First, its user-level IP-over-P2P (IPOP) overlay network allows compute and storage resources distributed across independently-administered institutions (including private and public clouds) to be aggregated into a common virtual network, despite the presence of firewalls and network address translators. Second, resources aggregated by the IPOP virtual network run unmodified high-throughput computing middleware (HTCondor) to enable large numbers of model simulations to be executed concurrently across the distributed computing resources. Third, a Web service interface allows end users to submit job requests to the system using client libraries that integrate with the R statistical computing environment. The paper presents the GRAPLEr architecture, describes its implementation and reports on its performance for batches of General Lake Model (GLM) simulations across three cloud infrastructures (University of Florida, CloudLab, and Microsoft Azure). | The NEWT @cite_2 project also provides a RESTful-based Web service interface to High-Performance Computing (HPC) systems. NEWT is focused on providing access to a particular set of resources (NERSC), and does not address the need for a distributed set of (virtualized) computing resources to be interconnected by overlay virtual networks. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1978248517"
],
"abstract": [
"The NERSC Web Toolkit (NEWT) brings High Performance Computing (HPC) to the web through easy to write web applications. Our work seeks to make HPC resources more accessible and useful to scientists who are more comfortable with the web than they are with command line interfaces. The effort required to get a fully functioning web application is decreasing, thanks to Web 2.0 standards and protocols such as AJAX, HTML5, JSON and REST. We believe HPC can speak the same language as the web, by leveraging these technologies to interface with existing grid technologies. NEWT presents computational and data resources through simple transactions against URIs. In this paper we describe our approach to building web applications for science using a RESTful web service. We present the NEWT web service and describe how it can be used to access HPC resources in a web browser environment using AJAX and JSON. We discuss our REST API for NEWT, and address specific challenges in integrating a heterogeneous collection of backend resources under a single web service. We provide examples of client side applications that leverage NEWT to access resources directly in the web browser. The goal of this effort is to create a model whereby HPC becomes easily accessible through the web, allowing users to interact with their scientific computing, data and applications entirely through such web interfaces."
]
} |
1509.08902 | 2952207198 | We propose a novel algorithm for the task of supervised discriminative distance learning by nonlinearly embedding vectors into a low dimensional Euclidean space. We work in the challenging setting where supervision is with constraints on similar and dissimilar pairs while training. The proposed method is derived by an approximate kernelization of a linear Mahalanobis-like distance metric learning algorithm and can also be seen as a kernel neural network. The number of model parameters and test time evaluation complexity of the proposed method are O(dD) where D is the dimensionality of the input features and d is the dimension of the projection space - this is in contrast to the usual kernelization methods as, unlike them, the complexity does not scale linearly with the number of training examples. We propose a stochastic gradient based learning algorithm which makes the method scalable (w.r.t. the number of training examples), while being nonlinear. We train the method with up to half a million training pairs of 4096 dimensional CNN features. We give empirical comparisons with relevant baselines on seven challenging datasets for the task of low dimensional semantic category based image retrieval. | Metric learning has been an active topic of research (we encourage the interested reader to see @cite_28 @cite_60 for extensive surveys) with applications in computer vision to face verification @cite_1 , person re-identification @cite_18 , image auto-annotation @cite_23 , visual tracking @cite_66 , and nearest neighbor based image classification @cite_0 . Starting from the seminal paper of Xing et al. @cite_61 , many different approaches for learning metrics have been proposed @cite_38 @cite_27 @cite_3 @cite_49 @cite_42 @cite_46 @cite_56 @cite_62 @cite_37 @cite_15 . | {
"cite_N": [
"@cite_61",
"@cite_38",
"@cite_18",
"@cite_62",
"@cite_37",
"@cite_60",
"@cite_28",
"@cite_15",
"@cite_42",
"@cite_1",
"@cite_3",
"@cite_56",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_49",
"@cite_46",
"@cite_66"
],
"mid": [
"2117154949",
"",
"2157598322",
"",
"",
"",
"1898424075",
"",
"",
"1782590233",
"",
"",
"1499991161",
"",
"2536305071",
"",
"",
""
],
"abstract": [
"Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.",
"",
"Person re-identification is a fundamental task in automated video surveillance and has been an area of intense research in the past few years. Given an image video of a person taken from one camera, re-identification is the process of identifying the person from images videos taken from a different camera. Re-identification is indispensable in establishing consistent labeling across multiple cameras or even within the same camera to re-establish disconnected or lost tracks. Apart from surveillance it has applications in robotics, multimedia and forensics. Person re-identification is a difficult problem because of the visual ambiguity and spatiotemporal uncertainty in a person's appearance across different cameras. These difficulties are often compounded by low resolution images or poor quality video feeds with large amounts of unrelated information in them that does not aid re-identification. The spatial or temporal conditions to constrain the problem are hard to capture. However, the problem has received significant attention from the computer vision research community due to its wide applicability and utility. In this paper, we explore the problem of person re-identification and discuss the current solutions. Open issues and challenges of the problem are highlighted with a discussion on potential directions for further research.",
"",
"",
"",
"The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a lot of interest in machine learning and related fields for the past ten years. This survey paper proposes a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come.",
"",
"",
"Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version.",
"",
"",
"We are interested in large-scale image classification and especially in the setting where images corresponding to new or existing classes are continuously added to the training set. Our goal is to devise classifiers which can incorporate such images and classes on-the-fly at (near) zero cost. We cast this problem into one of learning a metric which is shared across all classes and explore k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers. We learn metrics on the ImageNet 2010 challenge data set, which contains more than 1.2M training images of 1K classes. Surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier, and has comparable performance to linear SVMs. We also study the generalization performance, among others by using the learned metric on the ImageNet-10K dataset, and we obtain competitive performance. Finally, we explore zero-shot classification, and show how the zero-shot model can be combined very effectively with small training datasets.",
"",
"Image auto-annotation is an important open problem in computer vision. For this task we propose TagProp, a discriminatively trained nearest neighbor model. Tags of test images are predicted using a weighted nearest-neighbor model to exploit labeled training images. Neighbor weights are based on neighbor rank or distance. TagProp allows the integration of metric learning by directly maximizing the log-likelihood of the tag predictions in the training set. In this manner, we can optimally combine a collection of image similarity metrics that cover different aspects of image content, such as local shape descriptors, or global color histograms. We also introduce a word specific sigmoidal modulation of the weighted neighbor tag predictions to boost the recall of rare words. We investigate the performance of different variants of our model and compare to existing work. We present experimental results for three challenging data sets. On all three, TagProp makes a marked improvement as compared to the current state-of-the-art.",
"",
"",
""
]
} |
1509.08902 | 2952207198 | We propose a novel algorithm for the task of supervised discriminative distance learning by nonlinearly embedding vectors into a low dimensional Euclidean space. We work in the challenging setting where supervision is with constraints on similar and dissimilar pairs while training. The proposed method is derived by an approximate kernelization of a linear Mahalanobis-like distance metric learning algorithm and can also be seen as a kernel neural network. The number of model parameters and test time evaluation complexity of the proposed method are O(dD) where D is the dimensionality of the input features and d is the dimension of the projection space - this is in contrast to the usual kernelization methods as, unlike them, the complexity does not scale linearly with the number of training examples. We propose a stochastic gradient based learning algorithm which makes the method scalable (w.r.t. the number of training examples), while being nonlinear. We train the method with up to half a million training pairs of 4096 dimensional CNN features. We give empirical comparisons with relevant baselines on seven challenging datasets for the task of low dimensional semantic category based image retrieval. | Different types of supervision have been used for learning metrics. While some methods require class level supervision @cite_64 , others only require triplet constraints, @math , where @math should be closer to @math than to @math @cite_2 , and others still, only pairwise constraints, @math where @math if @math are similar and @math if they are dissimilar @cite_52 . | {
"cite_N": [
"@cite_64",
"@cite_52",
"@cite_2"
],
"mid": [
"205159212",
"2048110836",
""
],
"abstract": [
"A dental model trimmer having an easily replaceable abrasive surfaced member. The abrasive surfaced member is contained within a housing and is releasably coupled onto a back plate assembly which is driven by a drive motor. The housing includes a releasably coupled cover plate providing access to the abrasive surfaced member. An opening formed in the cover plate exposes a portion of the abrasive surface so that a dental model workpiece can be inserted into the opening against the abrasive surface to permit work on the dental model workpiece. A tilting work table beneath the opening supports the workpiece during the operation. A stream of water is directed through the front cover onto the abrasive surface and is redirected against this surface by means of baffles positioned inside the cover plate. The opening includes a beveled boundary and an inwardly directed lip permitting angular manipulation of the workpiece, better visibility of the workpiece and maximum safety.",
"This paper introduces Pairwise Constrained Component Analysis (PCCA), a new algorithm for learning distance metrics from sparse pairwise similarity dissimilarity constraints in high dimensional input space, problem for which most existing distance metric learning approaches are not adapted. PCCA learns a projection into a low-dimensional space where the distance between pairs of data points respects the desired constraints, exhibiting good generalization properties in presence of high dimensional data. The paper also shows how to efficiently kernelize the approach. PCCA is experimentally validated on two challenging vision tasks, face verification and person re-identification, for which we obtain state-of-the-art results.",
""
]
} |
1509.08902 | 2952207198 | We propose a novel algorithm for the task of supervised discriminative distance learning by nonlinearly embedding vectors into a low dimensional Euclidean space. We work in the challenging setting where supervision is with constraints on similar and dissimilar pairs while training. The proposed method is derived by an approximate kernelization of a linear Mahalanobis-like distance metric learning algorithm and can also be seen as a kernel neural network. The number of model parameters and test time evaluation complexity of the proposed method are O(dD) where D is the dimensionality of the input features and d is the dimension of the projection space - this is in contrast to the usual kernelization methods as, unlike them, the complexity does not scale linearly with the number of training examples. We propose a stochastic gradient based learning algorithm which makes the method scalable (w.r.t. the number of training examples), while being nonlinear. We train the method with up to half a million training pairs of 4096 dimensional CNN features. We give empirical comparisons with relevant baselines on seven challenging datasets for the task of low dimensional semantic category based image retrieval. | Most of the initial metric learning methods were linear, e.g. the semidefinite programming formulation by Xing et al. @cite_61 , the large margin formulation for @math -NN classification by Weinberger et al. @cite_2 , the collapsing classes' formulation (make the distance between vectors of the same class zero and between those of different classes large) of Globerson and Roweis @cite_13 , and the neighbourhood component analysis of Goldberger et al. @cite_22 . | {
"cite_N": [
"@cite_61",
"@cite_13",
"@cite_22",
"@cite_2"
],
"mid": [
"2117154949",
"2104752854",
"2144935315",
""
],
"abstract": [
"Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.",
"We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. We construct a convex optimization problem whose solution generates such a metric by trying to collapse all examples in the same class to a single point and push examples in other classes infinitely far away. We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on a variety of problems. We also discuss how the learned metric may be used to obtain a compact low dimensional feature representation of the original input space, allowing more efficient classification with very little reduction in performance.",
"In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, our classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction.",
""
]
} |
1509.08902 | 2952207198 | We propose a novel algorithm for the task of supervised discriminative distance learning by nonlinearly embedding vectors into a low dimensional Euclidean space. We work in the challenging setting where supervision is with constraints on similar and dissimilar pairs while training. The proposed method is derived by an approximate kernelization of a linear Mahalanobis-like distance metric learning algorithm and can also be seen as a kernel neural network. The number of model parameters and test time evaluation complexity of the proposed method are O(dD) where D is the dimensionality of the input features and d is the dimension of the projection space - this is in contrast to the usual kernelization methods as, unlike them, the complexity does not scale linearly with the number of training examples. We propose a stochastic gradient based learning algorithm which makes the method scalable (w.r.t. the number of training examples), while being nonlinear. We train the method with up to half a million training pairs of 4096 dimensional CNN features. We give empirical comparisons with relevant baselines on seven challenging datasets for the task of low dimensional semantic category based image retrieval. | Towards scalability of metric learning methods, Jain et al. @cite_47 proposed online metric learning and, more recently, Simonyan et al. @cite_48 proposed to use stochastic gradient descent for the face verification problem. | {
"cite_N": [
"@cite_48",
"@cite_47"
],
"mid": [
"1975780119",
"2118979615"
],
"abstract": [
"Several recent papers on automatic face verification have significantly raised the performance bar by developing novel, specialised representations that outperform standard features such as SIFT for this problem. This paper makes two contributions: first, and somewhat surprisingly, we show that Fisher vectors on densely sampled SIFT features, i.e. an off-the-shelf object recognition representation, are capable of achieving state-of-the-art face verification performance on the challenging “Labeled Faces in the Wild” benchmark; second, since Fisher vectors are very high dimensional, we show that a compact descriptor can be learnt from them using discriminative metric learning. This compact descriptor has a better recognition accuracy and is very well suited to large scale identification tasks.",
"Metric learning algorithms can provide useful distance functions for a variety of domains, and recent work has shown good accuracy for problems where the learner can access all distance constraints at once. However, in many real applications, constraints are only available incrementally, thus necessitating methods that can perform online updates to the learned metric. Existing online algorithms offer bounds on worst-case performance, but typically do not perform well in practice as compared to their offline counterparts. We present a new online metric learning algorithm that updates a learned Mahalanobis metric based on LogDet regularization and gradient descent. We prove theoretical worst-case performance bounds, and empirically compare the proposed method against existing online metric learning algorithms. To further boost the practicality of our approach, we develop an online locality-sensitive hashing scheme which leads to efficient updates to data structures used for fast approximate similarity search. We demonstrate our algorithm on multiple datasets and show that it outperforms relevant baselines."
]
} |
1509.08902 | 2952207198 | We propose a novel algorithm for the task of supervised discriminative distance learning by nonlinearly embedding vectors into a low dimensional Euclidean space. We work in the challenging setting where supervision is with constraints on similar and dissimilar pairs while training. The proposed method is derived by an approximate kernelization of a linear Mahalanobis-like distance metric learning algorithm and can also be seen as a kernel neural network. The number of model parameters and test time evaluation complexity of the proposed method are O(dD) where D is the dimensionality of the input features and d is the dimension of the projection space - this is in contrast to the usual kernelization methods as, unlike them, the complexity does not scale linearly with the number of training examples. We propose a stochastic gradient based learning algorithm which makes the method scalable (w.r.t. the number of training examples), while being nonlinear. We train the method with up to half a million training pairs of 4096 dimensional CNN features. We give empirical comparisons with relevant baselines on seven challenging datasets for the task of low dimensional semantic category based image retrieval. | While the distance function learned as above is linear, the problem may be complex and require a nonlinear distance function. A popular way of learning a nonlinear distance function is by kernelizing the metric, inspired by the traditional kernel-based methods KPCA and KLDA: one invokes a representer-theorem-like condition and writes the rows of @math as linear combinations of the input vectors @math (where @math is the matrix of all vectors @math as columns). Noticing that the distance function in Eq. depends only on the dot products of the vectors allows nonlinearizing the algorithm as follows.
Mapping the vectors with a non-linear @math and then using the @math , we can proceed as follows, where @math is the matrix of the @math -mapped vectors and @math is the t-th column of the kernel matrix. Such reasoning was recently used by Mignon and Jurie @cite_52 . While this is a successful way of nonlinearizing the algorithm, it is costly and not scalable, as training requires the whole kernel matrix. | {
"cite_N": [
"@cite_52"
],
"mid": [
"2048110836"
],
"abstract": [
"This paper introduces Pairwise Constrained Component Analysis (PCCA), a new algorithm for learning distance metrics from sparse pairwise similarity/dissimilarity constraints in high dimensional input space, a problem for which most existing distance metric learning approaches are not adapted. PCCA learns a projection into a low-dimensional space where the distance between pairs of data points respects the desired constraints, exhibiting good generalization properties in the presence of high dimensional data. The paper also shows how to efficiently kernelize the approach. PCCA is experimentally validated on two challenging vision tasks, face verification and person re-identification, for which we obtain state-of-the-art results."
]
} |
1509.08960 | 2290003527 | The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of a lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system's efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data. | A few recent papers address the issues of storage and retrieval in dynamic graphs. In our prior work, we proposed DeltaGraph @cite_15 , an index data structure that compactly stores the history of all changes in a dynamic graph and provides efficient snapshot reconstruction. G* @cite_5 stores multiple snapshots compactly by utilizing commonalities.
Chronos @cite_6 @cite_27 is an in-memory system for processing dynamic graphs, with the objective of sharing storage and computation across overlapping snapshots. @cite_42 provide a system for network analytics through labeling graph components. @cite_38 describe a block-oriented and cache-enabled system that exploits spatio-temporal locality to solve temporal neighborhood queries. The authors of @cite_36 also utilize caching to fetch selective portions of temporal graphs, which they refer to as partial views. LLAMA @cite_29 uses multiversioned arrays to represent a mutating graph, but its focus is primarily on in-memory representation. There is also recent work on streaming analytics over dynamic graph data @cite_23 @cite_9 , but it typically focuses on analyzing only the recent activity in the network (typically over a sliding window). Our work in this paper focuses on techniques for a wide variety of temporal graph retrieval and analysis over entire graph histories. | {
"cite_N": [
"@cite_38",
"@cite_36",
"@cite_29",
"@cite_42",
"@cite_9",
"@cite_6",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_15"
],
"mid": [
"2065024050",
"2126064871",
"",
"1527539186",
"",
"2097805736",
"",
"2130747448",
"",
""
],
"abstract": [
"In our increasingly connected and instrumented world, live data recording the interactions between people, systems, and the environment is available in various domains, such as telecommunications and social media. This data often takes the form of a temporally evolving graph, where entities are the vertices and the interactions between them are the edges. An important feature of this graph is that the number of edges it has grows continuously, as new interactions take place. We call such graphs interaction graphs. In this paper we study the problem of storing interaction graphs such that temporal queries on them can be answered efficiently. Since interaction graphs are append-only and edges are added continuously, traditional graph layout and storage algorithms that are batch based cannot be applied directly. We present the design and implementation of a system that caches recent interactions in memory, while quickly placing the expired interactions to disk blocks such that those edges that are likely to be accessed together are placed together. We develop live block formation algorithms that are fast, yet can take advantage of temporal and spatial locality among the edges to optimize the storage layout with the goal of improving query performance. We evaluate the system on synthetic as well as real-world interaction graphs, and show that our block formation algorithms are effective for answering temporal neighborhood queries on the graph. Such queries form a foundation for building more complex online and offline temporal analytics on interaction graphs.",
"In this paper, we deal with the problem of historical query evaluation over evolving social graphs. Historical queries are queries about the social graph in the past. The straightforward way of executing such a query is by first reconstructing the whole social graph at the given time instance or interval, and then evaluating the query on the reconstructed graph. Since social graphs are large, the cost of a complete graph snapshot reconstruction would dominate the cost of historical query execution. Given that many queries are user-centric, i.e., node-centric queries that require access only to a targeted subgraph, we propose deploying partial views instead of full snapshot construction and define conditions that determine when a partial view can be used to evaluate a query. We also propose using a cache of partial views to further reduce the query evaluation cost, and show how partial views can be extended to new views with reduced cost. Finally, we present a greedy solution for the static view selection problem and study its performance experimentally.",
"",
"Graphs are ubiquitous data structures commonly used to represent highly connected data. Many real-world applications, such as social and biological networks, are modeled as graphs. To answer the surging demand for graph data management, many graph database solutions were developed. These databases are commonly classified as NoSQL graph databases, and they provide better support for graph data management than their relational counterparts. However, each of these databases implements its own operational graph data model, and these models differ among the products. Further, there is no commonly agreed conceptual model for graph databases. In this paper, we introduce a novel conceptual model for graph databases. The aim of our model is to provide analysts with a set of simple, well-defined, and adaptable conceptual components to perform rich analysis tasks. These components take into account the evolving aspect of the graph. Our model is analytics-oriented, flexible and incremental, enabling analysis over evolving graph data. The proposed model provides a typing mechanism for the underlying graph, and formally defines the minimal set of data structures and operators needed to analyze the graph.",
"",
"Temporal graphs capture changes in graphs over time and are becoming a subject that attracts increasing interest from the research communities, for example, to understand temporal characteristics of social interactions on a time-evolving social graph. Chronos is a storage and execution engine designed and optimized specifically for running in-memory iterative graph computation on temporal graphs. Locality is at the center of the Chronos design, where the in-memory layout of temporal graphs and the scheduling of the iterative computation on temporal graphs are carefully designed, so that common \"bulk\" operations on temporal graphs are scheduled to maximize the benefit of in-memory data locality. The design of Chronos further explores the interesting interplay among locality, parallelism, and incremental computation in supporting common mining tasks on temporal graphs. The result is a high-performance temporal-graph system that offers up to an order of magnitude speedup for temporal iterative graph mining compared to a straightforward application of existing graph engines on a series of snapshots.",
"",
"Kineograph is a distributed system that takes a stream of incoming data to construct a continuously changing graph, which captures the relationships that exist in the data feed. As a computing platform, Kineograph further supports graph-mining algorithms to extract timely insights from the fast-changing graph structure. To accommodate graph-mining algorithms that assume a static underlying graph, Kineograph creates a series of consistent snapshots, using a novel and efficient epoch commit protocol. To keep up with continuous updates on the graph, Kineograph includes an incremental graph-computation engine. We have developed three applications on top of Kineograph to analyze Twitter data: user ranking, approximate shortest paths, and controversial topic detection. For these applications, Kineograph takes a live Twitter data feed and maintains a graph of edges between all users and hashtags. Our evaluation shows that with 40 machines processing 100K tweets per second, Kineograph is able to continuously compute global properties, such as user ranks, with less than 2.5-minute timeliness guarantees. This rate of traffic is more than 10 times the reported peak rate of Twitter as of October 2011.",
"",
""
]
} |
1509.08960 | 2290003527 | The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of a lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system's efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data. | Temporal graph analytics is an area of growing interest. Evolution of shortest paths in dynamic graphs has been studied by @cite_21 , @cite_53 , and @cite_26 . Evolution of community structures in graphs has been of interest as well @cite_18 @cite_3 @cite_28 @cite_2 .
The change in PageRank on evolving graphs @cite_14 @cite_43 , and the study of change in the centrality of vertices, path lengths of vertex pairs, etc. @cite_33 , also lie under the larger umbrella of temporal graph analysis. @cite_7 provide a taxonomy of analytical tasks over evolving graphs. @cite_44 provide a good reference for studying several dynamic processes modeled over graphs. Our system significantly reduces the effort involved in building and deploying such analytics over large volumes of graph data. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_53",
"@cite_21",
"@cite_3",
"@cite_44",
"@cite_43",
"@cite_2"
],
"mid": [
"",
"1984196269",
"2003362605",
"",
"2154391837",
"2145977038",
"",
"2035057445",
"2162691596",
"1544632947",
"2051381393",
""
],
"abstract": [
"",
"New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks, that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the network topology are hard to be effectively captured in a classical graph model. In this paper, we use and extend a recently proposed graph theoretic model, which helps capture the evolving characteristic of such networks, in order to propose and formally analyze least cost journey (the analog of paths in usual graphs) in a class of dynamic networks, where the changes in the topology can be predicted in advance. Cost measures investigated here are hop count (shortest journeys), arrival date (foremost journeys), and time span (fastest journeys).",
"Link Analysis has been a popular and widely used Web mining technique, especially in the area of Web search. Various ranking schemes based on link analysis have been proposed, of which the PageRank metric has gained the most popularity with the success of Google. Over the last few years, there has been significant work in improving the relevance model of PageRank to address issues such as personalization and topic relevance. In addition, a variety of ideas have been proposed to address the computational aspects of PageRank, both in terms of efficient I/O computations and matrix computations involved in computing the PageRank score. The key challenge has been to perform computation on very large Web graphs. In this paper, we propose a method to incrementally compute PageRank for a large graph that is evolving. We note that although the Web graph evolves over time, its rate of change is rather slow when compared to its size. We exploit the underlying principle of the first order Markov model, on which PageRank is based, to incrementally compute PageRank for the evolving Web graph. Our experimental results show significant speed up in computational cost; the computation involves only the (small) portion of the Web graph that has undergone change. Our approach is quite general, and can be used to incrementally compute (on evolving graphs) any metric that satisfies the first order Markov property.",
"",
"Visualization has proven to be a useful tool for understanding network structures. Yet the dynamic nature of social media networks requires powerful visualization techniques that go beyond static network diagrams. To provide strong temporal network visualization tools, designers need to understand what tasks the users have to accomplish. This paper describes a taxonomy of temporal network visualization tasks. We identify the 1) entities, 2) properties, and 3) temporal features, which were extracted by surveying 53 existing temporal network visualization systems. By building and examining the task taxonomy, we report which tasks are well covered by existing systems and make suggestions for designing future visualization tools. The feedback from 12 network analysts helped refine the taxonomy.",
"Real-world social networks from a variety of domains can naturally be modelled as dynamic graphs. However, approaches to detecting communities have largely focused on identifying communities in static graphs. Recently, researchers have begun to consider the problem of tracking the evolution of groups of users in dynamic scenarios. Here we describe a model for tracking the progress of communities over time in a dynamic network, where each community is characterised by a series of significant evolutionary events. This model is used to motivate a community-matching strategy for efficiently identifying and tracking dynamic communities. Evaluations on synthetic graphs containing embedded events demonstrate that this strategy can successfully track communities over time in volatile networks. In addition, we describe experiments exploring the dynamic communities detected in a real mobile operator network containing millions of users.",
"",
"Graph-like data appears in many applications, such as social networks, internet hyperlinks, roadmaps, etc. and in most cases, graphs are dynamic, evolving through time. In this work, we study the problem of efficient shortest-path query evaluation on evolving social graphs. Our shortest-path queries are \"temporal\": they can refer to any time-point or time-interval in the graph's evolution, and corresponding valid answers should be returned. To efficiently support this type of temporal query, we extend the traditional Dijkstra's algorithm to compute shortest-path distance(s) for a time-point or a time-interval. To speed up query processing, we explore preprocessing index techniques such as Contraction Hierarchies (CH). Moreover, we examine how to maintain the evolving graph along with the index by utilizing temporal partition strategies. Experimental evaluations on real world datasets and large synthetic datasets demonstrate the feasibility and scalability of our proposed efficient techniques and optimizations.",
"Finding patterns of social interaction within a population has wide-ranging applications including: disease modeling, cultural and information transmission, and behavioral ecology. Social interactions are often modeled with networks. A key characteristic of social interactions is their continual change. However, most past analyses of social networks are essentially static in that all information about the time that social interactions take place is discarded. In this paper, we propose a new mathematical and computational framework that enables analysis of dynamic social networks and that explicitly makes use of information about when social interactions occur.",
"The availability of large data sets has allowed researchers to uncover complex properties such as large scale fluctuations and heterogeneities in many networks, which have led to the breakdown of standard theoretical frameworks and models. Until recently these systems were considered as haphazard sets of points and connections. Recent advances have generated a vigorous research effort in understanding the effect of complex connectivity patterns on dynamical phenomena. For example, a vast number of everyday systems, from the brain to ecosystems, power grids and the Internet, can be represented as large complex networks. This new and recent account presents a comprehensive explanation of these effects.",
"In this paper, we analyze the efficiency of Monte Carlo methods for incremental computation of PageRank, personalized PageRank, and similar random walk based methods (with focus on SALSA), on large-scale dynamically evolving social networks. We assume that the graph of friendships is stored in distributed shared memory, as is the case for large social networks such as Twitter. For global PageRank, we assume that the social network has n nodes, and m adversarially chosen edges arrive in a random order. We show that with a reset probability of e, the expected total work needed to maintain an accurate estimate (using the Monte Carlo method) of the PageRank of every node at all times is [EQUATION]. This is significantly better than all known bounds for incremental PageRank. For instance, if we naively recompute the PageRanks as each edge arrives, the simple power iteration method needs [EQUATION] total time and the Monte Carlo method needs O(mn e) total time; both are prohibitively expensive. We also show that we can handle deletions equally efficiently. We then study the computation of the top k personalized PageRanks starting from a seed node, assuming that personalized PageRanks follow a power-law with exponent α q ln n random walks starting from every node for large enough constant q (using the approach outlined for global PageRank), then the expected number of calls made to the distributed social network database is O(k (R(1-α) α)). We also present experimental results from the social networking site, Twitter, verifying our assumptions and analyses. The overall result is that this algorithm is fast enough for real-time queries over a dynamic social network.",
""
]
} |
1509.08960 | 2290003527 | The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of a lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system's efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data. | Temporal data management for relational databases was a topic of active research in the 80s and early 90s. Snapshot index @cite_47 is an I/O-optimal solution to the problem of snapshot retrieval for transaction-time databases.
Salzberg and Tsotras @cite_35 present a comprehensive survey of temporal data indexing techniques, and discuss two extreme approaches to supporting snapshot retrieval queries, referred to as the copy and the log approaches. While the copy approach relies on storing new copies of a snapshot upon every point of change in the database, the log approach relies on storing everything through changes. Their hybrid is often referred to as the Copy+Log approach. We omit a detailed discussion of the work on temporal databases, and refer the interested reader to a representative set of references @cite_30 @cite_32 @cite_10 @cite_39 @cite_31 @cite_46 @cite_35 . Other data structures, such as interval trees @cite_40 and segment trees @cite_52 , can also be used for storing temporal information. Temporal aggregation in scientific array databases @cite_50 is another related topic of interest, but the challenges there are significantly different. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_31",
"@cite_46",
"@cite_32",
"@cite_52",
"@cite_39",
"@cite_40",
"@cite_50",
"@cite_47",
"@cite_10"
],
"mid": [
"2070664739",
"2103465290",
"1507944889",
"2401328826",
"1841612980",
"2050252966",
"",
"2114123275",
"",
"",
"2113056857"
],
"abstract": [
"Numerous researchers in a handful of disciplines have been concerned, in recent years, with the special role (or roles) that time seems to play in information processing. Designers of computerized information systems have had to deal with the fact that when an information item becomes outdated, it need not be forgotten. Researchers in artificial intelligence have pointed to the need for a realistic world model to include representations not only for snapshot descriptions of the real world, but also for histories, or the evolution of such descriptions over time. Many logicians have regarded classical logic as an awkward tool for capturing the relationships and the meaning of statements involving temporal reference, and have proposed special \"temporal logics\" for this purpose. Finally, the analysis of tensed statements in natural language is a principal concern of researchers in linguistics.",
"This paper compares different indexing techniques proposed for supporting efficient access to temporal data. The comparison is based on a collection of important performance criteria, including the space consumed, update processing, and query time for representative queries. The comparison is based on worst-case analysis, hence no assumptions on data distribution or query frequencies are made. When a number of methods have the same asymptotic worst-case behavior, features in the methods that affect average case behavior are discussed. Additional criteria examined are the pagination of an index, the ability to cluster related data together, and the ability to efficiently separate old from current data (so that larger archival storage media such as write-once optical disks can be used). The purpose of the paper is to identify the difficult problems in accessing temporal data and describe how the different methods aim to solve them. A general lower bound for answering basic temporal queries is also introduced.",
"From the Publisher: Temporal database systems are systems that provide special support for storing, querying, and updating historical and/or future data. Current DBMSs provide essentially no temporal features at all, but this situation is likely to change soon for a variety of reasons; in fact, temporal databases are virtually certain to become important sooner rather than later, in the commercial world as well as in academia. This book provides an in-depth description of the foundations and principles on which those temporal DBMSs will be built. These foundations and principles are firmly rooted in the relational model of data; thus, they represent an evolutionary step, not a revolutionary one, and they will stand the test of time. This book is arranged in three parts and a set of appendixes: * Preliminaries: Provides a detailed review of the relational model, and an overview of the Tutorial D language. * Laying the Foundations: Explains basic temporal data problems and introduces fundamental constructs and operators for addressing those problems. * Building on the Foundations: Applies the material of the previous part to issues of temporal database design, temporal constraints, temporal query and update, and much more. * Appendixes: Include annotated references and bibliography, implementation considerations, and other topics. KEY FEATURES: * Describes a truly relational approach to the temporal data problem. * Addresses implementation as well as model issues. * Covers recent research on new database design techniques, a new normal form, new relational operators, new update operators, a new approach to the problem of granularity, support for cyclic point types, and other matters. * Includes review questions and exercises in every chapter. * Suitable for both reference and tutorial purposes.",
"In this paper, we present a temporal extension of the SPARQL query language for RDF graphs. The new language is based on a temporal RDF database model employing triple timestamping with temporal elements, which best preserves the scalability property enjoyed by triple storage technologies, especially in a multi-temporal setting. The proposed SPARQL extensions are aimed at embedding several features of the TSQL2 consensual language designed for temporal relational databases.",
"",
"The segment tree is a well-known internal data structure with numerous applications in computational geometry. It allows the dynamical maintenance of a set of intervals such that the intervals enclosing a query point can be found efficiently (point enclosure search). In this paper we transfer the underlying principle of the segment tree in a nontrivial way to secondary storage and arrive at the EST--an external file structure with the same functionality and the following properties: (1) Point enclosure searches are very efficient--only very few pages are accessed that are not filled to more than 50% with result intervals. (2) A page filling of 50% is guaranteed--on the average it will be around 70%. Although the segment tree represents, in the worst case, each interval by a logarithmic number of fragments, in practical cases fragmentation remains low and the storage requirements about linear. (3) The EST is balanced and the update algorithms are efficient. (4) Unlike many other file structures for spatial objects the EST has no problems with an arbitrary density, that is, an arbitrarily large number of intervals covering any point of the line. Furthermore, the EST can be used as a file structure constructor in the following sense: Let there be a file structure X supporting searches for objects with property x and suppose it is necessary to maintain a collection of objects with associated (e.g., time) intervals. Then an EST-X structure that supports searches for objects with property x present at time t can be built. This suggests using the EST as a building block in the implementation of temporal database systems. Other applications include the one-dimensional indexing of collections of spatial objects in two or more dimensions. More generally, this paper shows techniques for mapping internal tree structures with node lists (other examples: range tree, interval tree) to secondary memory. 
In this context an intriguing theoretical problem, the cover-balancing problem, is solved: Given a tree whose nodes have associated weights partitioned into subtrees whose weights must lie in a certain range, maintain this partition under weight changes at arbitrary nodes. This is in contrast to classical balancing problems where updates occur only at the leaves.",
"",
"The authors present a space- and I O-optimal external-memory data structure for answering stabbing queries on a set of dynamically maintained intervals. The data structure settles an open problem in databases and I O algorithms by providing the first optimal external-memory solution to the dynamic interval management problem, which is a special case of 2-dimensional range searching and a central problem for object-oriented and temporal databases and for constraint logic programming. The data structure simultaneously uses optimal linear space (that is, O(N B) blocks of disk space) and achieves the optimal O(log sub B N+T B) I O query bound and O(log sub B N) I O update bound, where B is the I O block size and T the number of elements in the answer to a query. The structure is also the first optimal external data structure for a 2-dimensional range searching problem that has worst-case as opposed to amortized update bounds. Part of the data structure uses a novel balancing technique for efficient worst-case manipulation of balanced trees, which is of independent interest.",
"",
"",
"A temporal database contains time-varying data. In a real-time database transactions have deadlines or timing constraints. In this paper we review the substantial research in these two previously separate areas. First we characterize the time domain; then we investigate temporal and real-time data models. We evaluate temporal and real-time query languages along several dimensions. We examine temporal and real-time DBMS implementation. Finally, we summarize major research accomplishments to date and list several unanswered research questions. >"
]
} |
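The segment-tree and interval-management abstracts above all revolve around the same primitive: stabbing (point-enclosure) queries over a set of intervals. As an illustration, the in-memory sketch below uses a centered interval tree, a close relative of the segment tree those papers map to secondary storage; the class and method names are ours, not from the cited works.

```python
class IntervalTree:
    """Centered interval tree answering stabbing (point-enclosure) queries:
    given a query point q, report all stored intervals [l, r] with l <= q <= r.
    An in-memory sketch of the primitive the external structures above
    (EST, dynamic interval management) support on disk."""

    def __init__(self, intervals):
        self.empty = not intervals
        if self.empty:
            return
        endpoints = sorted(p for iv in intervals for p in iv)
        self.center = endpoints[len(endpoints) // 2]  # median endpoint
        here, left, right = [], [], []
        for iv in intervals:
            if iv[1] < self.center:
                left.append(iv)        # entirely below the center
            elif iv[0] > self.center:
                right.append(iv)       # entirely above the center
            else:
                here.append(iv)        # crosses (or touches) the center
        self.by_left = sorted(here)                          # ascending left ends
        self.by_right = sorted(here, key=lambda iv: -iv[1])  # descending right ends
        self.left = IntervalTree(left)
        self.right = IntervalTree(right)

    def stab(self, q):
        if self.empty:
            return []
        if q < self.center:
            # Crossing intervals contain q iff their left end is <= q.
            out = [iv for iv in self.by_left if iv[0] <= q]
            return out + self.left.stab(q)
        if q > self.center:
            # Crossing intervals contain q iff their right end is >= q.
            out = [iv for iv in self.by_right if iv[1] >= q]
            return out + self.right.stab(q)
        return list(self.by_left)  # q == center: every crossing interval matches
```

For example, `IntervalTree([(1, 5), (2, 3), (4, 9)]).stab(4)` reports `(1, 5)` and `(4, 9)`. Scanning the sorted endpoint lists with early exit (instead of the comprehensions above) would make the reporting output-sensitive, which is the property the external variants preserve on disk.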
1509.08360 | 2952387482 | Spectral embedding based on the Singular Value Decomposition (SVD) is a widely used "preprocessing" step in many learning tasks, typically leading to dimensionality reduction by projecting onto a number of dominant singular vectors and rescaling the coordinate axes (by a predefined function of the singular value). However, the number of such vectors required to capture problem structure grows with problem size, and even partial SVD computation becomes a bottleneck. In this paper, we propose a low-complexity compressive spectral embedding algorithm, which employs random projections and finite order polynomial expansions to compute approximations to SVD-based embedding. For an m × n matrix with T non-zeros, its time complexity is O((T+m+n)log(m+n)), and the embedding dimension is O(log(m+n)), both of which are independent of the number of singular vectors whose effect we wish to capture. To the best of our knowledge, this is the first work to circumvent this dependence on the number of singular vectors for general SVD-based embeddings. The key to sidestepping the SVD is the observation that, for downstream inference tasks such as clustering and classification, we are only interested in using the resulting embedding to evaluate pairwise similarity metrics derived from the Euclidean norm, rather than capturing the effect of the underlying matrix on arbitrary vectors as a partial SVD tries to do. Our numerical results on network datasets demonstrate the efficacy of the proposed method, and motivate further exploration of its application to large-scale inference tasks. | Thus, while randomized projections are extensively used in the embedding literature, to the best of our knowledge, the present paper is the first to develop a general compressive framework for spectral embeddings derived from the SVD.
It is interesting to note that methods similar to ours have been used in a different context, to estimate the density of eigenvalues of a large Hermitian matrix @cite_18 , @cite_4 . These methods use a polynomial approximation of indicator functions @math and random projections to compute an approximate histogram of the number of eigenvalues across different bands of the spectrum: @math . | {
"cite_N": [
"@cite_18",
"@cite_4"
],
"mid": [
"2024211806",
"2952283347"
],
"abstract": [
"Chebyshev polynomial approximations are an efficient and numerically stable way to calculate properties of the very large Hamiltonians important in computational condensed matter physics. The present paper derives an optimal kernel polynomial which enforces positivity of density of states and spectral estimates, achieves the best energy resolution, and preserves normalization. This kernel polynomial method (KPM) is demonstrated for electronic structure and dynamic magnetic susceptibility calculations. For tight binding Hamiltonians of Si, we show how to achieve high precision and rapid convergence of the cohesive energy and vacancy formation energy by careful attention to the order of approximation. For disordered XXZ-magnets, we show that the KPM provides a simpler and more reliable procedure for calculating spectral functions than Lanczos recursion methods. Polynomial approximations to Fermi projection operators are also proposed.",
"Estimating the number of eigenvalues located in a given interval of a large sparse Hermitian matrix is an important problem in certain applications and it is a prerequisite of eigensolvers based on a divide-and-conquer paradigm. Often an exact count is not necessary and methods based on stochastic estimates can be utilized to yield rough approximations. This paper examines a number of techniques tailored to this specific task. It reviews standard approaches and explores new ones based on polynomial and rational approximation filtering combined with a stochastic procedure."
]
} |
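The eigenvalue-counting technique mentioned in this row's related work (a polynomial approximation of an indicator function combined with random probe vectors) can be sketched in a few lines of NumPy. This is a generic kernel-polynomial-method sketch, assuming the spectrum of A lies in [-1, 1]; the function names and default parameters are ours, not from the cited papers.

```python
import numpy as np

def indicator_coeffs(a, b, degree):
    """Chebyshev coefficients of the indicator of [a, b] on [-1, 1],
    with Jackson damping to suppress Gibbs oscillations at the edges."""
    ta, tb = np.arccos(a), np.arccos(b)
    k = np.arange(1, degree + 1)
    c = np.empty(degree + 1)
    c[0] = (ta - tb) / np.pi
    c[1:] = 2.0 / (k * np.pi) * (np.sin(k * ta) - np.sin(k * tb))
    N = degree + 1
    j = np.arange(degree + 1)
    damp = ((N - j) * np.cos(np.pi * j / N)
            + np.sin(np.pi * j / N) / np.tan(np.pi / N)) / N
    return c * damp

def estimate_eigcount(A, a, b, degree=120, n_probe=20, seed=0):
    """Hutchinson-style estimate of the number of eigenvalues of A in [a, b],
    using only matrix-vector products with A (spectrum assumed in [-1, 1])."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    c = indicator_coeffs(a, b, degree)
    total = 0.0
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        t_prev, t_cur = z, A @ z               # T_0(A) z and T_1(A) z
        acc = c[0] * (z @ z) + c[1] * (z @ t_cur)
        for k in range(2, degree + 1):
            # Chebyshev three-term recurrence: T_k = 2 A T_{k-1} - T_{k-2}.
            t_prev, t_cur = t_cur, 2.0 * (A @ t_cur) - t_prev
            acc += c[k] * (z @ t_cur)
        total += acc
    return total / n_probe
```

For a matrix with known spectrum, e.g. `np.diag(np.linspace(-0.9, 0.9, 10))`, `estimate_eigcount(A, -0.05, 0.95)` comes out close to 5, the number of eigenvalues in that interval, without ever computing an eigendecomposition.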
1509.08654 | 2109176650 | Trickle is a polite gossip algorithm for managing communication traffic. It is of particular interest in low-power wireless networks for reducing the amount of control traffic, as in routing protocols (RPL), or reducing network congestion, as in multicast protocols (MPL). Trickle is used at the network or application level, and relies on up-to-date information on the activity of neighbors. This makes it vulnerable to interference from the media access control layer, which we explore in this paper. We present several scenarios in which the MAC layer in low-power radios violates Trickle timing. As a case study, we analyze the impact of CSMA/CA with ContikiMAC on Trickle’s performance. Additionally, we propose a solution called Cleansing that resolves these issues. | The Trickle algorithm was initially designed as an efficient method to disseminate software updates in low-power networks @cite_6 . However, since it only specifies when messages should be sent, and not what, it has been adopted in many other protocols @cite_22 , such as network reprogramming @cite_13 , routing @cite_7 @cite_1 and data dissemination @cite_0 . Trickle was recently standardized @cite_9 and used as a basis for the Multicast Protocol for Low power and Lossy Networks (MPL) @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_13"
],
"mid": [
"2281129696",
"2126514744",
"2160343401",
"2298500030",
"2325877089",
"2058632086",
"71688117",
"2100395498"
],
"abstract": [
"This document specifies the Multicast Protocol for Low-Power and Lossy Networks (MPL), which provides IPv6 multicast forwarding in constrained networks. MPL avoids the need to construct or maintain any multicast forwarding topology, disseminating messages to all MPL Forwarders in an MPL Domain. MPL has two modes of operation. One mode uses the Trickle algorithm to manage control-plane and data-plane message transmissions and is applicable for deployments with few multicast sources. The other mode uses classic flooding. By providing both modes and parameterization of the Trickle algorithm, an MPL implementation can be used in a variety of multicast deployments and can trade between dissemination latency and transmission efficiency.",
"The wireless sensor network community approached networking abstractions as an open question, allowing answers to emerge with time and experience. The Trickle algorithm has become a basic mechanism used in numerous protocols and systems. Trickle brings nodes to eventual consistency quickly and efficiently while remaining remarkably robust to variations in network density, topology, and dynamics. Instead of flooding a network with packets, Trickle uses a \"polite gossip\" policy to control send rates so each node hears just enough packets to stay consistent. This simple mechanism enables Trickle to scale to 1000-fold changes in network density, reach consistency in seconds, and require only a few bytes of state yet impose a maintenance cost of a few sends an hour. Originally designed for disseminating new code, experience has shown Trickle to have much broader applicability, including route maintenance and neighbor discovery. This paper provides an overview of the research challenges wireless sensor networks face, describes the Trickle algorithm, and outlines several ways it is used today.",
"This paper presents and evaluates two principles for wireless routing protocols. The first is datapath validation: data traffic quickly discovers and fixes routing inconsistencies. The second is adaptive beaconing: extending the Trickle algorithm to routing control traffic reduces route repair latency and sends fewer beacons. We evaluate datapath validation and adaptive beaconing in CTP Noe, a sensor network tree collection protocol. We use 12 different testbeds ranging in size from 20--310 nodes, comprising seven platforms, and six different link layers, on both interference-free and interference-prone channels. In all cases, CTP Noe delivers > 90 of packets. Many experiments achieve 99.9 . Compared to standard beaconing, CTP Noe sends 73 fewer beacons while reducing topology repair latency by 99.8 . Finally, when using low-power link layers, CTP Noe has duty cycles of 3 while supporting aggregate loads of 30 packets minute.",
"The Trickle algorithm allows wireless nodes to exchange information in a highly robust, energy efficient, simple, and scalable manner. Dynamically adjusting transmission windows allows Trickle to spread new information on the scale of link-layer transmission times while sending only a few messages per hour when information does not change. A simple suppression nechanism and transmission point selection allows Trickle's communication rate to scale logarithmically with density. This document describes Trickle and considerations in its use.",
"Low-Power and Lossy Networks (LLNs) are a class of network in which both the routers and their interconnect are constrained. LLN routers typically operate with constraints on processing power, memory, and energy (battery power). Their interconnects are characterized by high loss rates, low data rates, and instability. LLNs are comprised of anything from a few dozen to thousands of routers. Supported traffic flows include point-to-point (between devices inside the LLN), point- to-multipoint (from a central control point to a subset of devices inside the LLN), and multipoint-to-point (from devices inside the LLN towards a central control point). This document specifies the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL), which provides a mechanism whereby multipoint-to-point traffic from devices inside the LLN towards a central control point as well as point-to- multipoint traffic from the central control point to the devices inside the LLN are supported. Support for point-to-point traffic is also available. [STANDARDS-TRACK]",
"We present Trickle, an algorithm for propagating and maintaining code updates in wireless sensor networks. Borrowing techniques from the epidemic gossip, scalable multicast, and wireless broadcast literature, Trickle uses a \"polite gossip\" policy, where motes periodically broadcast a code summary to local neighbors but stay quiet if they have recently heard a summary identical to theirs. When a mote hears an older summary than its own, it broadcasts an update. Instead of flooding a network with packets, the algorithm controls the send rate so each mote hears a small trickle of packets, just enough to stay up to date. We show that with this simple mechanism, Trickle can scale to thousand-fold changes in network density, propagate new code in the order of seconds, and impose a maintenance cost on the order of a few sends an hour.",
"In this paper, we present CodeDrip, a data dissemination protocol for Wireless Sensor Networks. Dissemination is typically used to query nodes, send commands, and reconfigure the network. CodeDrip utilizes Network Coding to improve energy efficiency, reliability, and speed of dissemination. Network coding allows recovery of lost packets by combining the received packets thereby making dissemination robust to packet losses. While previous work in combining network coding and dissemination focused on bulk data dissemination, we optimize the design of CodeDrip for dissemination of small values. We perform extensive evaluation of CodeDrip on simulations and a large-scale testbed and compare against the implementations of Drip, DIP and DHV protocols. Results show that CodeDrip is faster, smaller and sends fewer messages than Drip, DHV and DIP protocols.",
"We present DIP, a data discovery and dissemination protocol for wireless networks. Prior approaches, such as Trickle or SPIN, have overheads that scale linearly with the number of data items. For T items, DIP can identify new items with 0(log(T)) packets while maintaining a O(l) detection latency. To achieve this performance in a wide spectrum of network configurations, DIP uses a hybrid approach of randomized scanning and tree-based directed searches. By dynamically selecting which of the two algorithms to use, DIP outperforms both in terms of transmissions and speed. Simulation and testbed experiments show that DIP sends 20-60 fewer packets than existing protocols and can be 200 faster, while only requiring O(log(log(T))) additional state per data item."
]
} |
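Several of the abstracts above refer to the same core mechanism. As a reference point for the discussion of suppression and the redundancy constant k, here is a single-node sketch of the Trickle timer as specified in RFC 6206; the event-driven framing and all class and method names are ours.

```python
import random

class TrickleTimer:
    """One node's Trickle state (RFC 6206): the interval I doubles from Imin
    up to Imax = Imin * 2**doublings; within each interval a transmission
    scheduled at a random time t in [I/2, I] is suppressed if at least k
    consistent messages were heard before t."""

    def __init__(self, imin, doublings, k):
        self.imin = imin
        self.imax = imin * (2 ** doublings)
        self.k = k
        self.interval = imin
        self._begin_interval()

    def _begin_interval(self):
        self.counter = 0  # consistent transmissions heard this interval
        self.t = random.uniform(self.interval / 2.0, self.interval)

    def hear_consistent(self):
        self.counter += 1

    def hear_inconsistent(self):
        # An inconsistency resets the interval to Imin (if not already there).
        if self.interval > self.imin:
            self.interval = self.imin
            self._begin_interval()

    def fire(self):
        """At time t: transmit only if fewer than k consistent messages heard."""
        return self.counter < self.k

    def expire(self):
        # Interval over: double it (capped at Imax) and start a new one.
        self.interval = min(2.0 * self.interval, self.imax)
        self._begin_interval()
```

With k = 2, a node that hears two consistent neighbors within an interval stays silent. Because k is fixed network-wide, this is exactly the suppression that, as the surrounding rows note, favors sparsely connected nodes over densely connected ones.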
1509.08654 | 2109176650 | Trickle is a polite gossip algorithm for managing communication traffic. It is of particular interest in low-power wireless networks for reducing the amount of control traffic, as in routing protocols (RPL), or reducing network congestion, as in multicast protocols (MPL). Trickle is used at the network or application level, and relies on up-to-date information on the activity of neighbors. This makes it vulnerable to interference from the media access control layer, which we explore in this paper. We present several scenarios in which the MAC layer in low-power radios violates Trickle timing. As a case study, we analyze the impact of CSMA/CA with ContikiMAC on Trickle’s performance. Additionally, we propose a solution called Cleansing that resolves these issues. | Various aspects of the Trickle algorithm have been studied so far. For example, in @cite_8 @cite_4 , Trickle has been observed to be unfair in terms of load share - certain nodes transmit more often than others. Trickle in the absence of a MAC layer has previously been analyzed, e.g., @cite_10 @cite_11 @cite_15 . Similarly, CSMA/CA for low-power networks has been analyzed without considering the upper layers, e.g., @cite_18 @cite_5 . Finally, the potential problematic interaction between Trickle-based data dissemination and radio duty cycling has been sketched in @cite_3 , along with potential energy efficiency improvements by reducing the scope of single-hop broadcasts. However, to the best of the authors' knowledge, a detailed analysis of the interaction between Trickle and the MAC layer, consisting of both CSMA/CA and radio duty cycling, their combined performance and potential problems in specific topologies, has not yet been conducted, which is what this paper aims to do. The analysis and the results presented in this paper explain the simulation results for MPL in @cite_23 @cite_2 , and the poor performance for small Trickle interval lengths. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"111780472",
"2088079708",
"",
"2135869921",
"1966381945",
"2005535065",
"1981159287",
"2077379044",
"1477511185",
"2135912631"
],
"abstract": [
"Low-power wireless devices must keep their radio transceivers off as much as possible to reach a low power consumption, but must wake up often enough to be able to receive communication from their neighbors. This report describes the ContikiMAC radio duty cycling mechanism, the default radio duty cycling mechanism in Contiki 2.5, which uses a power efficient wake-up mechanism with a set of timing constraints to allow device to keep their transceivers off. With ContikiMAC, nodes can participate in network communication yet keep their radios turned off for roughly 99 of the time. This report describes the ContikiMAC mechanism, measures the energy consumption of individual ContikiMAC operations, and evaluates the efficiency of the fast sleep and phase-lock optimizations.",
"RPL (IPv6 Routing Protocol for Low Power and Lossy networks) is a routing protocol recently standardized by the IETF. RPL has been designed to operate in energy-constrained networks with thousands of nodes, and therefore it is one of the most promising candidate routing protocols for Advanced Metering Infrastructure (AMI) networks. In this paper a performance evaluation of RPL is presented. An extensive study of the protocol is carried out with particular focus on Trickle, the algorithm adopted to control routing update distribution across the network. The performance of the protocol is analyzed considering different Trickle parameters in order to capture their impact on route formation and node power consumption. Results highlight that the nondeterministic nature of Trickle can lead to sub-optimal route formation especially when high message suppression is per-formed. In order to mitigate this issue, an enhanced version of the protocol, namely Trickle-F, is proposed in order to guarantee fair broadcast suppression. Trickle-F is demonstrated to be effective in obtaining more efficient routes with the same power consumption of the original version.",
"",
"Although gossiping protocols for wireless sensor networks (sensornets) excel at minimizing the number of generated packets, they leave room for improvement when it comes to the end-to-end performance, namely energy efficiency. As a step in remedying this situation, we propose NarrowCast: a new primitive that can be provided by asynchronous duty-cycling link layers as a substitute for broadcasting for gossiping protocols. The principal idea behind the NarrowCast primitive is to allow a sensor node to transmit to a fraction of its neighbors, which enables controlling energy expenditures and reliability. We discuss methods of approximating the primitive in practice and integrating it with gossiping protocols. We also evaluate implementations of the approximations with Trickle, a state-of-the-art gossiping protocol, and X-MAC, a popular link layer based on low-power listening. The results show thatwithout sacrificing reliabilitygossiping using even the simplest approximations of NarrowCast can considerably outperform gossiping based on broadcasting in energy efficiency.",
"“The Trickle Algorithm” is conceived as an adaptive mechanism for allowing efficient and reliable information sharing among nodes, communicating across a lossy and shared medium. Its basic principle is, for each node, to monitor transmissions from its neighbours, compare what it receives with its current state, and schedule future transmissions accordingly: if an inconsistency of information is detected, or if few or no neighbours have transmitted consistent information “recently”, the next transmission is scheduled “soon” - and, in case consistent information from a sufficient number of neighbours is received, the next transmission is scheduled to be “later”. Developed originally as a means of distributing firmware updates among sensor devices, this algorithm has found use also for distribution of routing information in the routing protocol RPL, standardised within the IETF for maintaining a routing topology for low-power and lossy networks (LLNs). Its use is also proposed in a protocol for multicast in LLNs, denoted “Multicast Forwarding Using Trickle”. This paper studies the performance of the Trickle algorithm, as it is used in that multicast protocol.",
"In wireless sensor deployments, network layer multicast can be used to improve the bandwidth and energy efficiency for a variety of applications, such as service discovery or network management. However, despite efforts to adopt IPv6 in networks of constrained devices, multicast has been somewhat overlooked. The Multicast Forwarding Using Trickle (Trickle Multicast) internet draft is one of the most noteworthy efforts. The specification of the IPv6 routing protocol for low power and lossy networks (RPL) also attempts to address the area but leaves many questions unanswered. In this paper we highlight our concerns about both these approaches. Subsequently, we present our alternative mechanism, called stateless multicast RPL forwarding algorithm (SMRF), which addresses the aforementioned drawbacks. Having extended the TCP IP engine of the Contiki embedded operating system to support both trickle multicast (TM) and SMRF, we present an in-depth comparison, backed by simulated evaluation as well as by experiments conducted on a multi-hop hardware testbed. Results demonstrate that SMRF achieves significant delay and energy efficiency improvements at the cost of a small increase in packet loss. The outcome of our hardware experiments show that simulation results were realistic. Lastly, we evaluate both algorithms in terms of code size and memory requirements, highlighting SMRF's low implementation complexity. Both implementations have been made available to the community for adoption.",
"In this paper, we analyse the impact of the Contiki Operating System (OS), and its Carrier Sense Multiple Access and Collision Avoidance (CSMA-CA) implementation on an IEEE 802.15.4 node's throughput and wireless channel utilization. The analysis is based on Contiki's Rime networking protocol stack, and its target is to determine an upper bound for the stated metrics. We explain that in Contiki with CSMA-CA as a MAC layer protocol, a node's throughput is limited to 8.1 kbps, at maximum, even without power saving features. In order to maximize a node's transmission capability, we modified Contiki's CSMA-CA implementation. A number of simulations are performed, and it is observed that with our modifications node throughput reaches 45 kbps, at maximum. Simulation results for estimating the channel capacity with our modified CSMA-CA MAC layer protocol show that the average per-node delay is low when the offered data load remains below 100 kbps. For an offered load of 100 kbps, the channel drops almost 20 of packets. Going beyond 100 kbps results in large latencies and significant packet loss. Results presented in this paper can serve as basis for the available bandwidth estimation in Wireless Sensor Networks (WSNs), QoS-based routing, and design of congestion control algorithm.",
"As the use of wireless sensor networks increases, the need for (energy-)efficient and reliable broadcasting algorithms grows. Ideally, a broadcasting algorithm should have the ability to quickly disseminate data, while keeping the number of transmissions low. In this paper we develop a model describing the message count in large-scale wireless sensor networks. We focus our attention on the popular Trickle algorithm, which has been proposed as a suitable communication protocol for code maintenance and propagation in wireless sensor networks. Besides providing a mathematical analysis of the algorithm, we propose a generalized version of Trickle, with an additional parameter defining the length of a listen-only period. This generalization proves to be useful for optimizing the design and usage of the algorithm. For single-cell networks we show how the message count increases with the size of the network and how this depends on the Trickle parameters. Furthermore, we derive distributions of inter-broadcasting times and investigate their asymptotic behavior. Our results prove conjectures made in the literature concerning the effect of a listen-only period. Additionally, we develop an approximation for the expected number of transmissions in multi-cell networks. All results are validated by simulations.",
"The Trickle algorithm has proven to be of great benefit to the Wireless Sensor Networking area. It has shown general applicability in this field, e.g. for code distribution to smart objects and routing information distribution between smart objects. Up to now analysis of the algorithm has focussed on simulation studies and measurement campaigns. This paper introduces an analytical models for the algorithm’s behaviour for the time to consistency. The model is compared with simulation results for a set of network topologies and enables to discover efficient settings of the algorithm for various application areas, such as logistics.",
"Trickle is a transmission scheduling algorithm developed for wireless sensor networks. The Trickle algorithm determines whether (and when) a message can be transmitted. Therefore, Trickle operation is critical for performance parameters such as energy consumption and available bandwidth. This letter presents an analytical model for the message count of a static Trickle-based network under steady state conditions, as a function of a parameter called the redundancy constant and the average node degree. The model presented is validated by simulation results."
]
} |
1509.08664 | 2167236381 | Low-power wireless networks play an important role in the Internet of Things. Typically, these networks consist of a very large number of lossy and low-capacity devices, challenging the current state of the art in protocol design. In this context the Trickle algorithm plays an important role, serving as the basic mechanism for message dissemination in notable protocols such as RPL and MPL. While Trickle's broadcast suppression mechanism has been proven to be efficient, recent work has shown that it is intrinsically unfair in terms of load distribution and that its performance relies strongly on network topology. This can lead to increased end-to-end delays (MPL), or creation of sub-optimal routes (RPL). Furthermore, as highlighted in this work, there is no clear consensus within the research community about what the proper parameter settings of the suppression mechanism should be. We propose an extension to the Trickle algorithm, called adaptive-k, which allows nodes to individually adapt their suppression mechanism to local node density. Supported by analysis and a case study with RPL, we show that this extension allows for an easier configuration of Trickle, making it more robust to network topology. | However, little is known about the influence of @math on other QoS measures such as hop-count, end-to-end delay and load distribution. The authors of @cite_28 study the performance of Trickle as a flooding mechanism compared to classic flooding and multipoint relaying. They conclude that while Trickle can outperform both protocols, its performance is highly sensitive to the choice of parameters. | {
"cite_N": [
"@cite_28"
],
"mid": [
"1966381945"
],
"abstract": [
"“The Trickle Algorithm” is conceived as an adaptive mechanism for allowing efficient and reliable information sharing among nodes, communicating across a lossy and shared medium. Its basic principle is, for each node, to monitor transmissions from its neighbours, compare what it receives with its current state, and schedule future transmissions accordingly: if an inconsistency of information is detected, or if few or no neighbours have transmitted consistent information “recently”, the next transmission is scheduled “soon” - and, in case consistent information from a sufficient number of neighbours is received, the next transmission is scheduled to be “later”. Developed originally as a means of distributing firmware updates among sensor devices, this algorithm has found use also for distribution of routing information in the routing protocol RPL, standardised within the IETF for maintaining a routing topology for low-power and lossy networks (LLNs). Its use is also proposed in a protocol for multicast in LLNs, denoted “Multicast Forwarding Using Trickle”. This paper studies the performance of the Trickle algorithm, as it is used in that multicast protocol."
]
} |
1509.08664 | 2167236381 | Low-power wireless networks play an important role in the Internet of Things. Typically, these networks consist of a very large number of lossy and low-capacity devices, challenging the current state of the art in protocol design. In this context the Trickle algorithm plays an important role, serving as the basic mechanism for message dissemination in notable protocols such as RPL and MPL. While Trickle's broadcast suppression mechanism has been proven to be efficient, recent work has shown that it is intrinsically unfair in terms of load distribution and that its performance relies strongly on network topology. This can lead to increased end-to-end delays (MPL), or creation of sub-optimal routes (RPL). Furthermore, as highlighted in this work, there is no clear consensus within the research community about what the proper parameter settings of the suppression mechanism should be. We propose an extension to the Trickle algorithm, called adaptive-k, which allows nodes to individually adapt their suppression mechanism to local node density. Supported by analysis and a case study with RPL, we show that this extension allows for an easier configuration of Trickle, making it more robust to network topology. | In more recent work @cite_30 the authors conclude that flooding using Trickle can perform poorly due to its suppression mechanism. Since @math does not change with node density, the suppression mechanism favors nodes with few neighbors, letting them broadcast more often than nodes with more neighbors. This leads to increased traffic along the edges of a network and potentially increased end-to-end delays. They underline the importance of correctly setting @math to avoid such issues. Similar problems have been identified in @cite_13 , where, due to the suppression mechanism, bottleneck topologies have been shown to be prone to extremely large end-to-end delays. | {
"cite_N": [
"@cite_30",
"@cite_13"
],
"mid": [
"2059990980",
"2109176650"
],
"abstract": [
"In this paper, we investigate schemes for energy-efficient multi-hop broadcasting in large-scale dense Wireless Sensor Networks. We begin with an initial simplified study of the schemes for relay selection. Our first finding is that MPR-based (Multipoint Relay) mechanisms work poorly in a dense network while the recently proposed Multicast Protocol for Low power and Lossy Networks (MPL) protocol based on Trickle performs better. However, Trickle requires to overhear packet retransmissions in the vicinity, while sensor nodes try to avoid overhearing by periodically waking up and going to sleep to save energy. We propose Beacon-based Forwarding Tree (BFT), a new scheme that achieves similar performance to MPL, although it fits better the case of nodes with low radio duty cycling MACs of the type of beacon-enabled IEEE 802.15.4. Our scheme also guarantees network coverage and its optimized version results in the shortest path distance to the broadcast source at a cost of lesser load mitigation. We compare and discuss the measured performance of MPL on top of ContikiMAC and BFT over beacon-enabled 802.15.4 on a Contiki testbed. The experimental results of the comparisons show that BFT may achieve very good performance for a range of broadcast intensity, it has a predictable power consumption, a remarkable low power consumption for leaf nodes, and low loss rates. On the other hand, MPL over ContikiMAC can obtain very low duty cycles for low broadcast traffic.",
"Trickle is a polite gossip algorithm for managing communication traffic. It is of particular interest in low-power wireless networks for reducing the amount of control traffic, as in routing protocols (RPL), or reducing network congestion, as in multicast protocols (MPL). Trickle is used at the network or application level, and relies on up-to-date information on the activity of neighbors. This makes it vulnerable to interference from the media access control layer, which we explore in this paper. We present several scenarios how the MAC layer in low-power radios violates Trickle timing. As a case study, we analyze the impact of CSMA CA with ContikiMAC on Trickle’s performance. Additionally, we propose a solution called Cleansing that resolves these issues."
]
} |
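The Trickle suppression mechanism and the adaptive-k idea described in the row above can be sketched as follows. This is a minimal illustration assuming a simplified model of the algorithm; `alpha` and `k_min` are hypothetical parameters introduced here for the sketch, not values from the paper:

```python
def transmits(k, consistent_heard):
    # Trickle suppression (simplified): a node broadcasts at its randomly
    # chosen time t within the interval only if it heard fewer than k
    # consistent messages before t. Here the count heard is taken as given.
    return consistent_heard < k

def adaptive_k(num_neighbors, alpha=0.5, k_min=1):
    # adaptive-k idea (sketch): scale the redundancy constant with local
    # node density so sparse and dense nodes suppress comparably.
    # alpha and k_min are illustrative, not taken from the paper.
    return max(k_min, round(alpha * num_neighbors))
```

With a fixed `k`, a node with many neighbors hears more consistent messages per interval and is suppressed more often than a sparsely connected one; this is the topology-dependent unfairness that the adaptive-k extension targets by letting `k` grow with local density.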
1509.08664 | 2167236381 | Low-power wireless networks play an important role in the Internet of Things. Typically, these networks consist of a very large number of lossy and low-capacity devices, challenging the current state of the art in protocol design. In this context the Trickle algorithm plays an important role, serving as the basic mechanism for message dissemination in notable protocols such as RPL and MPL. While Trickle's broadcast suppression mechanism has been proven to be efficient, recent work has shown that it is intrinsically unfair in terms of load distribution and that its performance relies strongly on network topology. This can lead to increased end-to-end delays (MPL), or creation of sub-optimal routes (RPL). Furthermore, as highlighted in this work, there is no clear consensus within the research community about what the proper parameter settings of the suppression mechanism should be. We propose an extension to the Trickle algorithm, called adaptive-k, which allows nodes to individually adapt their suppression mechanism to local node density. Supported by analysis and a case study with RPL, we show that this extension allows for an easier configuration of Trickle, making it more robust to network topology. | The authors of @cite_5 were the first to consider the effect of the redundancy constant @math on RPL's performance. They show that if configured incorrectly, Trickle's suppression mechanism can lead to sub-optimal routes, especially in networks that are heterogeneous in terms of density, such as random spatial topologies. This is again due to the inherent unfairness of Trickle's suppression mechanism. They propose a modification of Trickle, which tries to remove this unfairness by prioritizing nodes that have not broadcasted for a long period of time. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2088079708"
],
"abstract": [
"RPL (IPv6 Routing Protocol for Low Power and Lossy networks) is a routing protocol recently standardized by the IETF. RPL has been designed to operate in energy-constrained networks with thousands of nodes, and therefore it is one of the most promising candidate routing protocols for Advanced Metering Infrastructure (AMI) networks. In this paper a performance evaluation of RPL is presented. An extensive study of the protocol is carried out with particular focus on Trickle, the algorithm adopted to control routing update distribution across the network. The performance of the protocol is analyzed considering different Trickle parameters in order to capture their impact on route formation and node power consumption. Results highlight that the nondeterministic nature of Trickle can lead to sub-optimal route formation especially when high message suppression is per-formed. In order to mitigate this issue, an enhanced version of the protocol, namely Trickle-F, is proposed in order to guarantee fair broadcast suppression. Trickle-F is demonstrated to be effective in obtaining more efficient routes with the same power consumption of the original version."
]
} |
1509.08664 | 2167236381 | Low-power wireless networks play an important role in the Internet of Things. Typically, these networks consist of a very large number of lossy and low-capacity devices, challenging the current state of the art in protocol design. In this context the Trickle algorithm plays an important role, serving as the basic mechanism for message dissemination in notable protocols such as RPL and MPL. While Trickle's broadcast suppression mechanism has been proven to be efficient, recent work has shown that it is intrinsically unfair in terms of load distribution and that its performance relies strongly on network topology. This can lead to increased end-to-end delays (MPL), or creation of sub-optimal routes (RPL). Furthermore, as highlighted in this work, there is no clear consensus within the research community about what the proper parameter settings of the suppression mechanism should be. We propose an extension to the Trickle algorithm, called adaptive-k, which allows nodes to individually adapt their suppression mechanism to local node density. Supported by analysis and a case study with RPL, we show that this extension allows for an easier configuration of Trickle, making it more robust to network topology. | Recently, link instability was identified as a problem for new nodes in a network @cite_8 . Due to the lack of link quality measurements, new nodes have been observed to blindly connect to the first available node in an RPL network, even though better alternatives might exist. They address this issue by adding a probing phase, where nodes first measure the link quality to their neighbors based on a Trickle timer, before selecting a preferred parent. As a result, nodes take more time to join a network, but benefit from having more stable routes. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2075597254"
],
"abstract": [
"Lightweight link quality estimation is crucial in wireless sensor networks. Indeed, devices with limited capabilities shall trade off between consuming their resources to maintain a precise view of the neighbours' link quality and to build routes almost blindly. For instance, the Routing Protocol for Low-Power and Lossy Networks (RPL), which has been recently standardised by the IETF to enable IPv6-based sensor networks, only estimates the quality of the links used to deliver data packets. However, this solution has been demonstrated to cause periods of routing instability and reduced packet delivery rates since it estimates only the quality of utilised links. To address this issue in this work we propose a lightweight link estimation procedure that exploits Trickle-based topology maintenance techniques to simultaneously estimate link qualities and propagate routing information. Our proposed scheme has been integrated in the Contiki's RPL prototype implementation. Simulation results demonstrate that our proposal is capable of measuring the quality of the links to neighbours with small overhead, which results into better routing decisions and improved packet delivery rates."
]
} |
1509.08664 | 2167236381 | Low-power wireless networks play an important role in the Internet of Things. Typically, these networks consist of a very large number of lossy and low-capacity devices, challenging the current state of the art in protocol design. In this context the Trickle algorithm plays an important role, serving as the basic mechanism for message dissemination in notable protocols such as RPL and MPL. While Trickle's broadcast suppression mechanism has been proven to be efficient, recent work has shown that it is intrinsically unfair in terms of load distribution and that its performance relies strongly on network topology. This can lead to increased end-to-end delays (MPL), or creation of sub-optimal routes (RPL). Furthermore, as highlighted in this work, there is no clear consensus within the research community about what the proper parameter settings of the suppression mechanism should be. We propose an extension to the Trickle algorithm, called adaptive-k, which allows nodes to individually adapt their suppression mechanism to local node density. Supported by analysis and a case study with RPL, we show that this extension allows for an easier configuration of Trickle, making it more robust to network topology. | Lastly, an extensive simulation study on the effect of the redundancy constant @math and @math on RPL's performance is given in @cite_3 . In their study they consider several network densities and vary @math between 1 and 15. They observe that one of the RPL parameters that affects routing table construction to the greatest extent is the redundancy constant @math . Additionally, they conclude " and that setting @math should not be done independently of network density. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1974907922"
],
"abstract": [
"The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs."
]
} |
1509.08456 | 2952154193 | The input of most clustering algorithms is a symmetric matrix quantifying similarity within data pairs. Such a matrix is here turned into a quadratic set function measuring cluster score or similarity within data subsets larger than pairs. In general, any set function reasonably assigning a cluster score to data subsets gives rise to an objective function-based clustering problem. When considered in pseudo-Boolean form, cluster score enables to evaluate fuzzy clusters through multilinear extension MLE, while the global score of fuzzy clusterings simply is the sum over constituents fuzzy clusters of their MLE score. This is shown to be no greater than the global score of hard clusterings or partitions of the data set, thereby expanding a known result on extremizers of pseudo-Boolean functions. Yet, a multilinear objective function allows to search for optimality in the interior of the hypercube. The proposed method only requires a fuzzy clustering as initial candidate solution, for the appropriate number of clusters is implicitly extracted from the given data set. | Clustering is here approached by firstly quantifying the cluster score of every non-empty data subset, and secondly in terms of the associated set partitioning combinatorial optimization problem @cite_23 . Cluster score thus is a set function or, geometrically, a point in @math , and rather than measuring a cost (or sum of distances) to be minimized (see above), it measures a worth to be maximized. The idea is to quantify, for every data subset, both internal similarity and dissimilarity with respect to its complement. This resembles the in information-based clustering @cite_8 . Objective function-based clustering intrisically relies on the assumption that every data subset has an associated real-valued worth (or, alternatively, a cost). A main novelty proposed below is to deal with both hard and fuzzy clusters at once by means of the pseudo-Boolean form of set functions. 
In order to use the same input as many clustering algorithms, the basic cluster score function provided in the next section is obtained from a given similarity matrix, and has a polynomial MLE of degree 2 [BorosHammer02, pp. 157, 162]. This also keeps the computational burden at a seemingly reasonable level. | {
"cite_N": [
"@cite_23",
"@cite_8"
],
"mid": [
"1557310162",
"1966168239"
],
"abstract": [
"This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It has arisen as the basis of several courses on combinatorial optimization and more special topics at graduate level. It contains complete but concise proofs, also for many deep results, some of which did not appear in a textbook before. Many very recent topics are covered as well, and many references are provided. Thus this book represents the state of the art of combinatorial optimization. This fourth edition is again significantly extended, most notably with new material on linear programming, the network simplex algorithm, and the max-cut problem. Many further additions and updates are included as well. From the reviews of the previous editions: \"This book on combinatorial optimization is a beautiful example of the ideal textbook.\" Operations Research Letters 33 (2005), p.216-217 \"The second edition (with corrections and many updates) of this very recommendable book documents the relevant knowledge on combinatorial optimization and records those problems and algorithms that define this discipline today. To read this is very stimulating for all the researchers, practitioners, and students interested in combinatorial optimization.\" OR News 19 (2003), p.42 \"... has become a standard textbook in the field.\" Zentralblatt MATH 1099.90054",
"In an age of increasingly large data sets, investigators in many different disciplines have turned to clustering as a tool for data analysis and exploration. Existing clustering methods, however, typically depend on several nontrivial assumptions about the structure of data. Here, we reformulate the clustering problem from an information theoretic perspective that avoids many of these assumptions. In particular, our formulation obviates the need for defining a cluster “prototype,” does not require an a priori similarity metric, is invariant to changes in the representation of the data, and naturally captures nonlinear relations. We apply this approach to different domains and find that it consistently produces clusters that are more coherent than those extracted by existing algorithms. Finally, our approach provides a way of clustering based on collective notions of similarity rather than the traditional pairwise measures."
]
} |
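The multilinear extension (MLE) of a pairwise cluster score, as discussed in the related-work passage above, can be sketched as follows. This is an illustrative degree-2 form built from a similarity matrix; the paper's actual score also weighs dissimilarity with respect to the complement, which is omitted here:

```python
from itertools import combinations

def mle_pairwise_score(sim, x):
    # Multilinear extension of a degree-2 cluster score: pairwise
    # similarities sim[i][j] weighted by fuzzy memberships x[i]*x[j].
    # On a 0/1 membership vector this reduces to the hard-cluster score,
    # i.e. the total pairwise similarity within the chosen subset.
    n = len(x)
    return sum(sim[i][j] * x[i] * x[j]
               for i, j in combinations(range(n), 2))
```

The global score of a fuzzy clustering is then the sum of this MLE over its constituent fuzzy clusters; per the result cited above, this sum cannot exceed the best score attained by a hard clustering (partition).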
1509.08524 | 2951261390 | Network alignment (NA) aims to find regions of similarities between molecular networks of different species. There exist two NA categories: local (LNA) or global (GNA). LNA finds small highly conserved network regions and produces a many-to-many node mapping. GNA finds large conserved regions and produces a one-to-one node mapping. Given the different outputs of LNA and GNA, when a new NA method is proposed, it is compared against existing methods from the same category. However, both NA categories have the same goal: to allow for transferring functional knowledge from well- to poorly-studied species between conserved network regions. So, which one to choose, LNA or GNA? To answer this, we introduce the first systematic evaluation of the two NA categories. We introduce new measures of alignment quality that allow for fair comparison of the different LNA and GNA outputs, as such measures do not exist. We provide user-friendly software for efficient alignment evaluation that implements the new and existing measures. We evaluate prominent LNA and GNA methods on synthetic and real-world biological networks. We study the effect on alignment quality of using different interaction types and confidence levels. We find that the superiority of one NA category over the other is context-dependent. Further, when we contrast LNA and GNA in the application of learning novel protein functional knowledge, the two produce very different predictions, indicating their complementarity. Our results and software provide guidelines for future NA method development and evaluation. | NA aims to find topologically and functionally similar (conserved) regions between PPI networks of different species @cite_14 . Like genomic sequence alignment, NA can be local (LNA) or global (GNA). LNA aims to find small highly conserved subnetworks, irrespective of the overall similarity of compared networks (Figure (a)) @cite_1 @cite_17 @cite_11 @cite_16 @cite_2 . 
Since the highly conserved subnetworks can overlap, LNA results in a many-to-many mapping between nodes of the compared networks -- a node can be mapped to multiple nodes from the other network. In contrast, GNA aims to maximize overall similarity of the compared networks, at the expense of suboptimal conservation in local regions (Figure (b)). GNA produces a one-to-one (injective) node mapping -- every node in the smaller network is mapped to exactly one unique node in the larger network @cite_7 @cite_4 @cite_23 @cite_26 @cite_5 @cite_9 @cite_22 @cite_13 @cite_29 @cite_19 @cite_24 @cite_20 @cite_25 . | {
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_2",
"@cite_5",
"@cite_20",
"@cite_4",
"@cite_23",
"@cite_17",
"@cite_26",
"@cite_7",
"@cite_19",
"@cite_16",
"@cite_25",
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_24",
"@cite_13",
"@cite_11"
],
"mid": [
"2243244968",
"2140084895",
"2133832282",
"",
"2077997913",
"2092986233",
"2137648512",
"2070071277",
"2149037310",
"1518994209",
"",
"2024166504",
"2142343225",
"2119021615",
"",
"2136316164",
"",
"2140084895",
"2043795984"
],
"abstract": [
"Introduction: With the so-called OMICS technology the scientific community has generated huge amounts of data that allow us to reconstruct the interplay of all kinds of biological entities. The emerging interaction networks are usually modeled as graphs with thousands of nodes and tens of thousands of edges between them. In addition to sequence alignment, the comparison of biological networks has proven great potential to infer the biological function of proteins and genes. However, the corresponding network alignment problem is computationally hard and theoretically intractable for real world instances. Results: We therefore developed GEDEVO, a novel tool for efficient graph comparison dedicated to real-world size biological networks. Underlying our approach is the so-called Graph Edit Distance (GED) model, where one graph is to be transferred into another one, with a minimal number of (or more general: minimal costs for) edge insertions and deletions. We present a novel evolutionary algorithm aiming to minimize the GED, and we compare our implementation against state of the art tools: SPINAL, GHOST, , and . On a set of protein-protein interaction networks from different organisms we demonstrate that GEDEVO outperforms the current methods. It thus refines the previously suggested alignments based on topological information only. Conclusion: With GEDEVO, we account for the constantly exploding number and size of available biological networks. The software as well as all used data sets are publicly available at http: gedevo.mpi-inf.mpg.de.",
"",
"MOTIVATION:Sequences and protein interaction data are of significance to understand the underlying molecular mechanism of organisms. Local network alignment is one of key systematic ways for predicting protein functions, identifying functional modules, and understanding the phylogeny from these data. Most of currently existing tools, however, encounter their limitations which are mainly concerned with scoring scheme, speed and scalability. Therefore, there are growing demands for sophisticated network evolution models and efficient local alignment algorithms. RESULTS:We developed a fast and scalable local network alignment tool so-called LocalAli for the identification of functionally conserved modules in multiple networks. In this algorithm, we firstly proposed a new framework to reconstruct the evolution history of conserved modules based on a maximum-parsimony evolutionary model. By relying on this model, LocalAli facilitates interpretation of resulting local alignments in terms of conserved modules which have been evolved from a common ancestral module through a series of evolutionary events. A meta-heuristic method simulated annealing was used to search for the optimal or near-optimal inner nodes (i.e. ancestral modules) of the evolutionary tree. To evaluate the performance and the statistical significance, LocalAli were tested on a total of 26 real datasets and 1040 randomly generated datasets. The results suggest that LocalAli outperforms all existing algorithms in terms of coverage, consistency and scalability, meanwhile retains a high precision in the identification of functionally coherent subnetworks. AVAILABILITY:The source code and test datasets are freely available for download under the GNU GPL v3 license at https: code.google.com p localali . CONTACT:jialu.hu@fu-berlin.de or knut.reinert@fu-berlin.de.",
"",
"High-quality datasets are needed to understand how global and local properties of protein-protein interaction, or 'interactome', networks relate to biological mechanisms, and to guide research on individual proteins. In an evaluation of existing curation of protein interaction experiments reported in the literature, we found that curation can be error-prone and possibly of lower quality than commonly assumed.",
"Important biological information is encoded in the topology of biological networks. Comparative analyses of biological networks are proving to be valuable, as they can lead to transfer of knowledge between species and give deeper insights into biological function, disease, and evolution. We introduce a new method that uses the Hungarian algorithm to produce optimal global alignment between two networks using any cost function. We design a cost function based solely on network topology and use it in our network alignment. Our method can be applied to any two networks, not just biological ones, since it is based only on network topology. We use our new method to align protein-protein interaction networks of two eukaryotic species and demonstrate that our alignment exposes large and topologically complex regions of network similarity. At the same time, our alignment is biologically valid, since many of the aligned protein pairs perform the same biological function. From the alignment, we predict function of yet unannotated proteins, many of which we validate in the literature. Also, we apply our method to find topological similarities between metabolic networks of different species and build phylogenetic trees based on our network alignment score. The phylogenetic trees obtained in this way bear a striking resemblance to the ones obtained by sequence alignments. Our method detects topologically similar regions in large networks that are statistically significant. It does this independent of protein sequence or any other information external to network topology.",
"Motivation: High-throughput methods for detecting molecular interactions have produced large sets of biological network data with much more yet to come. Analogous to sequence alignment, efficient and reliable network alignment methods are expected to improve our understanding of biological systems. Unlike sequence alignment, network alignment is computationally intractable. Hence, devising efficient network alignment heuristics is currently a foremost challenge in computational biology. Results: We introduce a novel network alignment algorithm, called Matching-based Integrative GRAph ALigner (MI-GRAAL), which can integrate any number and type of similarity measures between network nodes (e.g. proteins), including, but not limited to, any topological network similarity measure, sequence similarity, functional similarity and structural similarity. Hence, we resolve the ties in similarity measures and find a combination of similarity measures yielding the largest contiguous (i.e. connected) and biologically sound alignments. MI-GRAAL exposes the largest functional, connected regions of protein–protein interaction (PPI) network similarity to date: surprisingly, it reveals that 77.7 of proteins in the baker's yeast high-confidence PPI network participate in such a subnetwork that is fully contained in the human high-confidence PPI network. This is the first demonstration that species as diverse as yeast and human contain so large, continuous regions of global network similarity. We apply MI-GRAAL's alignments to predict functions of un-annotated proteins in yeast, human and bacteria validating our predictions in the literature. Furthermore, using network alignment scores for PPI networks of different herpes viruses, we reconstruct their phylogenetic relationship. This is the first time that phylogeny is exactly reconstructed from purely topological alignments of PPI networks. 
Availability: Supplementary files and MI-GRAAL executables: http: bio-nets.doc.ic.ac.uk MI-GRAAL . Contact: natasha@imperial.ac.uk Supplementary information:Supplementary data are available at Bioinformatics online.",
"Genome sequencing projects provide nearly complete lists of the individual components present in an organism, but reveal little about how they work together. Follow-up initiatives have deciphered thousands of dynamic and context-dependent interrelationships between gene products that need to be analyzed with novel bioinformatics approaches able to capture their complex emerging properties. Here, we present a novel framework for the alignment and comparative analysis of biological networks of arbitrary topology. Our strategy includes the prediction of likely conserved interactions, based on evolutionary distances, to counter the high number of missing interactions in the current interactome networks, and a fast assessment of the statistical significance of individual alignment solutions, which vastly increases its performance with respect to existing tools. Finally, we illustrate the biological significance of the results through the identification of novel complex components and potential cases of cross-talk between pathways and alternative signaling routes.",
"Motivation: Protein interaction networks provide an important system-level view of biological processes. One of the fundamental problems in biological network analysis is the global alignment of a pair of networks, which puts the proteins of one network into correspondence with the proteins of another network in a manner that conserves their interactions while respecting other evidence of their homology. By providing a mapping between the networks of different species, alignments can be used to inform hypotheses about the functions of unannotated proteins, the existence of unobserved interactions, the evolutionary divergence between the two species and the evolution of complexes and pathways. Results: We introduce GHOST, a global pairwise network aligner that uses a novel spectral signature to measure topological similarity between subnetworks. It combines a seed-and-extend global alignment phase with a local search procedure and exceeds state-of-the-art performance on several network alignment tasks. We show that the spectral signature used by GHOST is highly discriminative, whereas the alignments it produces are also robust to experimental noise. When compared with other recent approaches, we find that GHOST is able to recover larger and more biologically significant, shared subnetworks between species. Availability: An efficient and parallelized implementation of GHOST, released under the Apache 2.0 license, is available at http: cbcb.umd.edu kingsford_group ghost Contact: rob@cs.umd.edu",
"We describe an algorithm, IsoRank, for global alignment of two protein-protein interaction (PPI) networks. IsoRank aims to maximize the overall match between the two networks; in contrast, much of previous work has focused on the local alignment problem-- identifying many possible alignments, each corresponding to a local region of similarity. IsoRank is guided by the intuition that a protein should be matched with a protein in the other network if and only if the neighbors of the two proteins can also be well matched. We encode this intuition as an eigenvalue problem, in a manner analogous to Google's PageRank method. We use IsoRank to compute the first known global alignment between the S. cerevisiae and D. melanogaster PPI networks. The common subgraph has 1420 edges and describes conserved functional components between the two species. Comparisons of our results with those of a well-known algorithm for local network alignment indicate that the globally optimized alignment resolves ambiguity introduced by multiple local alignments. Finally, we interpret the results of global alignment to identify functional orthologs between yeast and fly; our functional ortholog prediction method is much simpler than a recently proposed approach and yet provides results that are more comprehensive.",
"",
"Evolutionary analysis and comparison of biological networks may result in the identification of conserved mechanism between species as well as conserved modules, such as protein complexes and pathways. Following an holistic philosophy several algorithms, known as network alignment algorithms, have been proposed recently as counterpart of sequence and structure alignment algorithms, to unravel relations between different species at the interactome level. In this work we present AlignMCL, a local alignment algorithm for the identification of conserved subnetworks in different species. As many other existing tools, AlignMCL is based on the idea of merging many protein interaction networks in a single alignment graph and subsequently mining it to identify potentially conserved subnetworks. In order to asses AlignMCL we compared it to the state of the art local alignment algorithms over a rather extensive and updated dataset. Finally, to improve the usability of our tool we developed a Cytoscape plugin, AlignMCL, that offers a graphical user interface to an MCL engine.",
"Motivation: Given the growth of large scale protein-protein interaction (PPI) networks obtained across multiple species and conditions, network alignment is now important research problem. Network alignment performs comparative analysis across multiple PPI networks to understand their connections and relationships. However, PPI data in high-throughput experiments still suffer from significant false positive and false negatives rates. Consequently, high confidence network alignment across entire PPI networks is not possible. At best, local network alignment attempts to alleviate this problem by completely ignoring low confidence mappings; global network alignment, on the other hand, pairs all proteins regardless. To this end, we propose an alternative strategy: instead of full alignment across the entire network or completely ignoring low confidence regions, we aim to perform highly specific protein-to-protein alignments where data confidence is high, and fall back on broader functional region-to-region alignment where detailed protein-protein alignment cannot be ascertained. The basic idea is to provide an alignment of multiple granularities to allow biological predictions at varying specificity. Results: DualAligner performs dual network alignment, in which both region-to-region alignment, where whole subgraph of one network is aligned to subgraph of another, and protein-to-protein alignment, where individual proteins in networks are aligned to one another, are performed to achieve higher accuracy network alignments. Dual network alignment is achieved in DualAligner via background information provided by a combination of GO annotation information and protein interaction network data. We tested DualAligner on the global networks from IntAct and demonstrated the superiority of our approach compared to state-of-the-art network alignment methods. We studied the effects of parameters in DualAligner in controlling the quality of the alignment. 
We also performed a case study that illustrates the utility of our approach. Availability: http: www.cais.ntu.edu.sg assourav DualAligner",
"Biological network alignment aims to find regions of topological and functional (dis)similarities between molecular networks of different species. Then, network alignment can guide the transfer of biological knowledge from well-studied model species to less well-studied species between conserved (aligned) network regions, thus complementing valuable insights that have already been provided by genomic sequence alignment. Here, we review computational challenges behind the network alignment problem, existing approaches for solving the problem, ways of evaluating their alignment quality, and the approaches’ biomedical applications. We discuss recent innovative efforts of improving the existing view of network alignment. We conclude with open research questions in comparative biological network research that could further our understanding of principles of life, evolution, disease, and therapeutics.",
"",
"To elucidate cellular machinery on a global scale, we performed a multiple comparison of the recently available protein–protein interaction networks of Caenorhabditis elegans, Drosophila melanogaster, and Saccharomyces cerevisiae. This comparison integrated protein interaction and sequence information to reveal 71 network regions that were conserved across all three species and many exclusive to the metazoans. We used this conservation, and found statistically significant support for 4,645 previously undescribed protein functions and 2,609 previously undescribed protein interactions. We tested 60 interaction predictions for yeast by two-hybrid analysis, confirming approximately half of these. Significantly, many of the predicted functions and interactions would not have been identified from sequence similarity alone, demonstrating that network comparisons provide essential biological information beyond what is gleaned from the genome.",
"",
"",
"Local network alignment is an important component of the analysis of protein-protein interaction networks that may lead to the identification of evolutionarily related complexes. We present AlignNemo, a new algorithm that, given the networks of two organisms, uncovers subnetworks of proteins that relate in biological function and topology of interactions. The discovered conserved subnetworks have a general topology and need not correspond to specific interaction patterns, so that they more closely fit the models of functional complexes proposed in the literature. The algorithm is able to handle sparse interaction data with an expansion process that at each step explores the local topology of the networks beyond the proteins directly interacting with the current solution. To assess the performance of AlignNemo, we ran a series of benchmarks using statistical measures as well as biological knowledge. Based on reference datasets of protein complexes, AlignNemo shows better performance than other methods in terms of both precision and recall. We show our solutions to be biologically sound using the concept of semantic similarity applied to Gene Ontology vocabularies. The binaries of AlignNemo and supplementary details about the algorithms and the experiments are available at: sourceforge.net p alignnemo."
]
} |
1509.08524 | 2951261390 | Network alignment (NA) aims to find regions of similarities between molecular networks of different species. There exist two NA categories: local (LNA) or global (GNA). LNA finds small highly conserved network regions and produces a many-to-many node mapping. GNA finds large conserved regions and produces a one-to-one node mapping. Given the different outputs of LNA and GNA, when a new NA method is proposed, it is compared against existing methods from the same category. However, both NA categories have the same goal: to allow for transferring functional knowledge from well- to poorly-studied species between conserved network regions. So, which one to choose, LNA or GNA? To answer this, we introduce the first systematic evaluation of the two NA categories. We introduce new measures of alignment quality that allow for fair comparison of the different LNA and GNA outputs, as such measures do not exist. We provide user-friendly software for efficient alignment evaluation that implements the new and existing measures. We evaluate prominent LNA and GNA methods on synthetic and real-world biological networks. We study the effect on alignment quality of using different interaction types and confidence levels. We find that the superiority of one NA category over the other is context-dependent. Further, when we contrast LNA and GNA in the application of learning novel protein functional knowledge, the two produce very different predictions, indicating their complementarity. Our results and software provide guidelines for future NA method development and evaluation. | NA can also be categorized as pairwise or multiple, based on how many networks it can align. See @cite_14 for a review of pairwise and multiple NA. Here, we focus on pairwise NA. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2119021615"
],
"abstract": [
"Biological network alignment aims to find regions of topological and functional (dis)similarities between molecular networks of different species. Then, network alignment can guide the transfer of biological knowledge from well-studied model species to less well-studied species between conserved (aligned) network regions, thus complementing valuable insights that have already been provided by genomic sequence alignment. Here, we review computational challenges behind the network alignment problem, existing approaches for solving the problem, ways of evaluating their alignment quality, and the approaches’ biomedical applications. We discuss recent innovative efforts of improving the existing view of network alignment. We conclude with open research questions in comparative biological network research that could further our understanding of principles of life, evolution, disease, and therapeutics."
]
} |
1509.07996 | 2132385883 | Large graphs arise in a number of contexts and understanding their structure and extracting information from them is an important research area. Early algorithms on mining communities have focused on the global structure, and often run in time functional to the size of the entire graph. Nowadays, as we often explore networks with billions of vertices and find communities of size hundreds, it is crucial to shift our attention from macroscopic structure to microscopic structure in large networks. A growing body of work has been adopting local expansion methods in order to identify the community members from a few exemplary seed members. In this paper, we propose a novel approach for finding overlapping communities called LEMON (Local Expansion via Minimum One Norm). The algorithm finds the community by seeking a sparse vector in the span of the local spectra such that the seeds are in its support. We show that LEMON can achieve the highest detection accuracy among state-of-the-art proposals. The running time depends on the size of the community rather than that of the entire graph. The algorithm is easy to implement, and is highly parallelizable. We further provide theoretical analysis on the local spectral properties, bounding the measure of tightness of extracted community in terms of the eigenvalues of graph Laplacian. Moreover, given that networks are not all similar in nature, a comprehensive analysis on how the local expansion approach is suited for uncovering communities in different networks is still lacking. We thoroughly evaluate our approach using both synthetic and real-world datasets across different domains, and analyze the empirical variations when applying our method to inherently different networks in practice. In addition, the heuristics on how the seed set quality and quantity would affect the performance are provided. 
| As noted in the preceding section, among the divergent approaches, random walks tend to reveal communities that bear the closest resemblance to the ground truth communities in nature @cite_4 . In the following, we briefly review some methods that have adopted the random walk technique in finding communities. Among methods that focus on the global structure, Pons @cite_12 proposed a hierarchical agglomerative algorithm, Walktrap, that quantified the similarity between vertices using random walks and then partitioned the network into non-overlapping communities. Meilă @cite_19 presented a clustering approach by viewing the pairwise similarities as edge flows in a random walk and studied the eigenvectors and eigenvalues of the resulting transition matrix. A later successful algorithm, Infomap, proposed by Rosvall & Bergstrom @cite_5 , enables uncovering hierarchical structures in networks by compressing a description of a random walker as a proxy for real flow on networks. Variants of this technique, such as the biased random walk @cite_9 , have also been employed in community finding. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_19",
"@cite_5",
"@cite_12"
],
"mid": [
"2079149066",
"",
"200434350",
"1743429370",
"2033590892"
],
"abstract": [
"Four major factors govern the intricacies of community extraction in networks: (1) the literature offers a multitude of disparate community detection algorithms whose output exhibits high structural variability across the collection, (2) communities identified by algorithms may differ structurally from real communities that arise in practice, (3) there is no consensus characterizing how to discriminate communities from noncommunities, and (4) the application domain includes a wide variety of networks of fundamentally different natures. In this article, we present a class separability framework to tackle these challenges through a comprehensive analysis of community properties. Our approach enables the assessment of the structural dissimilarity among the output of multiple community detection algorithms and between the output of algorithms and communities that arise in practice. In addition, our method provides us with a way to organize the vast collection of community detection algorithms by grouping those that behave similarly. Finally, we identify the most discriminative graph-theoretical properties of community signature and the small subset of properties that account for most of the biases of the different community detection algorithms. We illustrate our approach with an experimental analysis, which reveals nuances of the structure of real and extracted communities. In our experiments, we furnish our framework with the output of 10 different community detection procedures, representative of categories of popular algorithms available in the literature, applied to a diverse collection of large-scale real network datasets whose domains span biology, online shopping, and social systems. We also analyze communities identified by annotations that accompany the data, which reflect exemplar communities in various domains. We characterize these communities using a broad spectrum of community properties to produce the different structural classes. 
As our experiments show that community structure is not a universal concept, our framework enables an informed choice of the most suitable community detection method for identifying communities of a specific type in a given network and allows for a comparison of existing community detection algorithms while guiding the design of new ones.",
"",
"We present a new view of clustering and segmentation by pairwise similarities. We interpret the similarities as edge flows in a Markov random walk and study the eigenvalues and eigenvectors of the walk's transition matrix. This view shows that spectral methods for clustering and segmentation have a probabilistic foundation. We prove that the Normalized Cut method arises naturally from our framework and we provide a complete characterization of the cases when the Normalized Cut algorithm is exact. Then we discuss other spectral segmentation and clustering methods showing that several of them are essentially the same as NCut.",
"To comprehend the hierarchical organization of large integrated systems, we introduce the hierarchical map equation, which reveals multilevel structures in networks. In this information-theoretic a ...",
"In a representative embodiment of the invention described herein, a well logging system for investigating subsurface formations is controlled by a general purpose computer programmed for real-time operation. The system is cooperatively arranged to provide for all aspects of a well logging operation, such as data acquisition and processing, tool control, information or data storage, and data presentation as a well logging tool is moved through a wellbore. The computer controlling the system is programmed to provide for data acquisition and tool control commands in direct response to asynchronous real-time external events. Such real-time external events may occur, for example, as a result of movement of the logging tool over a selected depth interval, or in response to requests or commands directed to the system by the well logging engineer by means of keyboard input."
]
} |
1509.07996 | 2132385883 | Large graphs arise in a number of contexts and understanding their structure and extracting information from them is an important research area. Early algorithms on mining communities have focused on the global structure, and often run in time functional to the size of the entire graph. Nowadays, as we often explore networks with billions of vertices and find communities of size hundreds, it is crucial to shift our attention from macroscopic structure to microscopic structure in large networks. A growing body of work has been adopting local expansion methods in order to identify the community members from a few exemplary seed members. In this paper, we propose a novel approach for finding overlapping communities called LEMON (Local Expansion via Minimum One Norm). The algorithm finds the community by seeking a sparse vector in the span of the local spectra such that the seeds are in its support. We show that LEMON can achieve the highest detection accuracy among state-of-the-art proposals. The running time depends on the size of the community rather than that of the entire graph. The algorithm is easy to implement, and is highly parallelizable. We further provide theoretical analysis on the local spectral properties, bounding the measure of tightness of extracted community in terms of the eigenvalues of graph Laplacian. Moreover, given that networks are not all similar in nature, a comprehensive analysis on how the local expansion approach is suited for uncovering communities in different networks is still lacking. We thoroughly evaluate our approach using both synthetic and real-world datasets across different domains, and analyze the empirical variations when applying our method to inherently different networks in practice. In addition, the heuristics on how the seed set quality and quantity would affect the performance are provided. 
| However, the lazy random walk suffered from a much slower mixing speed: it usually took more than 500 steps to converge to a local structure, compared with the rapid mixing of a regular random walk within several steps. Focusing on seeding strategies, Whang @cite_17 established several sophisticated methods for choosing the seed set, and then used a PageRank scheme similar to that in @cite_18 to expand the seeds until a community with optimal conductance is found. Nonetheless, the performance gained by adopting these intricate seeding methods was not significantly better than that obtained by using random seeds. This implies that a better scheme for expanding the seeds is also needed, aside from a good seeding strategy. A recent work by Kloumann & Kleinberg @cite_11 provided a systematic understanding of variants of PageRank-based seed set expansion. They showed many insightful findings regarding the heuristics on seed sets. However, the drawback of lacking a proper stop criterion has limited its functionality in practice. Even though a recently proposed heat kernel algorithm @cite_8 advances PageRank by introducing a sophisticated diffusion method, the detection accuracy achieved by the heat kernel approach is still much lower than that of LEMON, which we will show in Section . | {
"cite_N": [
"@cite_8",
"@cite_18",
"@cite_11",
"@cite_17"
],
"mid": [
"2016273060",
"2086254934",
"",
"2066090568"
],
"abstract": [
"The heat kernel is a type of graph diffusion that, like the much-used personalized PageRank diffusion, is useful in identifying a community nearby a starting seed node. We present the first deterministic, local algorithm to compute this diffusion and use that algorithm to study the communities that it produces. Our algorithm is formally a relaxation method for solving a linear system to estimate the matrix exponential in a degree-weighted norm. We prove that this algorithm stays localized in a large graph and has a worst-case constant runtime that depends only on the parameters of the diffusion, not the size of the graph. On large graphs, our experiments indicate that the communities produced by this method have better conductance than those produced by PageRank, although they take slightly longer to compute. On a real-world community identification task, the heat kernel communities perform better than those from the PageRank diffusion.",
"A local graph partitioning algorithm finds a cut near a specified starting vertex, with a running time that depends largely on the size of the small side of the cut, rather than the size of the input graph. In this paper, we present a local partitioning algorithm using a variation of PageRank with a specified starting distribution. We derive a mixing result for PageRank vectors similar to that for random walks, and show that the ordering of the vertices produced by a PageRank vector reveals a cut with small conductance. In particular, we show that for any set C with conductance φ and volume k, a PageRank vector with a certain starting distribution can be used to produce a set with conductance O(√(φ log k)). We present an improved algorithm for computing approximate PageRank vectors, which allows us to find such a set in time proportional to its size. In particular, we can find a cut with conductance at most φ, whose small side has volume at least 2^b, in time O(2^b log^2 m / φ^2) where m is the number of edges in the graph. By combining small sets found by this local partitioning algorithm, we obtain a cut with conductance φ and approximately optimal balance in time O(m log^4 m / φ^2).",
"",
"Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. One of the most successful techniques for finding overlapping communities is based on local optimization and expansion of a community metric around a seed set of vertices. In this paper, we propose an efficient overlapping community detection algorithm using a seed set expansion approach. In particular, we develop new seeding strategies for a personalized PageRank scheme that optimizes the conductance community score. The key idea of our algorithm is to find good seeds, and then expand these seed sets using the personalized PageRank clustering procedure. Experimental results show that this seed set expansion approach outperforms other state-of-the-art overlapping community detection methods. We also show that our new seeding strategies are better than previous strategies, and are thus effective in finding good overlapping clusters in a graph."
]
} |
1509.08316 | 2267702630 | Effective emergency and natural disaster management depend on the efficient mission-critical voice and data communication between first responders and victims. Land mobile radio system (LMRS) is a legacy narrowband technology used for critical voice communications with limited use for data applications. Recently, long term evolution (LTE) emerged as a broadband communication technology that has a potential to transform the capabilities of public safety technologies by providing broadband, ubiquitous, and mission-critical voice and data support. For example, in the United States, FirstNet is building a nationwide coast-to-coast public safety network based on LTE broadband technology. This paper presents a comparative survey of legacy and the LTE-based public safety networks, and discusses the LMRS-LTE convergence as well as mission-critical push-to-talk over LTE. A simulation study of LMRS and LTE band class 14 technologies is provided using the NS-3 open source tool. An experimental study of APCO-25 and LTE band class 14 is also conducted using software-defined radio to enhance the understanding of the public safety systems. Finally, emerging technologies that may have strong potential for use in public safety networks are reviewed. | There have been relatively limited studies in the literature on PSC that present a comprehensive survey on public safety LMRS and LTE systems. In @cite_186 , the authors present a discussion on voice over LTE as an important aspect of PSC and then provide a high-level overview of LMRS and LTE technologies for their use in a PSC scenario. In @cite_211 @cite_271 , the authors survey the status of various wireless technologies in public safety networks (PSN), current regulatory standards, and the research activities that address the challenges in PSC. 
The ability of LTE to meet PSN requirements, and possible future developments to LTE that could enhance its capacity to provide PSC, are discussed in @cite_194 @cite_226 . | {
"cite_N": [
"@cite_226",
"@cite_271",
"@cite_186",
"@cite_211",
"@cite_194"
],
"mid": [
"",
"2511512272",
"2028868724",
"2066349561",
"2074140761"
],
"abstract": [
"",
"The field of emergency and crisis management continuously strives to enhance collaboration, communication, and coordination among public safety organizations. Successful integration is challenging due to current policies and regulations. Moreover, policymakers must predict future needs. Regardless of the challenges, development and growth of a national public safety communication system is no longer a hopeless cause and is anticipated to mitigate challenges by enhancing security, dependability and fault tolerance, cost effectiveness, interoperability, spectral efficiency, and advanced capabilities. Although this national public safety communication system is in the process of being implemented by various local, state, and federal agencies, such adoption is voluntary and attributes to a disconnect between policies and stakeholders. This study reviews the evolution of public safety communication system and discusses benefits and challenges of a national system, the policies and regulations affecting wireless communication technologies and spectrum sharing, and the influence of evolving technologies.",
"Public safety communications users in the United States are living in times of unprecedented change. From the challenges associated with the FCC's recent LMR narrowbanding mandate, which limited mobile radio channel bandwidths to 12.5 kHz, to the creation of FirstNet, the world's first, all-LTE public safety broadband network, public safety practitioners are changing the way they react and communicate during times of need. Though the network architecture for FirstNet has yet to be defined, an IMS-based VoLTE solution is the likely voice application successor to LMR push-to-talk and group calling. However, it remains to be seen whether or not VoLTE can provide the features and quality of service that public safety professionals have come to expect and rely on. And the problem is not confined to emergency responders in the United States - the worldwide community of P25 and TETRA users are evaluating the value of migrating to an LTE-based emergency communications network as well. This paper offers a high level discussion on the history of LMR, the basics of LTE and VoLTE, and implementation and testing recommendations based on possible FirstNet architectures.",
"Public Safety (PS) organizations bring value to society by creating a stable and secure environment. The services they provide include protection of people, environment and assets and they address a large number of threats both natural and man-made, acts of terrorism, technological, radiological or environmental accidents. The capability to exchange information (e.g., voice and data) is essential to improve the coordination of PS officers during an emergency crisis and improve response efforts. Wireless communications are particularly important in field operations to support the mobility of first responders. Recent disasters have emphasized the need to enhance interoperability, capacity and broadband connectivity of the wireless networks used by PS organizations. This paper surveys the outstanding challenges in this area, the status of wireless communication technologies in this particular domain and the current regulatory, standardization and research activities to address the identified challenges, with a particular focus on USA and Europe.",
"It is increasingly being recognized that effective communications are key to a successful response to emergency and disaster situations. The ability of the first responder emergency services to communicate among themselves and to share multimedia information directly affects the ability to save lives. This is reflected in increasing public investment in broadband public safety communication systems. These systems have some specific requirements, which are outlined in this article. As LTE is expected to become the most widely deployed broadband communication technology, we examine the capability of LTE to meet these requirements, and identify possible future developments to LTE that could further enhance its ability to provide the necessary service."
]
} |
1509.08316 | 2267702630 | Effective emergency and natural disaster management depend on the efficient mission-critical voice and data communication between first responders and victims. Land mobile radio system (LMRS) is a legacy narrowband technology used for critical voice communications with limited use for data applications. Recently, long term evolution (LTE) emerged as a broadband communication technology that has a potential to transform the capabilities of public safety technologies by providing broadband, ubiquitous, and mission-critical voice and data support. For example, in the United States, FirstNet is building a nationwide coast-to-coast public safety network based on LTE broadband technology. This paper presents a comparative survey of legacy and the LTE-based public safety networks, and discusses the LMRS-LTE convergence as well as mission-critical push-to-talk over LTE. A simulation study of LMRS and LTE band class 14 technologies is provided using the NS-3 open source tool. An experimental study of APCO-25 and LTE band class 14 is also conducted using software-defined radio to enhance the understanding of the public safety systems. Finally, emerging technologies that may have strong potential for use in public safety networks are reviewed. | Compared with the contributions in @cite_186 @cite_211 @cite_194 , our focus in this paper is more on the comparative analysis of legacy and emerging technologies for PSC. We take up the public safety spectrum allocation in the United States as a case study, and present an overview of spectrum allocation in the VHF, UHF, 700 MHz, 800 MHz, 900 MHz, and 4.9 GHz bands for various public safety entities. We also provide a holistic view on the current status of the broadband PSN in other regions such as the European Union, the United Kingdom, and Canada. | {
"cite_N": [
"@cite_211",
"@cite_186",
"@cite_194"
],
"mid": [
"2066349561",
"2028868724",
"2074140761"
],
"abstract": [
"Public Safety (PS) organizations bring value to society by creating a stable and secure environment. The services they provide include protection of people, environment and assets and they address a large number of threats both natural and man-made, acts of terrorism, technological, radiological or environmental accidents. The capability to exchange information (e.g., voice and data) is essential to improve the coordination of PS officers during an emergency crisis and improve response efforts. Wireless communications are particularly important in field operations to support the mobility of first responders. Recent disasters have emphasized the need to enhance interoperability, capacity and broadband connectivity of the wireless networks used by PS organizations. This paper surveys the outstanding challenges in this area, the status of wireless communication technologies in this particular domain and the current regulatory, standardization and research activities to address the identified challenges, with a particular focus on USA and Europe.",
"Public safety communications users in the United States are living in times of unprecedented change. From the challenges associated with the FCC's recent LMR narrowbanding mandate, which limited mobile radio channel bandwidths to 12.5 kHz, to the creation of FirstNet, the world's first, all-LTE public safety broadband network, public safety practitioners are changing the way they react and communicate during times of need. Though the network architecture for FirstNet has yet to be defined, an IMS-based VoLTE solution is the likely voice application successor to LMR push-to-talk and group calling. However, it remains to be seen whether or not VoLTE can provide the features and quality of service that public safety professionals have come to expect and rely on. And the problem is not confined to emergency responders in the United States - the worldwide community of P25 and TETRA users are evaluating the value of migrating to an LTE-based emergency communications network as well. This paper offers a high level discussion on the history of LMR, the basics of LTE and VoLTE, and implementation and testing recommendations based on possible FirstNet architectures.",
"It is increasingly being recognized that effective communications are key to a successful response to emergency and disaster situations. The ability of the first responder emergency services to communicate among themselves and to share multimedia information directly affects the ability to save lives. This is reflected in increasing public investment in broadband public safety communication systems. These systems have some specific requirements, which are outlined in this article. As LTE is expected to become the most widely deployed broadband communication technology, we examine the capability of LTE to meet these requirements, and identify possible future developments to LTE that could further enhance its ability to provide the necessary service."
]
} |
1509.08316 | 2267702630 | Effective emergency and natural disaster management depend on the efficient mission-critical voice and data communication between first responders and victims. Land mobile radio system (LMRS) is a legacy narrowband technology used for critical voice communications with limited use for data applications. Recently, long term evolution (LTE) emerged as a broadband communication technology that has a potential to transform the capabilities of public safety technologies by providing broadband, ubiquitous, and mission-critical voice and data support. For example, in the United States, FirstNet is building a nationwide coast-to-coast public safety network based on LTE broadband technology. This paper presents a comparative survey of legacy and the LTE-based public safety networks, and discusses the LMRS-LTE convergence as well as mission-critical push-to-talk over LTE. A simulation study of LMRS and LTE band class 14 technologies is provided using the NS-3 open source tool. An experimental study of APCO-25 and LTE band class 14 is also conducted using software-defined radio to enhance the understanding of the public safety systems. Finally, emerging technologies that may have strong potential for use in public safety networks are reviewed. | We review the LTE-based FirstNet architecture in the United States, the convergence of LTE-LMR technologies, and support for mission-critical PTT over LTE, which are not addressed in earlier survey articles such as @cite_186 @cite_211 . In addition, a unified comparison between LMRS and LTE-based PSN is undertaken in and , which, to the best of our knowledge, is not available in existing survey articles on PSC. A study of LMRS and LTE band class 14 is carried out using NS-3 simulations, and software-defined radio (SDR) measurement campaigns for LMRS and LTE band class 14 technologies are reported. 
Different from the existing literature, we also provide a comprehensive perspective on how emerging wireless technologies can shape PSN and discuss open research problems. | {
"cite_N": [
"@cite_211",
"@cite_186"
],
"mid": [
"2066349561",
"2028868724"
],
"abstract": [
"Public Safety (PS) organizations bring value to society by creating a stable and secure environment. The services they provide include protection of people, environment and assets and they address a large number of threats both natural and man-made, acts of terrorism, technological, radiological or environmental accidents. The capability to exchange information (e.g., voice and data) is essential to improve the coordination of PS officers during an emergency crisis and improve response efforts. Wireless communications are particularly important in field operations to support the mobility of first responders. Recent disasters have emphasized the need to enhance interoperability, capacity and broadband connectivity of the wireless networks used by PS organizations. This paper surveys the outstanding challenges in this area, the status of wireless communication technologies in this particular domain and the current regulatory, standardization and research activities to address the identified challenges, with a particular focus on USA and Europe.",
"Public safety communications users in the United States are living in times of unprecedented change. From the challenges associated with the FCC's recent LMR narrowbanding mandate, which limited mobile radio channel bandwidths to 12.5 kHz, to the creation of FirstNet, the world's first, all-LTE public safety broadband network, public safety practitioners are changing the way they react and communicate during times of need. Though the network architecture for FirstNet has yet to be defined, an IMS-based VoLTE solution is the likely voice application successor to LMR push-to-talk and group calling. However, it remains to be seen whether or not VoLTE can provide the features and quality of service that public safety professionals have come to expect and rely on. And the problem is not confined to emergency responders in the United States - the worldwide community of P25 and TETRA users are evaluating the value of migrating to an LTE-based emergency communications network as well. This paper offers a high level discussion on the history of LMR, the basics of LTE and VoLTE, and implementation and testing recommendations based on possible FirstNet architectures."
]
} |
1509.08037 | 2291981158 | Light projection is a powerful technique that can be used to edit the appearance of objects in the real world. Based on pixel-wise modification of light transport, previous techniques have successfully modified static surface properties such as surface color, dynamic range, gloss, and shading. Here, we propose an alternative light projection technique that adds a variety of illusory yet realistic distortions to a wide range of static 2D and 3D projection targets. The key idea of our technique, referred to as (Deformation Lamps), is to project only dynamic luminance information, which effectively activates the motion (and shape) processing in the visual system while preserving the color and texture of the original object. Although the projected dynamic luminance information is spatially inconsistent with the color and texture of the target object, the observer's brain automatically combines these sensory signals in such a way as to correct the inconsistency across visual attributes. We conducted a psychophysical experiment to investigate the characteristics of the inconsistency correction and found that the correction was critically dependent on the retinal magnitude of the inconsistency. Another experiment showed that the perceived magnitude of image deformation produced by our techniques was underestimated. The results ruled out the possibility that the effect obtained by our technique stemmed simply from the physical change in an object's appearance by light projection. Finally, we discuss how our techniques can make the observers perceive a vivid and natural movement, deformation, or oscillation of a variety of static objects, including drawn pictures, printed photographs, sculptures with 3D shading, and objects with natural textures including human bodies. | Deformation Lamps utilizes a video projector to give dynamic impressions to static objects. 
This technique is related to spatial augmented reality, wherein virtual objects are created in the real world without the viewer having to wear special devices @cite_18 @cite_10 . Past research in this field has come up with a variety of projection methods @cite_32 @cite_25 @cite_29 . Raskar and his colleagues proposed 'Shader Lamps' @cite_25 that can change the appearances of real objects, including their color, texture, and material properties, into those of virtual objects. Bimber and Iwai @cite_17 proposed a light projection method to enhance the luminance contrast and color of printed materials, and later studies have developed algorithms to modify the appearance of real objects by light projection with image compensation @cite_32 @cite_33 . The technique is used to edit the appearance of real objects @cite_39 @cite_47 , and moreover, is also employed to add motion impressions to a static object by projecting a moving pattern @cite_8 @cite_22 . | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_22",
"@cite_8",
"@cite_29",
"@cite_32",
"@cite_39",
"@cite_47",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"1482795035",
"1981852267",
"2167154254",
"2061274677",
"2127844529",
"2086653392",
"1991502657",
"2003147603",
"185983953",
"2101630196",
"2100540864"
],
"abstract": [
"Like virtual reality, augmented reality is becoming an emerging platform in new application areas for museums, edutainment, home entertainment, research, industry, and the art communities using novel approaches which have taken augmented reality beyond traditional eye-worn or hand-held displays. In this book, the authors discuss spatial augmented reality approaches that exploit optical elements, video projectors, holograms, radio frequency tags, and tracking technology, as well as interactive rendering algorithms and calibration techniques in order to embed synthetic supplements into the real environment or into a live video of the real environment. Special Features: - Comprehensive overview - Detailed mathematical equations - Code fragments - Implementation instructions - Examples of Spatial AR displays The authors have put together a preliminary collection of Errata. Updates will be posted to this site as necessary.",
"This article focuses on real-time image correction techniques that enable projector-camera systems to display images onto screens that are not optimized for projections, such as geometrically complex, colored and textured surfaces. It reviews hardware accelerated methods like pixel-precise geometric warping, radiometric compensation, multi-focal projection, and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained. Novel attempts in super-resolution, high dynamic range and high-speed projection are discussed. These techniques open a variety of new applications for projection displays. Some of them will also be presented in this report.",
"Cartoon animations delight the audience with moving characters but they remain on a flat 2D screen. The cartoon dioramas, on the other hand, are detailed, three-dimensional and allow physical interaction but they are static. We present techniques to combine the two in some limited cases. We illuminate static physical models with projectors. The images are generated with real time three dimensional computer graphics. We describe a system to demonstrate various visual effects such as non-photorealistic shading, apparent motion and virtual lighting on a toy-car model.",
"Animated animatronic figures are a unique way to give physical presence to a character. However, their movement and expressions are often limited due to mechanical constraints. In this paper, we propose a complete process for augmenting physical avatars using projector-based illumination, significantly increasing their expressiveness. Given an input animation, the system decomposes the motion into low-frequency motion that can be physically reproduced by the animatronic head and high-frequency details that are added using projected shading. At the core is a spatio-temporal optimization process that compresses the motion in gradient space, ensuring faithful motion replay while respecting the physical limitations of the system. We also propose a complete multi-camera and projection system, including a novel defocused projection and subsurface scattering compensation scheme. The result of our system is a highly expressive physical avatar that features facial details and motion otherwise unattainable due to physical constraints.",
"A new body of research, concerning the divergence of information systems away from today's ubiquitous notion of ‘computer’, is emerging. Though the various threads of inquiry diverge significantly in their approach, each generally seeks some intimate integration of computational resources with its user's immediate or extended environment. We introduce here a new project called the Luminous Room, which uses dynamically controlled video projection to permit information access and manipulation throughout an architectural space. A description of the working components of the system is followed by an explication of the philosophies that provide both motivation and a conceptual scaffolding for the work. Finally, we offer speculations about the way in which formal architecture and ‘environmental’ information systems like the Luminous Room might co-develop.",
"We introduce a new projection display technique that converts a visual material appearance of target object. Unlike conventional projection display, our approach allowed us successive material appearance manipulation by the projector camera feedback without scene modeling. First, we introduce an appearance control framework with a coaxial projector camera system. Next, we introduce two image based material appearance manipulation methods of translucency and glossiness. Last, we verify the ability of the material appearance manipulation of the proposed display technique through the experiments.",
"In this paper, we introduce a system to virtually restore damaged or historically significant objects without needing to physically change the object in any way. Our work addresses both creating a restored synthetic version of the object as viewed from a camera and projecting the necessary light, using digital projectors, to give the illusion of the object being restored. The restoration algorithm uses an energy minimization method to enforce a set of criteria over the surface of the object and provides an interactive tool to the user which can compute a restoration in a few minutes. The visual compensation method develops a formulation that is particularly concerned with obtaining bright compensations under a specified maximum amount of light. The bound on the amount of light is of crucial importance when viewing and restoring old and potentially fragile objects. Finally, we demonstrate our system by restoring several deteriorated and old objects enabling the observer to view the original or restored object at will.",
"We present a system that superimposes multiple projections onto an object of arbitrary shape and color to produce high-resolution appearance changes. Our system produces appearances at an improved resolution compared to prior works and can change appearances at near interactive rates. Three main components are central to our system. First, the problem of computing compensation images is formulated as a constrained optimization which yields high-resolution appearances. Second, decomposition of the target appearance into base and scale images enables fast swapping of appearances on the object by requiring the constrained optimization to be computed only once per object. Finally, to make high-quality appearance edits practical, an elliptical Gaussian is used to model projector pixels and their interaction between projectors. To the best of our knowledge, we build the first system that achieves high-resolution and high-quality appearance edits using multiple superimposed projectors on complex nonplanar colored objects. We demonstrate several appearance edits including specular lighting, subsurface scattering, inter-reflections, and color, texture, and geometry changes on objects with different shapes and colors.",
"",
"We describe a new paradigm for three-dimensional computer graphics, using projectors to graphically animate physical objects in the real world. The idea is to replace a physical object— with its inherent color, texture, and material properties—with a neutral object and projected imagery, reproducing the original (or alternative) appearance directly on the object. Because the approach is to effectively \"lift\" the visual properties of the object into the projector, we call the projectors shader lamps. We address the central issue of complete and continuous illumination of non-trivial physical objects using multiple projectors and present a set of new techniques that makes the process of illumination practical. We demonstrate the viability of these techniques through a variety of table-top applications, and describe preliminary results to reproduce life-sized virtual spaces.",
"We present a simple and cost-efficient way of extending contrast, perceived tonal resolution, and color space of reflective media, such as paper prints, hardcopy photographs, or electronic paper displays. A calibrated projector-camera system is applied for automatic registration, radiometric scanning and superimposition. A second modulation of the projected light on the surface of such media results in a high dynamic range visualization. This holds application potential for a variety of domains, such as radiology, astronomy, optical microscopy, conservation and restoration of historic art, modern art and entertainment installations."
]
} |
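Several of the abstracts in the row above rest on the same pixel-wise light-transport idea: measure how the surface modulates projected light, then solve for the projector image that produces a desired appearance. Below is a minimal sketch of that radiometric-compensation idea under a toy per-pixel model `captured = R * projected + A`; the reflectance and ambient values are illustrative assumptions, not the pipeline of any cited paper.

```python
import numpy as np

def compensate(target, reflectance, ambient):
    """Solve the toy per-pixel forward model captured = R*p + A for p,
    clipping to the projector's displayable range [0, 1]."""
    p = (target - ambient) / np.maximum(reflectance, 1e-6)
    return np.clip(p, 0.0, 1.0)

# Illustrative values: a textured surface lit by ambient light.
reflectance = np.array([[0.9, 0.5], [0.25, 0.8]])
ambient = np.full((2, 2), 0.05)
target = np.full((2, 2), 0.4)   # uniform desired appearance

proj = compensate(target, reflectance, ambient)
captured = reflectance * proj + ambient  # what the camera would observe
```

Under this model the bright pixels reach the target exactly, while the very dark pixel (R = 0.25) saturates the projector and falls short, the limitation that motivates the bounded-light formulations in the restoration work above.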
1509.08037 | 2291981158 | Light projection is a powerful technique that can be used to edit the appearance of objects in the real world. Based on pixel-wise modification of light transport, previous techniques have successfully modified static surface properties such as surface color, dynamic range, gloss, and shading. Here, we propose an alternative light projection technique that adds a variety of illusory yet realistic distortions to a wide range of static 2D and 3D projection targets. The key idea of our technique, referred to as (Deformation Lamps), is to project only dynamic luminance information, which effectively activates the motion (and shape) processing in the visual system while preserving the color and texture of the original object. Although the projected dynamic luminance information is spatially inconsistent with the color and texture of the target object, the observer's brain automatically combines these sensory signals in such a way as to correct the inconsistency across visual attributes. We conducted a psychophysical experiment to investigate the characteristics of the inconsistency correction and found that the correction was critically dependent on the retinal magnitude of the inconsistency. Another experiment showed that the perceived magnitude of image deformation produced by our techniques was underestimated. The results ruled out the possibility that the effect obtained by our technique stemmed simply from the physical change in an object's appearance by light projection. Finally, we discuss how our techniques can make the observers perceive a vivid and natural movement, deformation, or oscillation of a variety of static objects, including drawn pictures, printed photographs, sculptures with 3D shading, and objects with natural textures including human bodies. 
| Deformation Lamps produces apparent image movements not by shifting the position of the object image as conducted in the previous studies, but by adding luminance motion signals that activate the motion sensors in the human visual system. In support of our strategy, past studies have invented several dynamic displays that produce vivid motion sensations without the corresponding position shifts in the image. In the phenomenon known as 'reversed phi' @cite_24 @cite_19 , the perceived motion direction of an object moving across two video frames is reversed when the luminance contrast polarity of the image is reversed. In a display entitled 'Motion without movement' @cite_5 , local phase shifts of the luminance pattern produce the perception of a global motion flow in a direction consistent with the phase shifts. Even static pictures can produce illusory motion sensations when they activate the motion sensors of the human visual system (see, e.g., @cite_26 @cite_11 ), and a graphic technique for automatically generating such illusory motion patterns has been proposed @cite_20 . | {
"cite_N": [
"@cite_26",
"@cite_24",
"@cite_19",
"@cite_5",
"@cite_20",
"@cite_11"
],
"mid": [
"2012598113",
"2159111239",
"2071780812",
"2153157788",
"2144275684",
"2340037749"
],
"abstract": [
"Intensive studies of visual illusion have rarely shown examples of polymorphic responses [1–3]. We show here that, using figures consisting of stripes shaded from dark to light, arranged in repeating sectors, an illusion of movement can be induced in about 75% of observers when viewed peripherally. The responses of the viewers fall into four categories. This polymorphic response suggests a genetic origin.",
"Abstract How similar must two successively presented patterns be for phi movement to be perceived between them? Phi movement between two granular patterns, one being the photographic negative of the other, appeared to be reversed, towards the direction of the earlier stimulus. Moving objects, displayed on a TV picture which was made positive and negative on alternate frames, appeared to move backwards. (The backward movement could generate its own after effect of movement.) Conclusion: phi movement was perceived between nearby points of similar brightness, irrespective of form or colour. Phi movement was studied between two positive random-dot Julesz patterns. Pairs that gave stereo when presented dichoptically also gave phi movement when presented alternately to one eye. When one pattern was degraded with noise, both stereo and phi broke down at the same noise level. Conclusion: phi, like stereo, depended upon point-by-point comparison of brightness between two patterns. It could precede the perception of form.",
"Abstract The visual system usually sees phi apparent movement when two similar pictures are exposed successively, and stereoscopic depth when the pictures are exposed one to each eye. But when a picture was followed via a dissolve by its own photographic negative, overlapping but displaced, strong apparent movement was seen in the opposite direction to the image displacement (“reversed phi”). When both eyes saw a positive picture, and one eye also saw an overlapping low-contrast negative containing binocular disparity, “reversed stereo” was seen, with the apparent depth opposite to the physical disparity. Results were explained with a model of spatial summation by visual receptive fields.",
"We describe a technique for displaying patterns that appear to move continuously without changing their positions. The method uses a quadrature pair of oriented filters to vary the local phase, giving the sensation of motion. We have used this technique in various computer graphic and scientific visualization applications.",
"Illusory motion in a still image is a fascinating research topic in the study of human motion perception. Physiologists and psychologists have attempted to understand this phenomenon by constructing simple, color repeated asymmetric patterns (RAP) and have found several useful rules to enhance the strength of illusory motion. Based on their knowledge, we propose a computational method to generate self-animating images. First, we present an optimized RAP placement on streamlines to generate illusory motion for a given static vector field. Next, a general coloring scheme for RAP is proposed to render streamlines. Furthermore, to enhance the strength of illusion and respect the shape of the region, a smooth vector field with opposite directional flow is automatically generated given an input image. Examples generated by our method are shown as evidence of the illusory effect and the potential applications for entertainment and design purposes.",
""
]
} |
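The 'Motion without movement' abstract above describes rotating the local phase of a pattern with a quadrature pair of oriented filters, so motion sensors are driven while the pattern's envelope never moves. A rough 1-D sketch of that construction follows; the filter sizes, frequencies, and function names are illustrative assumptions, not the original implementation.

```python
import numpy as np

def gabor_pair(n=128, freq=8.0, sigma=0.15):
    """Return an even (cosine) and odd (sine) Gabor filter pair."""
    x = np.linspace(-0.5, 0.5, n)
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def phase_shift_frames(signal, n_frames=8):
    """Animate `signal` by rotating local phase between the quadrature
    responses: each frame advances phase, not position."""
    even, odd = gabor_pair(len(signal))
    e = np.convolve(signal, even, mode="same")
    o = np.convolve(signal, odd, mode="same")
    frames = []
    for t in range(n_frames):
        theta = 2 * np.pi * t / n_frames
        frames.append(np.cos(theta) * e + np.sin(theta) * o)
    return np.array(frames)

rng = np.random.default_rng(0)
frames = phase_shift_frames(rng.standard_normal(128))
```

Played in a loop, such frames give a continuous drift impression even though the band-pass envelope stays fixed, which is the same dissociation of motion signal from position that Deformation Lamps exploits.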
1509.08065 | 2964129681 | Based on the definition of local spectral subspace, we propose a novel approach called LOSP for local overlapping community detection. Using the power method for a few steps, LOSP finds an approximate invariant subspace, which depicts the embedding of the local neighborhood structure around the seeds of interest. LOSP then identifies the local community expanded from the given seeds by seeking a sparse indicator vector in the subspace where the seeds are in its support. We provide a systematic investigation of LOSP, and thoroughly evaluate it on large real world networks across multiple domains. With the prior information of very few seed members, LOSP can detect the remaining members of a target community with high accuracy. Experiments demonstrate that LOSP outperforms the Heat Kernel and PageRank diffusions. Using LOSP as a subroutine, we further address the problem of multiple membership identification, which aims to find all the communities a single vertex belongs to. High F1 scores are achieved in detecting multiple local communities with respect to an arbitrary single seed for various large real world networks. | The random walk technique has been extensively adopted as a subroutine for locally expanding the seed set @cite_22 @cite_24 @cite_21 and it is observed to produce communities highly correlated with the ground-truth communities in real-world networks @cite_25 . PageRank @cite_18 @cite_12 @cite_13 and Heat Kernel @cite_14 @cite_5 @cite_29 are two main techniques for the probability diffusion. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_24",
"@cite_5",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2086254934",
"2016273060",
"2084309732",
"2107499877",
"",
"",
"2171878761",
"",
"",
""
],
"abstract": [
"A local graph partitioning algorithm finds a cut near a specified starting vertex, with a running time that depends largely on the size of the small side of the cut, rather than the size of the input graph. In this paper, we present a local partitioning algorithm using a variation of PageRank with a specified starting distribution. We derive a mixing result for PageRank vectors similar to that for random walks, and show that the ordering of the vertices produced by a PageRank vector reveals a cut with small conductance. In particular, we show that for any set C with conductance φ and volume k, a PageRank vector with a certain starting distribution can be used to produce a set with conductance O(√(φ log k)). We present an improved algorithm for computing approximate PageRank vectors, which allows us to find such a set in time proportional to its size. In particular, we can find a cut with conductance at most φ, whose small side has volume at least 2^b, in time O(2^b log^2 m / φ^2) where m is the number of edges in the graph. By combining small sets found by this local partitioning algorithm, we obtain a cut with conductance φ and approximately optimal balance in time O(m log^4 m / φ^2).",
"The heat kernel is a type of graph diffusion that, like the much-used personalized PageRank diffusion, is useful in identifying a community nearby a starting seed node. We present the first deterministic, local algorithm to compute this diffusion and use that algorithm to study the communities that it produces. Our algorithm is formally a relaxation method for solving a linear system to estimate the matrix exponential in a degree-weighted norm. We prove that this algorithm stays localized in a large graph and has a worst-case constant runtime that depends only on the parameters of the diffusion, not the size of the graph. On large graphs, our experiments indicate that the communities produced by this method have better conductance than those produced by PageRank, although they take slightly longer to compute. On a real-world community identification task, the heat kernel communities perform better than those from the PageRank diffusion.",
"Expanding a seed set into a larger community is a common procedure in link-based analysis. We show how to adapt recent results from theoretical computer science to expand a seed set into a community with small conductance and a strong relationship to the seed, while examining only a small neighborhood of the entire graph. We extend existing results to give theoretical guarantees that apply to a variety of seed sets from specified communities. We also describe simple and flexible heuristics for applying these methods in practice, and present early experiments showing that these methods compare favorably with existing approaches.",
"We present an efficient algorithm for solving linear systems with a boundary condition by computing the Green’s function of a connected induced subgraph S of a graph. Different from previous linear solvers, we introduce the method of using the Dirichlet heat kernel pagerank of the induced graph to approximate the solution to diagonally dominant linear systems satisfying given boundary conditions. Our algorithm runs in time O(1), with the assumption that a unit time allows a step in a random walk or a sampling of a specified distribution, where the big-O term depends on the error term and the boundary condition.",
"",
"",
"The concept of pagerank was first started as a way for determining the ranking of Web pages by Web search engines. Based on relations in interconnected networks, pagerank has become a major tool for addressing fundamental problems arising in general graphs, especially for large information networks with hundreds of thousands of nodes. A notable notion of pagerank, introduced by Brin and Page and denoted by PageRank, is based on random walks as a geometric sum. In this paper, we consider a notion of pagerank that is based on the (discrete) heat kernel and can be expressed as an exponential sum of random walks. The heat kernel satisfies the heat equation and can be used to analyze many useful properties of random walks in a graph. A local Cheeger inequality is established, which implies that, by focusing on cuts determined by linear orderings of vertices using the heat kernel pageranks, the resulting partition is within a quadratic factor of the optimum. This is true, even if we restrict the volume of the small part separated by the cut to be close to some specified target value. This leads to a graph partitioning algorithm for which the running time is proportional to the size of the targeted volume (instead of the size of the whole graph).",
"",
"",
""
]
} |
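The PageRank diffusion and conductance-based cuts referenced throughout the rows above can be illustrated in a few lines: power-iterate a personalized PageRank vector from the seeds, then sweep the degree-normalized ordering for the prefix of minimum conductance. This is a toy sketch (graph, `alpha`, and iteration count are assumptions), not LOSP or the implementation of any cited paper.

```python
import numpy as np

def personalized_pagerank(adj, seeds, alpha=0.15, iters=50):
    """Power iteration of p <- alpha*s + (1-alpha)*W^T p from a seed
    distribution s, where W is the row-stochastic random-walk matrix."""
    deg = adj.sum(axis=1)
    W = adj / deg[:, None]
    s = np.zeros(len(adj)); s[seeds] = 1.0 / len(seeds)
    p = s.copy()
    for _ in range(iters):
        p = alpha * s + (1 - alpha) * W.T @ p
    return p

def sweep_cut(adj, p):
    """Order vertices by p(v)/deg(v) and return the prefix set with
    minimum conductance cut(S)/min(vol(S), vol(V\\S))."""
    deg = adj.sum(axis=1)
    order = np.argsort(-p / deg)
    vol_total = deg.sum()
    best, best_phi = None, np.inf
    for k in range(1, len(order)):
        S = order[:k]
        vol = deg[S].sum()
        cut = adj[np.ix_(S, np.setdiff1d(order, S))].sum()
        phi = cut / min(vol, vol_total - vol)
        if phi < best_phi:
            best, best_phi = set(S.tolist()), phi
    return best, best_phi

# Toy graph: two 4-cliques joined by a single edge; seed in the first clique.
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j: A[i, j] = 1
A[3, 4] = A[4, 3] = 1
community, phi = sweep_cut(A, personalized_pagerank(A, seeds=[0]))
```

On this graph the sweep recovers the seed's clique with conductance 1/13, the kind of low-conductance local community the diffusion methods above are designed to find without touching the whole graph.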
1509.08065 | 2964129681 | Based on the definition of local spectral subspace, we propose a novel approach called LOSP for local overlapping community detection. Using the power method for a few steps, LOSP finds an approximate invariant subspace, which depicts the embedding of the local neighborhood structure around the seeds of interest. LOSP then identifies the local community expanded from the given seeds by seeking a sparse indicator vector in the subspace where the seeds are in its support. We provide a systematic investigation of LOSP, and thoroughly evaluate it on large real world networks across multiple domains. With the prior information of very few seed members, LOSP can detect the remaining members of a target community with high accuracy. Experiments demonstrate that LOSP outperforms the Heat Kernel and PageRank diffusions. Using LOSP as a subroutine, we further address the problem of multiple membership identification, which aims to find all the communities a single vertex belongs to. High F1 scores are achieved in detecting multiple local communities with respect to an arbitrary single seed for various large real world networks. | Spielman and Teng @cite_16 use degree-normalized, personalized PageRank (DN PageRank) with respect to the start seed and truncate small values, leading to the PageRank Nibble method @cite_18 . DN PageRank is adopted by several PageRank-based clustering algorithms @cite_22 @cite_8 , which are competitive with the sophisticated and popular METIS algorithm @cite_3 . Kloumann and Kleinberg @cite_1 evaluate different variations of PageRank, and find that the standard PageRank yields better performance than the DN PageRank. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_16"
],
"mid": [
"2086254934",
"2084309732",
"2068015060",
"1978291048",
"1639284002",
"2045107949"
],
"abstract": [
"A local graph partitioning algorithm finds a cut near a specified starting vertex, with a running time that depends largely on the size of the small side of the cut, rather than the size of the input graph. In this paper, we present a local partitioning algorithm using a variation of PageRank with a specified starting distribution. We derive a mixing result for PageRank vectors similar to that for random walks, and show that the ordering of the vertices produced by a PageRank vector reveals a cut with small conductance. In particular, we show that for any set C with conductance φ and volume k, a PageRank vector with a certain starting distribution can be used to produce a set with conductance O(√(φ log k)). We present an improved algorithm for computing approximate PageRank vectors, which allows us to find such a set in time proportional to its size. In particular, we can find a cut with conductance at most φ, whose small side has volume at least 2^b, in time O(2^b log^2 m / φ^2) where m is the number of edges in the graph. By combining small sets found by this local partitioning algorithm, we obtain a cut with conductance φ and approximately optimal balance in time O(m log^4 m / φ^2).",
"Expanding a seed set into a larger community is a common procedure in link-based analysis. We show how to adapt recent results from theoretical computer science to expand a seed set into a community with small conductance and a strong relationship to the seed, while examining only a small neighborhood of the entire graph. We extend existing results to give theoretical guarantees that apply to a variety of seed sets from specified communities. We also describe simple and flexible heuristics for applying these methods in practice, and present early experiments showing that these methods compare favorably with existing approaches.",
"Nodes in real-world networks, such as social, information or technological networks, organize into communities where edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth. We study a set of 230 large social, collaboration and information networks where nodes explicitly define group memberships. We use these groups to define the notion of ground-truth communities. We then propose a methodology which allows us to compare and quantitatively evaluate different definitions of network communities on a large scale. We choose 13 commonly used definitions of network communities and examine their quality, sensitivity and robustness. We show that the 13 definitions naturally group into four classes. We find that two of these definitions, Conductance and Triad-participation-ratio, consistently give the best performance in identifying ground-truth communities.",
"In many applications we have a social network of people and would like to identify the members of an interesting but unlabeled group or community. We start with a small number of exemplar group members -- they may be followers of a political ideology or fans of a music genre -- and need to use those examples to discover the additional members. This problem gives rise to the seed expansion problem in community detection: given example community members, how can the social graph be used to predict the identities of remaining, hidden community members? In contrast with global community detection (graph partitioning or covering), seed expansion is best suited for identifying communities locally concentrated around nodes of interest. A growing body of work has used seed expansion as a scalable means of detecting overlapping communities. Yet despite growing interest in seed expansion, there are divergent approaches in the literature and there still isn't a systematic understanding of which approaches work best in different domains. Here we evaluate several variants and uncover subtle trade-offs between different approaches. We explore which properties of the seed set can improve performance, focusing on heuristics that one can control in practice. As a consequence of this systematic understanding we have found several opportunities for performance gains. We also consider an adaptive version in which requests are made for additional membership labels of particular nodes, such as one finds in field studies of social communities. This leads to interesting connections and contrasts with active learning and the trade-offs of exploration and exploitation. Finally, we explore topological properties of communities and seed sets that correlate with algorithm performance, and explain these empirical observations with theoretical ones. We evaluate our methods across multiple domains, using publicly available datasets with labeled, ground-truth communities.",
"",
"We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy ε in time linear in their number of non-zeros and log(κ_f(A)/ε), where κ_f(A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning."
]
} |
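The abstracts in the row above describe local partitioning via approximate personalized PageRank vectors computed in time proportional to the output cluster rather than the whole graph. A minimal sketch of the standard push-style approximation (parameter names and tolerances are illustrative, not taken from these papers):

```python
from collections import defaultdict

def approximate_pagerank(graph, seed, alpha=0.15, eps=1e-4):
    """Push-style approximation of a personalized PageRank vector.

    graph: dict mapping node -> list of neighbours (undirected).
    Maintains an estimate p and a residual r, pushing mass from any
    node whose residual exceeds eps * degree; total mass p + r is
    conserved, so the touched region stays local to the seed.
    """
    p, r = defaultdict(float), defaultdict(float)
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(graph[u])
        if r[u] < eps * deg:
            continue  # stale queue entry, already below threshold
        mass = r[u]
        p[u] += alpha * mass                 # keep alpha of the residual
        r[u] = (1.0 - alpha) * mass / 2.0    # lazy self-loop keeps half
        share = (1.0 - alpha) * mass / (2.0 * deg)
        for v in graph[u]:
            r[v] += share                    # spread the rest to neighbours
            if r[v] >= eps * len(graph[v]):
                queue.append(v)
        if r[u] >= eps * deg:
            queue.append(u)
    return dict(p)
```

On a graph with two components, no mass ever reaches the component not containing the seed — the locality these papers exploit.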
1509.08065 | 2964129681 | Based on the definition of local spectral subspace, we propose a novel approach called LOSP for local overlapping community detection. Using the power method for a few steps, LOSP finds an approximate invariant subspace, which depicts the embedding of the local neighborhood structure around the seeds of interest. LOSP then identifies the local community expanded from the given seeds by seeking a sparse indicator vector in the subspace where the seeds are in its support. We provide a systematic investigation on LOSP, and thoroughly evaluate it on large real world networks across multiple domains. With the prior information of very few seed members, LOSP can detect the remaining members of a target community with high accuracy. Experiments demonstrate that LOSP outperforms the Heat Kernel and PageRank diffusions. Using LOSP as a subroutine, we further address the problem of multiple membership identification, which aims to find all the communities a single vertex belongs to. High F1 scores are achieved in detecting multiple local communities with respect to arbitrary single seed for various large real world networks. | The Heat Kernel provides another local graph diffusion @cite_5 @cite_29 @cite_14 , and involves the Taylor series expansion of the matrix exponential of the random walk transition matrix. analyze the property of this diffusion theoretically @cite_5 , and propose a randomized Monte Carlo method to estimate the diffusion @cite_29 . propose a deterministic method that uses coordinate relaxation on an implicit linear system that estimates the Heat Kernel diffusion, and show that Heat Kernel outperforms the personalized PageRank by finding smaller sets with substantially higher F1 measures @cite_14 . | {
"cite_N": [
"@cite_5",
"@cite_29",
"@cite_14"
],
"mid": [
"2171878761",
"2107499877",
"2016273060"
],
"abstract": [
"The concept of pagerank was first started as a way for determining the ranking of Web pages by Web search engines. Based on relations in interconnected networks, pagerank has become a major tool for addressing fundamental problems arising in general graphs, especially for large information networks with hundreds of thousands of nodes. A notable notion of pagerank, introduced by Brin and Page and denoted by PageRank, is based on random walks as a geometric sum. In this paper, we consider a notion of pagerank that is based on the (discrete) heat kernel and can be expressed as an exponential sum of random walks. The heat kernel satisfies the heat equation and can be used to analyze many useful properties of random walks in a graph. A local Cheeger inequality is established, which implies that, by focusing on cuts determined by linear orderings of vertices using the heat kernel pageranks, the resulting partition is within a quadratic factor of the optimum. This is true, even if we restrict the volume of the small part separated by the cut to be close to some specified target value. This leads to a graph partitioning algorithm for which the running time is proportional to the size of the targeted volume (instead of the size of the whole graph).",
"We present an efficient algorithm for solving linear systems with a boundary condition by computing the Green’s function of a connected induced subgraph S of a graph. Different from previous linear solvers, we introduce the method of using the Dirichlet heat kernel pagerank of the induced graph to approximate the solution to diagonally dominant linear systems satisfying given boundary conditions. Our algorithm runs in time O(1), with the assumption that a unit time allows a step in a random walk or a sampling of a specified distribution, where the big-O term depends on the error term and the boundary condition.",
"The heat kernel is a type of graph diffusion that, like the much-used personalized PageRank diffusion, is useful in identifying a community nearby a starting seed node. We present the first deterministic, local algorithm to compute this diffusion and use that algorithm to study the communities that it produces. Our algorithm is formally a relaxation method for solving a linear system to estimate the matrix exponential in a degree-weighted norm. We prove that this algorithm stays localized in a large graph and has a worst-case constant runtime that depends only on the parameters of the diffusion, not the size of the graph. On large graphs, our experiments indicate that the communities produced by this method have better conductance than those produced by PageRank, although they take slightly longer to compute. On a real-world community identification task, the heat kernel communities perform better than those from the PageRank diffusion."
]
} |
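The related-work text above notes that the Heat Kernel diffusion involves the Taylor series expansion of the matrix exponential of the random-walk transition matrix. A dense-matrix sketch of that truncated expansion (the choices of t and n_terms are illustrative; the cited papers use local, matrix-free estimators instead):

```python
import numpy as np

def heat_kernel_diffusion(A, seed, t=2.0, n_terms=50):
    """Truncated Taylor expansion of the heat kernel vector
    h = exp(-t) * sum_k (t^k / k!) P^k s.

    A: dense symmetric adjacency matrix (numpy array);
    P = A D^{-1} is the column-stochastic random-walk matrix,
    so column sums of P are 1 and total mass of h is preserved.
    """
    d = A.sum(axis=0)
    P = A / d                       # divide column j by deg(j)
    s = np.zeros(A.shape[0])
    s[seed] = 1.0
    h = np.zeros(A.shape[0])
    term = np.exp(-t) * s           # k = 0 term: e^{-t} s
    for k in range(1, n_terms + 1):
        h += term
        term = (t / k) * (P @ term)  # next term e^{-t} t^k/k! P^k s
    h += term
    return h
```

Sorting vertices by h (degree-normalized in the papers) and sweeping for low conductance yields the community.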
1509.08065 | 2964129681 | Based on the definition of local spectral subspace, we propose a novel approach called LOSP for local overlapping community detection. Using the power method for a few steps, LOSP finds an approximate invariant subspace, which depicts the embedding of the local neighborhood structure around the seeds of interest. LOSP then identifies the local community expanded from the given seeds by seeking a sparse indicator vector in the subspace where the seeds are in its support. We provide a systematic investigation on LOSP, and thoroughly evaluate it on large real world networks across multiple domains. With the prior information of very few seed members, LOSP can detect the remaining members of a target community with high accuracy. Experiments demonstrate that LOSP outperforms the Heat Kernel and PageRank diffusions. Using LOSP as a subroutine, we further address the problem of multiple membership identification, which aims to find all the communities a single vertex belongs to. High F1 scores are achieved in detecting multiple local communities with respect to arbitrary single seed for various large real world networks. | There are also other local methods based on the random walk technique. For instance, Wu et al. @cite_2 use a variant of the degree normalized, penalized hitting probability to weight the nodes by starting from the query nodes, and define the reciprocal of the weight as the query biased density to effectively reduce the free rider effect that tends to include irrelevant subgraphs in the detected local community. All seed set expansion methods need a stopping criterion for defining the community boundary unless the size of the target community is known. Conductance is commonly recognized as the best stopping criterion due to its intrinsic local property @cite_14 @cite_10 @cite_8 .
Yang and Leskovec provide widely-used real world datasets with labeled ground truth @cite_8 , and find that conductance and triad-participation-ratio (TPR) are the two stopping rules yielding the highest detection accuracy. The Heat Kernel method also adopts conductance as the stopping rule for the local community @cite_14 . | {
"cite_N": [
"@cite_8",
"@cite_14",
"@cite_10",
"@cite_2"
],
"mid": [
"2068015060",
"2016273060",
"2066090568",
"2207622687"
],
"abstract": [
"Nodes in real-world networks, such as social, information or technological networks, organize into communities where edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth. We study a set of 230 large social, collaboration and information networks where nodes explicitly define group memberships. We use these groups to define the notion of ground-truth communities. We then propose a methodology which allows us to compare and quantitatively evaluate different definitions of network communities on a large scale. We choose 13 commonly used definitions of network communities and examine their quality, sensitivity and robustness. We show that the 13 definitions naturally group into four classes. We find that two of these definitions, Conductance and Triad-participation-ratio, consistently give the best performance in identifying ground-truth communities.",
"The heat kernel is a type of graph diffusion that, like the much-used personalized PageRank diffusion, is useful in identifying a community nearby a starting seed node. We present the first deterministic, local algorithm to compute this diffusion and use that algorithm to study the communities that it produces. Our algorithm is formally a relaxation method for solving a linear system to estimate the matrix exponential in a degree-weighted norm. We prove that this algorithm stays localized in a large graph and has a worst-case constant runtime that depends only on the parameters of the diffusion, not the size of the graph. On large graphs, our experiments indicate that the communities produced by this method have better conductance than those produced by PageRank, although they take slightly longer to compute. On a real-world community identification task, the heat kernel communities perform better than those from the PageRank diffusion.",
"Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. One of the most successful techniques for finding overlapping communities is based on local optimization and expansion of a community metric around a seed set of vertices. In this paper, we propose an efficient overlapping community detection algorithm using a seed set expansion approach. In particular, we develop new seeding strategies for a personalized PageRank scheme that optimizes the conductance community score. The key idea of our algorithm is to find good seeds, and then expand these seed sets using the personalized PageRank clustering procedure. Experimental results show that this seed set expansion approach outperforms other state-of-the-art overlapping community detection methods. We also show that our new seeding strategies are better than previous strategies, and are thus effective in finding good overlapping clusters in a graph.",
"Given a large network, local community detection aims at finding the community that contains a set of query nodes and also maximizes (minimizes) a goodness metric. This problem has recently drawn intense research interest. Various goodness metrics have been proposed. However, most existing metrics tend to include irrelevant subgraphs in the detected local community. We refer to such irrelevant subgraphs as free riders. We systematically study the existing goodness metrics and provide theoretical explanations on why they may cause the free rider effect. We further develop a query biased node weighting scheme to reduce the free rider effect. In particular, each node is weighted by its proximity to the query node. We define a query biased density metric to integrate the edge and node weights. The query biased densest subgraph, which has the largest query biased density, will shift to the neighborhood of the query nodes after node weighting. We then formulate the query biased densest connected subgraph (QDC) problem, study its complexity, and provide efficient algorithms to solve it. We perform extensive experiments on a variety of real and synthetic networks to evaluate the effectiveness and efficiency of the proposed methods."
]
} |
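Conductance, the stopping criterion discussed in the row above, is φ(S) = cut(S)/min(vol(S), vol(S̄)); community boundaries are usually chosen by sweeping over a diffusion's vertex ordering and keeping the prefix of minimum conductance. A straightforward sketch (quadratic-time for clarity; production sweeps update cut and volume incrementally):

```python
def conductance(graph, S, total_volume):
    """phi(S) = cut(S, complement) / min(vol(S), vol(complement))."""
    S = set(S)
    vol = sum(len(graph[u]) for u in S)
    cut = sum(1 for u in S for v in graph[u] if v not in S)
    denom = min(vol, total_volume - vol)
    return cut / denom if denom > 0 else 1.0

def sweep_cut(graph, score):
    """Order vertices by decreasing score; return the prefix of
    minimum conductance together with that conductance value."""
    total_volume = sum(len(nbrs) for nbrs in graph.values())
    order = sorted(score, key=score.get, reverse=True)
    best, best_phi = None, float("inf")
    for i in range(1, len(order) + 1):
        phi = conductance(graph, order[:i], total_volume)
        if phi < best_phi:
            best, best_phi = order[:i], phi
    return best, best_phi
```

Because cut and volume depend only on edges incident to S, the criterion can be evaluated without ever touching the rest of the graph — the "intrinsic local property" cited above.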
1509.08065 | 2964129681 | Based on the definition of local spectral subspace, we propose a novel approach called LOSP for local overlapping community detection. Using the power method for a few steps, LOSP finds an approximate invariant subspace, which depicts the embedding of the local neighborhood structure around the seeds of interest. LOSP then identifies the local community expanded from the given seeds by seeking a sparse indicator vector in the subspace where the seeds are in its support. We provide a systematic investigation on LOSP, and thoroughly evaluate it on large real world networks across multiple domains. With the prior information of very few seed members, LOSP can detect the remaining members of a target community with high accuracy. Experiments demonstrate that LOSP outperforms the Heat Kernel and PageRank diffusions. Using LOSP as a subroutine, we further address the problem of multiple membership identification, which aims to find all the communities a single vertex belongs to. High F1 scores are achieved in detecting multiple local communities with respect to arbitrary single seed for various large real world networks. | The seeding strategy is a key component for seed set expansion algorithms. GCE selects maximal cliques as the seeds @cite_27 . discover that an independent set of high-degree vertices, which they called "spread hubs", outperforms Graclus centers, local egonets, and random seeding strategies @cite_10 . Kloumann and Kleinberg @cite_1 compare random seeds with high-degree seeds, find that random seeds are superior, and suggest that domain experts provide seeds with a diverse degree distribution. | {
"cite_N": [
"@cite_1",
"@cite_27",
"@cite_10"
],
"mid": [
"1978291048",
"",
"2066090568"
],
"abstract": [
"In many applications we have a social network of people and would like to identify the members of an interesting but unlabeled group or community. We start with a small number of exemplar group members -- they may be followers of a political ideology or fans of a music genre -- and need to use those examples to discover the additional members. This problem gives rise to the seed expansion problem in community detection: given example community members, how can the social graph be used to predict the identities of remaining, hidden community members? In contrast with global community detection (graph partitioning or covering), seed expansion is best suited for identifying communities locally concentrated around nodes of interest. A growing body of work has used seed expansion as a scalable means of detecting overlapping communities. Yet despite growing interest in seed expansion, there are divergent approaches in the literature and there still isn't a systematic understanding of which approaches work best in different domains. Here we evaluate several variants and uncover subtle trade-offs between different approaches. We explore which properties of the seed set can improve performance, focusing on heuristics that one can control in practice. As a consequence of this systematic understanding we have found several opportunities for performance gains. We also consider an adaptive version in which requests are made for additional membership labels of particular nodes, such as one finds in field studies of social communities. This leads to interesting connections and contrasts with active learning and the trade-offs of exploration and exploitation. Finally, we explore topological properties of communities and seed sets that correlate with algorithm performance, and explain these empirical observations with theoretical ones. We evaluate our methods across multiple domains, using publicly available datasets with labeled, ground-truth communities.",
"",
"Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. One of the most successful techniques for finding overlapping communities is based on local optimization and expansion of a community metric around a seed set of vertices. In this paper, we propose an efficient overlapping community detection algorithm using a seed set expansion approach. In particular, we develop new seeding strategies for a personalized PageRank scheme that optimizes the conductance community score. The key idea of our algorithm is to find good seeds, and then expand these seed sets using the personalized PageRank clustering procedure. Experimental results show that this seed set expansion approach outperforms other state-of-the-art overlapping community detection methods. We also show that our new seeding strategies are better than previous strategies, and are thus effective in finding good overlapping clusters in a graph."
]
} |
1509.08172 | 2226779634 | This article extends the results of Fang & Zeitouni (2012a) on branching random walks (BRWs) with Gaussian increments in time inhomogeneous environments. We treat the case where the variance of the increments changes a finite number of times at different scales in [0,1] under a slight restriction. We find the asymptotics of the maximum up to an O_P(1) error and show how the profile of the variance influences the leading order and the logarithmic correction term. A more general result was independently obtained by Mallein (2015b) when the law of the increments is not necessarily Gaussian. However, the proof we present here generalizes the approach of Fang & Zeitouni (2012a) instead of using the spinal decomposition of the BRW. As such, the proof is easier to understand and more robust in the presence of an approximate branching structure. | The first order of the maximum (without restriction) was proved in Section 2 of @cite_11 for the @math -BRW and in @cite_3 for the analogous model of scale-inhomogeneous Gaussian free field (GFF). The proofs rely on an analysis of so-called "optimal paths" showing where the maximal particle must be at all times with high probability. These paths were found by a first moment heuristic and the resolution of a related optimisation problem (using the Karush-Kuhn-Tucker theorem). | {
"cite_N": [
"@cite_3",
"@cite_11"
],
"mid": [
"1954191069",
"2229312434"
],
"abstract": [
"In this paper, we study a random field constructed from the two-dimensional Gaussian free field (GFF) by modifying the variance along the scales in the neighborhood of each point. The construction can be seen as a local martingale transform and is akin to the time-inhomogeneous branching random walk. In the case where the variance takes finitely many values, we compute the first order of the maximum and the log-number of high points. These quantities were obtained by Bolthausen, Deuschel and Giacomin (2001) and Daviaud (2006) when the variance is constant on all scales. The proof relies on a truncated second moment method proposed by Kistler (2015), which streamlines the proof of the previous results. We also discuss possible extensions of the construction to the continuous GFF.",
"See the thesis's bibliography for the references in the summary."
]
} |
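In the Gaussian setting of the row above, the first-moment heuristic behind these optimal paths leads to a constrained optimisation of the following standard form (a sketch only; here σ²(s) is the variance profile, log 2 is the binary branching rate, and the exact formulation varies across the cited papers):

```latex
\[
  \lim_{n\to\infty} \frac{M_n}{n}
  = \sup\Big\{ F(1) \;:\; F(0)=0,\;
      \int_0^t \frac{F'(s)^2}{2\,\sigma^2(s)}\,ds \le t\log 2
      \ \ \text{for all } t\in[0,1] \Big\}.
\]
```

The pathwise constraint encodes that, at every intermediate time t, the expected number of particles following the path F up to t must stay positive; the Karush-Kuhn-Tucker conditions then identify the optimal F.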
1509.08172 | 2226779634 | This article extends the results of Fang & Zeitouni (2012a) on branching random walks (BRWs) with Gaussian increments in time inhomogeneous environments. We treat the case where the variance of the increments changes a finite number of times at different scales in [0,1] under a slight restriction. We find the asymptotics of the maximum up to an O_P(1) error and show how the profile of the variance influences the leading order and the logarithmic correction term. A more general result was independently obtained by Mallein (2015b) when the law of the increments is not necessarily Gaussian. However, the proof we present here generalizes the approach of Fang & Zeitouni (2012a) instead of using the spinal decomposition of the BRW. As such, the proof is easier to understand and more robust in the presence of an approximate branching structure. | The more involved question of finding the second order of the maximum was first solved by @cite_15 for the case @math and @math , and later by @cite_21 when the law of the increments changes a finite number of times but is not necessarily Gaussian. In his proof, Mallein develops a time-inhomogeneous version of the spinal decomposition for the BRW. The argument presented in this paper was first developed, without the knowledge of Mallein's results, in Section 2.4 of @cite_11 and instead generalizes the approach of @cite_15 . The proof relies on the control of the increments of high points at every @math . | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_11"
],
"mid": [
"2159987645",
"1550653623",
"2229312434"
],
"abstract": [
"We study the maximal displacement of branching random walks in a class of time inhomogeneous environments. Specifically, binary branching random walks with Gaussian increments will be considered, where the variances of the increments change over time macroscopically. We find the asymptotics of the maximum up to an @math (stochastically bounded) error, and focus on the following phenomena: the profile of the variance matters, both to the leading (velocity) term and to the logarithmic correction term, and the latter exhibits a phase transition.",
"In this article, we study a branching random walk in an environment which depends on the time. This time-inhomogeneous environment consists of a sequence of macroscopic time intervals, in each of which the law of reproduction remains constant. We prove that the asymptotic behaviour of the maximal displacement in this process consists of a first ballistic order, given by the solution of an optimization problem under constraints, a negative logarithmic correction, plus stochastically bounded fluctuations.",
"See the thesis's bibliography for the references in the summary."
]
} |
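A tiny Monte Carlo sketch of the model in these rows — a binary BRW with a two-piece Gaussian variance profile (the profile values 2.0/1.0 and the generation count are illustrative only, not parameters from the cited papers):

```python
import random

def brw_maximum(n, sigma_profile, rng):
    """Maximal displacement at generation n of a binary branching random
    walk whose increment at generation k is Gaussian with standard
    deviation sigma_profile(k / n).  Tracks the full generation, i.e.
    2^n particles, so only small n is feasible."""
    positions = [0.0]
    for k in range(n):
        sigma = sigma_profile(k / n)
        positions = [x + rng.gauss(0.0, sigma)
                     for x in positions
                     for _ in range(2)]       # binary branching
    return max(positions)

# Two macroscopic phases: sigma_1 = 2 on [0, 1/2), sigma_2 = 1 on [1/2, 1].
profile = lambda s: 2.0 if s < 0.5 else 1.0
```

Swapping the order of the two phases (increasing versus decreasing variance) changes both the leading order and the logarithmic correction of the maximum, which is the phenomenon the abstracts above describe.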