aid: string (9-15 chars)
mid: string (7-10 chars)
abstract: string (78-2.56k chars)
related_work: string (92-1.77k chars)
ref_abstract: dict
1408.5516
1635415043
Hierarchies allow feature sharing between objects at multiple levels of representation, can code exponential variability in a very compact way and enable fast inference. This makes them potentially suitable for learning and recognizing a higher number of object classes. However, the success of the hierarchical approaches so far has been hindered by the use of hand-crafted features or predetermined grouping rules. This paper presents a novel framework for learning a hierarchical compositional shape vocabulary for representing multiple object classes. The approach takes simple contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly more complex and class-specific shape compositions, each exerting a high degree of shape variability. At the top-level of the vocabulary, the compositions are sufficiently large and complex to represent the whole shapes of the objects. We learn the vocabulary layer after layer, by gradually increasing the size of the window of analysis and reducing the spatial resolution at which the shape configurations are learned. The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another. The experimental results show that the learned multi-class object representation scales favorably with the number of object classes and achieves a state-of-the-art detection performance at both, faster inference as well as shorter training times.
Ommer and Buhmann @cite_39 proposed an unsupervised hierarchical learning approach, which has been successfully utilized for object classification. The features at each layer are defined as histograms over a larger, spatially constrained area. Our approach explicitly models the spatial relations among the features, which should allow for a more reliable detection of objects with lower sensitivity to background clutter.
{ "cite_N": [ "@cite_39" ], "mid": [ "2171108400" ], "abstract": [ "The compositional nature of visual objects significantly limits their representation complexity and renders learning of structured object models tractable. Adopting this modeling strategy we both (i) automatically decompose objects into a hierarchy of relevant compositions and we (ii) learn such a compositional representation for each category without supervision. The compositional structure supports feature sharing already on the lowest level of small image patches. Compositions are represented as probability distributions over their constituent parts and the relations between them. The global shape of objects is captured by a graphical model which combines all compositions. Inference based on the underlying statistical model is then employed to obtain a category level object recognition system. Experiments on large standard benchmark datasets underline the competitive recognition performance of this approach and they provide insights into the learned compositional structure of objects." ] }
1408.5516
1635415043
Hierarchies allow feature sharing between objects at multiple levels of representation, can code exponential variability in a very compact way and enable fast inference. This makes them potentially suitable for learning and recognizing a higher number of object classes. However, the success of the hierarchical approaches so far has been hindered by the use of hand-crafted features or predetermined grouping rules. This paper presents a novel framework for learning a hierarchical compositional shape vocabulary for representing multiple object classes. The approach takes simple contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly more complex and class-specific shape compositions, each exerting a high degree of shape variability. At the top-level of the vocabulary, the compositions are sufficiently large and complex to represent the whole shapes of the objects. We learn the vocabulary layer after layer, by gradually increasing the size of the window of analysis and reducing the spatial resolution at which the shape configurations are learned. The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another. The experimental results show that the learned multi-class object representation scales favorably with the number of object classes and achieves a state-of-the-art detection performance at both, faster inference as well as shorter training times.
Our approach is also related to the discriminatively trained grammars by @cite_60 , developed after our original work was published. Like us, this approach models objects with deformable parts and subparts, the weights of which are trained using structured prediction. This approach has achieved impressive results for object detection in the past years. Its main drawback, however, is that the structure of the grammar needs to be specified by hand, which is what we want to avoid here.
{ "cite_N": [ "@cite_60" ], "mid": [ "2153185908" ], "abstract": [ "Compositional models provide an elegant formalism for representing the visual appearance of highly variable objects. While such models are appealing from a theoretical point of view, it has been difficult to demonstrate that they lead to performance advantages on challenging datasets. Here we develop a grammar model for person detection and show that it outperforms previous high-performance systems on the PASCAL benchmark. Our model represents people using a hierarchy of deformable parts, variable structure and an explicit model of occlusion for partially visible objects. To train the model, we introduce a new discriminative framework for learning structured prediction models from weakly-labeled data." ] }
1408.5082
2079098913
The q-composite key predistribution scheme [1] is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Prior work [2]-[4] explores topological properties of WSNs employing the q-composite scheme for q = 1 with unreliable communication links modeled as independent on off channels. In this paper, we investigate topological properties related to the node degree in WSNs operating under the q-composite scheme and the on off channel model. Our results apply to general q and are stronger than those reported for the node degree in prior work even for the case of q being 1. Specifically, we show that the number of nodes with certain degree asymptotically converges in distribution to a Poisson random variable, present the asymptotic probability distribution for the minimum degree of the network, and establish the asymptotically exact probability for the property that the minimum degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings.
For graph @math , Bloznelis @cite_23 demonstrates that a connected component with at least a constant fraction of @math emerges asymptotically when the edge probability @math exceeds @math . Bloznelis and Łuczak @cite_21 have recently considered connectivity and perfect matching. Still in @math , Bloznelis @cite_17 investigates assortativity and clustering, while Bloznelis @cite_44 analyzes the clustering coefficient and the asymptotic degree distribution of a typical node. We @cite_22 compute the probability distribution for the minimum node degree. Recently, Bloznelis and Rybarczyk @cite_19 and we @cite_20 have derived the asymptotically exact probability of @math -connectivity. Several variants or generalizations of graph @math are also considered in the literature @cite_23 @cite_44 @cite_17 @cite_27 .
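To make the graph model concrete, here is a minimal Python sketch (an illustration only, not code from any cited paper; all parameter values are assumptions) of the uniform random intersection graph described above: each of n nodes receives d keys drawn from a pool of m, and two nodes are adjacent when they share at least s keys. The script reports the fraction of nodes in the largest connected component.

```python
import random
from itertools import combinations

import networkx as nx

def uniform_random_intersection_graph(n, m, d, s, seed=None):
    """Each of n nodes gets d distinct keys from a pool of m;
    two nodes are joined when their key rings share at least s keys."""
    rng = random.Random(seed)
    rings = [frozenset(rng.sample(range(m), d)) for _ in range(n)]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i, j in combinations(range(n), 2):
        if len(rings[i] & rings[j]) >= s:
            g.add_edge(i, j)
    return g

if __name__ == "__main__":
    # Illustrative parameters only (not taken from the cited works).
    g = uniform_random_intersection_graph(n=2000, m=10_000, d=10, s=1, seed=0)
    giant = max(nx.connected_components(g), key=len)
    print(f"edges: {g.number_of_edges()}, "
          f"largest-component fraction: {len(giant) / g.number_of_nodes():.3f}")
```

Sweeping d (and hence the induced edge probability) across the threshold quoted above should show the largest component jumping from a vanishing to a constant fraction of the nodes.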
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_44", "@cite_19", "@cite_27", "@cite_23", "@cite_20", "@cite_17" ], "mid": [ "", "2020272293", "2145244355", "", "1899546701", "2135291682", "", "2150191198" ], "abstract": [ "", "Let W1,…,Wn be independent random subsets of [m]= 1,…,m . Assuming that each Wi is uniformly distributed in the class of d-subsets of [m] we study the uniform random intersection graph Gs(n,m,d) on the vertex set W1,…Wn , defined by the adjacency relation: Wi∼Wj whenever |Wi∩Wj|≧s. For even n we show that as n,m→∞ the edge density threshold for the property that Gs(n,m,d) contains a perfect matching is asymptotically the same as that for Gs(n,m,d) being connected.", "We establish asymptotic vertex degree distribution and examine its relation to the clustering coecient in two popular random intersection graph models of Godehardt and Jaworski (2001). For sparse graphs with positive clustering coecient, we examine statistical dependence between the (local) clustering coecient and the degree. Our results are mathematically rigorous. They are consistent with the empirical observation of (2011) that correlates negatively with degree.\" Moreover, they explain empirical results on k 1 scaling of the local clustering coecient of a vertex of degree k reported in Ravasz and Barab asi (2003).", "", "Random intersection graphs (RIGs) are an important random structure with algorithmic applications in social networks, epidemic networks, blog readership, and wireless sensor networks. RIGs can be interpreted as a model for large randomly formed non-metric data sets. We analyze the component evolution in general RIGs, giving conditions on the existence and uniqueness of the giant component. Our techniques generalize existing methods for analysis of component evolution: we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts of the study of component evolution in Erdős-Renyi graphs. The major challenge comes from the underlying structure of RIGs, which involves both a set of nodes and a set of attributes, with different probabilities associated with each attribute.", "We study a connectivity property of a secure wireless network that uses random pre-distribution of keys. A network is composed of n sensors. Each sensor is assigned a collection of d different keys drawn uniformly at random from a given set of m keys. Two sensors are joined by a communication link if they share a common key. We show that for large n with high probability the connected component of size Ω(n) emerges in the network when the probability of a link exceeds the threshold 1-n. Similar component evolution is shown for networks where sensors communicate if they share at least s common keys. © 2008 Wiley Periodicals, Inc. NETWORKS, 2009", "", "We consider sparse random intersection graphs with the property that the clustering coefficient does not vanish as the number of nodes tends to infinity. We find explicit asymptotic expressions for the correlation coefficient of degrees of adjacent nodes (called the assortativity coefficient), the expected number of common neighbours of adjacent nodes, and the expected degree of a neighbour of a node of a given degree k. These expressions are written in terms of the asymptotic degree distribution and, alternatively, in terms of the parameters defining the underlying random graph model." ] }
1408.5082
2079098913
The q-composite key predistribution scheme [1] is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Prior work [2]-[4] explores topological properties of WSNs employing the q-composite scheme for q = 1 with unreliable communication links modeled as independent on off channels. In this paper, we investigate topological properties related to the node degree in WSNs operating under the q-composite scheme and the on off channel model. Our results apply to general q and are stronger than those reported for the node degree in prior work even for the case of q being 1. Specifically, we show that the number of nodes with certain degree asymptotically converges in distribution to a Poisson random variable, present the asymptotic probability distribution for the minimum degree of the network, and establish the asymptotically exact probability for the property that the minimum degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings.
When @math , for graph @math (also referred to as a random key graph @cite_16 @cite_31 @cite_13 or a uniform random intersection graph @cite_24 @cite_26 ) and some of its variants, a number of properties have been extensively studied in the literature including component evolution @cite_40 , connectivity @cite_24 @cite_31 @cite_26 , @math -connectivity @cite_36 @cite_48 , node degree distribution @cite_51 @cite_47 @cite_49 @cite_38 and independent sets @cite_3 @cite_54 .
{ "cite_N": [ "@cite_38", "@cite_47", "@cite_26", "@cite_36", "@cite_48", "@cite_54", "@cite_3", "@cite_24", "@cite_40", "@cite_49", "@cite_51", "@cite_31", "@cite_16", "@cite_13" ], "mid": [ "", "2963821101", "2008111483", "1902204033", "2060271434", "2059112208", "2029907394", "2063090892", "", "2106822743", "1980441549", "2041282222", "2000444110", "1608861364" ], "abstract": [ "", "We show the asymptotic degree distribution of the typical vertex of a sparse inhomogeneous random intersection graph.", "A uniform random intersection graphG(n,m,k) is a random graph constructed as follows. Label each of n nodes by a randomly chosen set of k distinct colours taken from some finite set of possible colours of size m. Nodes are joined by an edge if and only if some colour appears in both their labels. These graphs arise in the study of the security of wireless sensor networks, in particular when modelling the network graph of the well-known key predistribution technique due to Eschenauer and Gligor. The paper determines the threshold for connectivity of the graph G(n,m,k) when n-> in many situations. For example, when k is a function of n such that k>=2 and [email protected]?n^@[email protected]? for some fixed positive real number @a then G(n,m,k) is almost surely connected when lim infk^2n mlogn>1, and G(n,m,k) is almost surely disconnected when lim supk^2n mlogn<1.", "We present a new method which enables us to find threshold functions for many properties in random intersection graphs. This method is used to establish sharp threshold functions in random intersection graphs for @math –connectivity, perfect matching containment and Hamilton cycle containment.", "Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature.", "This paper concerns constructing independent sets in a random intersection graph. We concentrate on two cases of the model: a binomial and a uniform random intersection graph. For both models we analyse two greedy algorithms and prove that they find asymptotically almost optimal independent sets. We provide detailed analysis of the presented algorithms and give tight bounds on the independence number for the studied models. 
Moreover we determine the range of parameters for which greedy algorithms give better results for a random intersection graph than this is in the case of an Erd?s-Renyi random graph G ( n , p ? ) .", "We investigate the existence and efficient algorithmic construction of close to optimal independent sets in random models of intersection graphs. In particular, (a) we propose a new model for random intersection graphs (G\"n\",\"m\",\"p\"->) which includes the model of [M. Karonski, E.R. Scheinerman, K.B. Singer-Cohen, On random intersection graphs: The subgraph problem, Combinatorics, Probability and Computing journal 8 (1999), 131-159] (the ''uniform'' random intersection graph models) as an important special case. We also define an interesting variation of the model of random intersection graphs, similar in spirit to random regular graphs. (b) For this model we derive exact formulae for the mean and variance of the number of independent sets of size k (for any k) in the graph. (c) We then propose and analyse three algorithms for the efficient construction of large independent sets in this model. The first two are variations of the greedy technique while the third is a totally new algorithm. Our algorithms are analysed for the special case of uniform random intersection graphs. Our analyses show that these algorithms succeed in finding close to optimal independent sets for an interesting range of graph parameters.", "We study properties of the uniform random intersection graph model G(n,m,d). We find asymptotic estimates on the diameter of the largest connected component of the graph near the phase transition and connectivity thresholds. Moreover we manage to prove an asymptotically tight bound for the connectivity and phase transition thresholds for all possible ranges of d, which has not been obtained before. The main motivation of our research is the usage of the random intersection graph model in the studies of wireless sensor networks.", "", "A random intersection graph is constructed by assigning independently to each vertex a subset of a given set and drawing an edge between two vertices if and only if their respective subsets intersect. In this article a model is developed in which each vertex is given a random weight and vertices with larger weights are more likely to be assigned large subsets. The distribution of the degree of a given vertex is characterized and is shown to depend on the weight of the vertex. In particular, if the weight distribution is a power law, the degree distribution will be as well. Furthermore, an asymptotic expression for the clustering in the graph is derived. By tuning the parameters of the model, it is possible to generate a graph with arbitrary clustering, expected degree, and—in the power-law case—tail exponent.", "In this paper we consider the degree of a typical vertex in two models of random intersection graphs introduced in [E. Godehardt, J. Jaworski, Two models of random intersection graphs for classification, in: M. Schwaiger, O. Opitz (Eds.), Exploratory Data Analysis in Empirical Research, Proceedings of the 25th Annual Conference of the Gesellschaft fur Klassifikation e.V., University of Munich, March 14-16, 2001, Springer, Berlin, Heidelberg, New York, 2002, pp. 67-81], the active and passive models. The active models are those for which vertices are assigned a random subset of a list of objects and two vertices are made adjacent when their subsets intersect. 
We prove sufficient conditions for vertex degree to be asymptotically Poisson as well as closely related necessary conditions. We also consider the passive model of intersection graphs, in which objects are vertices and two objects are made adjacent if there is at least one vertex in the corresponding active model ''containing'' both objects. We prove a necessary condition for vertex degree to be asymptotically Poisson for passive intersection graphs.", "The random key graph is a random graph naturally associated with the random key predistribution scheme introduced by Eschenauer and Gligor in the context of wireless sensor networks (WSNs). For this class of random graphs, we establish a new version of a conjectured zero-one law for graph connectivity as the number of nodes becomes unboundedly large. The results reported here complement and strengthen recent work on this conjecture by Blackburn and Gerke. In particular, the results are given under conditions which are more realistic for applications to WSNs.", "In a random key graph (RKG) of n nodes each node is randomly assigned a key ring of Kn cryptographic keys from a pool of Pn keys. Two nodes can communicate directly if they have at least one common key in their key rings. We assume that the n nodes are distributed uniformly in [0, l]2. In addition to the common key requirement, we require two nodes to also be within rn of each other to be able to have a direct edge. Thus we have a random graph in which the RKG is superposed on the familiar random geometric graph (RGG). For such a random graph, we obtain tight bounds on the relation between Kn, Pn and rn for the graph to be asymptotically almost surely connected.", "The notion of the random key graph, which originally appeared in models of secure communication in wireless sensor networks, has been used in other applications, some of which are unrelated to cryptographic-key predistribution or sensor networks. In this presentation, I will outline some of these applications, which exploit the connectivity property of random key graphs and its similarity with that of random graphs. I’d like to start with the zero-one law for randomgraph connectivity, then explain how (i.e., for what graph parameters) this law appears in random key graphs. Then, I will present three brief encounters with random-key-graph properties in new settings and perhaps speculate on other types of useful properties they might have." ] }
1408.5082
2079098913
The q-composite key predistribution scheme [1] is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Prior work [2]-[4] explores topological properties of WSNs employing the q-composite scheme for q = 1 with unreliable communication links modeled as independent on off channels. In this paper, we investigate topological properties related to the node degree in WSNs operating under the q-composite scheme and the on off channel model. Our results apply to general q and are stronger than those reported for the node degree in prior work even for the case of q being 1. Specifically, we show that the number of nodes with certain degree asymptotically converges in distribution to a Poisson random variable, present the asymptotic probability distribution for the minimum degree of the network, and establish the asymptotically exact probability for the property that the minimum degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings.
In graph @math , Yağan @cite_35 presents zero-one laws for connectivity and for the property that the minimum degree is at least @math . We extend Yağan's results to general @math for @math in @cite_6 @cite_33 .
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_6" ], "mid": [ "2165958223", "1868819612", "2056254754" ], "abstract": [ "We investigate the secure connectivity of wireless sensor networks under the random key distribution scheme of Eschenauer and Gligor. Unlike recent work which was carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as on off channels. We present conditions on how to scale the model parameters so that the network: 1) has no secure node which is isolated and 2) is securely connected, both with high probability when the number of sensor nodes becomes large. The results are given in the form of full zero-one laws, and constitute the first complete analysis of the EG scheme under non-full visibility. Through simulations, these zero-one laws are shown to be valid also under a more realistic communication model (i.e., the disk model). The relations to the Gupta and Kumar's conjecture on the connectivity of geometric random graphs with randomly deleted edges are also discussed.", "Random key predistribution scheme of Eschenauer and Gligor (EG) is a typical solution for ensuring secure communications in a wireless sensor network (WSN). Connectivity of the WSNs under this scheme has received much interest over the last decade, and most of the existing work is based on the assumption of unconstrained sensor-to-sensor communications. In this paper, we study the k-connectivity of WSNs under the EG scheme with physical link constraints; k-connectivity is defined as the property that the network remains connected despite the failure of any (k - 1) sensors. We use a simple communication model, where unreliable wireless links are modeled as independent on off channels, and derive zero-one laws for the properties that i) the WSN is k-connected, and ii) each sensor is connected to at least k other sensors. These zero-one laws improve the previous results by Rybarczyk on the k-connectivity under a fully connected communication model. Moreover, under the on off channel model, we provide a stronger form of the zero-one law for the 1-connectivity as compared to that given by Ya g an. We also discuss the applicability of our results in a different network application, namely in a large-scale, distributed publish-subscribe service for online social networks.", "Random key predistribution scheme of Eschenauer and Gligor (EG) is a typical solution for ensuring secure communications in a wireless sensor network (WSN). Connectivity of the WSNs under this scheme has received much interest over the last decade, and most of the existing work is based on the assumption of unconstrained sensor-to-sensor communications. In this paper, we study the k-connectivity of WSNs under the EG scheme with physical link constraints; k-connectivity is defined as the property that the network remains connected despite the failure of any (k - 1) sensors. We use a simple communication model, where unreliable wireless links are modeled as independent on off channels, and derive zero-one laws for the properties that i) the WSN is k-connected, and ii) each sensor is connected to at least k other sensors. These zero-one laws improve the previous results by Rybarczyk on the k-connectivity under a fully connected communication model. Moreover, under the on off channel model, we provide a stronger form of the zero-one law for the 1-connectivity as compared to that given by Yagan." ] }
1408.5082
2079098913
The q-composite key predistribution scheme [1] is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Prior work [2]-[4] explores topological properties of WSNs employing the q-composite scheme for q = 1 with unreliable communication links modeled as independent on off channels. In this paper, we investigate topological properties related to the node degree in WSNs operating under the q-composite scheme and the on off channel model. Our results apply to general q and are stronger than those reported for the node degree in prior work even for the case of q being 1. Specifically, we show that the number of nodes with certain degree asymptotically converges in distribution to a Poisson random variable, present the asymptotic probability distribution for the minimum degree of the network, and establish the asymptotically exact probability for the property that the minimum degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings.
Krishnan @cite_16 and Krzywdziński and Rybarczyk @cite_4 describe results for the probability of connectivity asymptotically converging to 1 in WSNs employing the @math -composite key predistribution scheme with @math (i.e., the basic Eschenauer-Gligor key predistribution scheme), not under the on/off channel model but under the well-known disk model @cite_29 @cite_7 @cite_8 @cite_10 @cite_39 @cite_43 , where nodes are distributed over a bounded region of a Euclidean plane, and two nodes have to be within a certain distance of each other to communicate. Simulation results in our work @cite_33 indicate that for WSNs under the key predistribution scheme with @math , when the on/off channel model is replaced by the disk model, the performance for @math -connectivity and for the property that the minimum degree is at least @math does not change significantly.
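As a rough sketch of the disk model mentioned here (an illustrative toy, not the simulation setup of @cite_33 or any other cited work; all parameters are assumptions), the following code places nodes uniformly at random in the unit square, assigns key rings as in the q-composite scheme, links two nodes only if they share at least q keys and lie within distance r of each other, and reports the minimum node degree.

```python
import math
import random
from itertools import combinations

def q_composite_disk_graph(n, pool_size, ring_size, q, radius, seed=None):
    """q-composite key predistribution superposed on the disk model:
    an edge requires at least q shared keys AND Euclidean distance <= radius."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]          # unit square
    rings = [frozenset(rng.sample(range(pool_size), ring_size)) for _ in range(n)]
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if (len(rings[i] & rings[j]) >= q
                and math.dist(pos[i], pos[j]) <= radius):
            adj[i].add(j)
            adj[j].add(i)
    return adj

if __name__ == "__main__":
    # Illustrative parameters only.
    adj = q_composite_disk_graph(n=1000, pool_size=5000, ring_size=40,
                                 q=2, radius=0.15, seed=1)
    print("minimum node degree:", min(len(nbrs) for nbrs in adj.values()))
```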
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_7", "@cite_8", "@cite_29", "@cite_39", "@cite_43", "@cite_16", "@cite_10" ], "mid": [ "29479272", "1868819612", "", "1918250237", "", "2170763206", "2158348301", "2000444110", "2037549896" ], "abstract": [ "In the article we study important properties of random geometric graphs with randomly deleted edges which are natural models of wireless ad hoc networks with communication constraints. We concentrate on two problems which are most important in the context of theoretical studies on wireless ad hoc networks. The first is how to set parameters of the network (graph) to have it connected. The second is the problem of an effective message transmition i.e. the problem of construction of routing protocols in wireless networks. We provide a thorough mathematical analysis of connectivity property and a greedy routing protocol. The models we use are: an intersection of a random geometric graph with an Erdos-Renyi random graph and an intersection of a random geometric graph with a uniform random intersection graph. The obtained results are asymptotically tight up to a constant factor.", "Random key predistribution scheme of Eschenauer and Gligor (EG) is a typical solution for ensuring secure communications in a wireless sensor network (WSN). Connectivity of the WSNs under this scheme has received much interest over the last decade, and most of the existing work is based on the assumption of unconstrained sensor-to-sensor communications. In this paper, we study the k-connectivity of WSNs under the EG scheme with physical link constraints; k-connectivity is defined as the property that the network remains connected despite the failure of any (k - 1) sensors. We use a simple communication model, where unreliable wireless links are modeled as independent on off channels, and derive zero-one laws for the properties that i) the WSN is k-connected, and ii) each sensor is connected to at least k other sensors. These zero-one laws improve the previous results by Rybarczyk on the k-connectivity under a fully connected communication model. Moreover, under the on off channel model, we provide a stronger form of the zero-one law for the 1-connectivity as compared to that given by Ya g an. We also discuss the applicability of our results in a different network application, namely in a large-scale, distributed publish-subscribe service for online social networks.", "", "In wireless data networks each transmitter's power needs to be high enough to reach the intended receivers, while generating minimum interference on other receivers sharing the same channel. In particular, if the nodes in the network are assumed to cooperate in routing each oth­ ers' packets, as is the case in ad hoc wireless networks, each node should transmit with just enough power to guarantee connectivity in the network. Towards this end, we derive the critical power a node in the network needs to transmit in order to ensure that the network is connected with probabil­ ity one as the number of nodes in the network goes to infinity. It is shown that if n nodes are placed in a disc of unit area in !R2 and each node trans­ mits at a power level so as to cover an area of lrT2 = (log n + c(n)) n, then the resulting network is asymptotically connected with probability one if and only if c(n) -+ +00.", "", "In this paper we investigate the connectivity for large-scale clustered wireless sensor and ad hoc networks. 
We study the effect of mobility on the critical transmission range for asymptotic connectivity in k-hop clustered networks, and compare to existing results on non-clustered stationary networks. By introducing k-hop clustering, any packet from a cluster member can reach a cluster head within k hops, and thus the transmission delay is bounded as Θ(1) for any finite k. We first characterize the critical transmission range for connectivity in mobile k-hop clustered networks where all nodes move under either the random walk mobility model with non-trivial velocity or the i.i.d. mobility model. By the term non-trivial velocity, we mean that the velocity of nodes v is Θ(1). We then compare with the critical transmission range for stationary k-hop clustered networks. We also study the transmission power versus delay trade-off and the average energy consumption per flow among different types of networks. We show that random walk mobility with non-trivial velocity increases connectivity in k-hop clustered networks, and thus significantly decreases the energy consumption and improves the power-delay trade-off. The decrease of energy consumption per flow is shown to be Θ(logn nd ) in clustered networks. These results provide insights on network design and fundamental guidelines on building a large-scale wireless network.", "Static wireless networks are by now quite well understood mathematically through the random geometric graph model. By contrast, there are relatively few rigorous results on the practically important case of mobile networks. In this paper we consider a natural extension of the random geometric graph model to the mobile setting by allowing nodes to move in space according to Brownian motion. We study three fundamental questions in this model: detection (the time until a given target point---which may be either fixed or moving---is detected by the network), coverage (the time until all points inside a finite box are detected by the network), and percolation (the time until a given node is able to communicate with the giant component of the network). We derive precise asymptotics for these problems by combining ideas from stochastic geometry, coupling and multi-scale analysis. We also give an application of our results to analyze the time to broadcast a message in a mobile network.", "In a random key graph (RKG) of n nodes each node is randomly assigned a key ring of Kn cryptographic keys from a pool of Pn keys. Two nodes can communicate directly if they have at least one common key in their key rings. We assume that the n nodes are distributed uniformly in [0, l]2. In addition to the common key requirement, we require two nodes to also be within rn of each other to be able to have a direct edge. Thus we have a random graph in which the RKG is superposed on the familiar random geometric graph (RGG). For such a random graph, we obtain tight bounds on the relation between Kn, Pn and rn for the graph to be asymptotically almost surely connected.", "ean distance is at most r, for some prescribed r. We show that monotone properties for this class of graphs have sharp thresholds by reducing the problem to bounding the bottleneck matching on two sets of n points distributed uniformly in [0, 1] d . We present upper bounds on the threshold width, and show that our bound is sharp for d = 1 and at most a sublogarithmic factor away for d ≥ 2. Interestingly, the threshold width is much sharper for random geometric graphs than for Bernoulli random graphs. 
Further, a random geometric graph is shown to be a subgraph, with high probability, of another independently drawn random geometric graph with a slightly larger radius; this property is shown to have no analogue for Bernoulli random graphs." ] }
1408.4102
2952289865
Randomized experiments on social networks pose statistical challenges, due to the possibility of interference between units. We propose new methods for estimating attributable treatment effects in such settings. The methods do not require partial interference, but instead require an identifying assumption that is similar to requiring nonnegative treatment effects. Network or spatial information can be used to customize the test statistic; in principle, this can increase power without making assumptions on the data generating process.
The most common identifying assumption is that the units form groups (such as households or villages) that do not interfere with each other; this is termed partial interference @cite_18 . The paper @cite_1 derives unbiased point estimates under partial interference, and variance bounds on the estimation error under a stronger condition termed stratified interference. Asymptotically normal estimates are given in @cite_8 , again assuming stratified interference, and finite-sample error bounds are derived in @cite_12 . For settings where partial interference does not apply, more general exposure models have been investigated by @cite_13 @cite_21 @cite_23 @cite_11 @cite_6 , with rigorous results available if one assumes knowledge of the network dynamics, such as who influences whom. As a result, these methods may not be suitable when the underlying social mechanisms are not well understood. The recent paper @cite_15 also studies biased estimation of treatment effects under weaker assumptions than partial or fully modeled interference, which is similar in spirit to the present work.
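To make the partial interference assumption concrete, here is a small simulated sketch (purely illustrative; the outcome model and effect sizes are assumptions, not taken from any cited paper): each unit's outcome depends on its own treatment and on the share of treated units in its own group, but not on assignments in other groups.

```python
import random
import statistics

random.seed(0)
NUM_GROUPS, GROUP_SIZE = 200, 10

def outcome(own_treated, treated_share_in_group):
    """Toy potential-outcome model obeying partial interference: the outcome
    ignores all treatment assignments outside the unit's own group.
    The effect sizes (2.0 direct, 1.0 spillover) are arbitrary assumptions."""
    return (2.0 * own_treated
            + 1.0 * treated_share_in_group
            + random.gauss(0.0, 1.0))

treated, control = [], []
for _ in range(NUM_GROUPS):
    assignment = [random.random() < 0.5 for _ in range(GROUP_SIZE)]
    share = sum(assignment) / GROUP_SIZE
    for z in assignment:
        (treated if z else control).append(outcome(z, share))

# The naive difference in means mixes the direct effect with within-group
# spillover, which is one reason interference-aware estimands are needed.
print(statistics.mean(treated) - statistics.mean(control))
```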
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_21", "@cite_1", "@cite_6", "@cite_23", "@cite_15", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "1996564297", "2036443429", "", "2012797835", "", "", "1888549093", "", "1995906082", "" ], "abstract": [ "During the past 20 years, social scientists using observational studies have generated a large and inconclusive literature on neighborhood effects. Recent workers have argued that estimates of neighborhood effects based on randomized studies of housing mobility, such as the “Moving to Opportunity” (MTO) demonstration, are more credible. These estimates are based on the implicit assumption of no interference between units; that is, a subject's value on the response depends only on the treatment to which that subject is assigned, not on the treatment assignments of other subjects. For the MTO studies, this assumption is not reasonable. Although little work has been done on the definition and estimation of treatment effects when interference is present, interference is common in studies of neighborhood effects and in many other social settings (e.g., schools and networks), and when data from such studies are analyzed under the “no-interference assumption,” very misleading inferences can result. Furthermore, ...", "Recently, there has been increasing interest in making causal inference when interference is possible. In the presence of interference, treatment may have several types of effects. In this article, we consider inference about such effects when the population consists of groups of individuals where interference is possible within groups but not between groups. A two-stage randomization design is assumed where in the first stage groups are randomized to different treatment allocation strategies and in the second stage individuals are randomized to treatment or control conditional on the strategy assigned to their group in the first stage. For this design, the asymptotic distributions of estimators of the causal effects are derived when either the number of individuals per group or the number of groups grows large. Under certain homogeneity assumptions, the asymptotic distributions provide justification for Wald-type confidence intervals (CIs) and tests. Empirical results demonstrate that the Wald CIs have good coverage in finite samples and are narrower than CIs based on either the Chebyshev or Hoeffding inequalities provided the number of groups is not too small. The methods are illustrated by two examples which consider the effects of cholera vaccination and an intervention to encourage voting.", "", "A fundamental assumption usually made in causal inference is that of no interference between individuals (or units); that is, the potential outcomes of one individual are assumed to be unaffected by the treatment assignment of other individuals. However, in many settings, this assumption obviously does not hold. For example, in the dependent happenings of infectious diseases, whether one person becomes infected depends on who else in the population is vaccinated. In this article, we consider a population of groups of individuals where interference is possible between individuals within the same group. We propose estimands for direct, indirect, total, and overall causal effects of treatment strategies in this setting. Relations among the estimands are established; for example, the total causal effect is shown to equal the sum of direct and indirect causal effects. 
Using an experimental design with a two-stage randomization procedure (first at the group level, then at the individual level within groups), un...", "", "", "Estimating the effects of interventions in networks is complicated when the units are interacting, such that the outcomes for one unit may depend on the treatment assignment and behavior of many or all other units (i.e., there is interference). When most or all units are in a single connected component, it is impossible to directly experimentally compare outcomes under two or more global treatment assignments since the network can only be observed under a single assignment. Familiar formalism, experimental designs, and analysis methods assume the absence of these interactions, and result in biased estimators of causal effects of interest. While some assumptions can lead to unbiased estimators, these assumptions are generally unrealistic, and we focus this work on realistic assumptions. Thus, in this work, we evaluate methods for designing and analyzing randomized experiments that aim to reduce this bias and thereby reduce overall error. In design, we consider the ability to perform random assignment to treatments that is correlated in the network, such as through graph cluster randomization. In analysis, we consider incorporating information about the treatment assignment of network neighbors. We prove sufficient conditions for bias reduction through both design and analysis in the presence of potentially global interference. Through simulations of the entire process of experimentation in networks, we measure the performance of these methods under varied network structure and varied social behaviors, finding substantial bias and error reductions. These improvements are largest for networks with more clustering and data generating processes with both stronger direct effects of the treatment and stronger interactions between units.", "", "Interference is said to be present when the exposure or treatment received by one individual may affect the outcomes of other individuals. Such interference can arise in settings in which the outcomes of the various individuals come about through social interactions. When interference is present, causal inference is rendered considerably more complex, and the literature on causal inference in the presence of interference has just recently begun to develop. In this article we summarise some of the concepts and results from the existing literature and extend that literature in considering new results for finite sample inference, new inverse probability weighting estimators in the presence of interference and new causal estimands of interest.", "" ] }
1408.4245
2102460333
Linguistic resources can be populated with data through the use of such approaches as crowdsourcing and gamification when motivated people are involved. However, current crowdsourcing genre taxonomies lack the concept of cooperation, which is the principal element of modern video games and may potentially drive the annotators’ interest. This survey on crowdsourcing taxonomies and cooperation in linguistic resources provides recommendations on using cooperation in existent genres of crowdsourcing and an evidence of the efficiency of cooperation using a popular Russian linguistic resource created through crowdsourcing as an example.
In the same year, another taxonomy of five crowdsourcing genres was presented @cite_3 : initiatory human computation, distributed human computation, social game-based human computation with volunteers, paid engineers and online players; this taxonomy is similar to the previously mentioned one.
{ "cite_N": [ "@cite_3" ], "mid": [ "2108216421" ], "abstract": [ "Human computation is a technique that makes use of human abilities for computation to solve problems. The human computation problems are the problems those computers are not good at solving but are trivial for humans. In this paper, we give a survey of various human computation systems which are categorized into initiatory human computation, distributed human computation and social game-based human computation with volunteers, paid engineers and online players. For the existing large number of social games, some previous works defined various types of social games, but the recent developed social games cannot be categorized based on the previous works. In this paper, we define the categories and the characteristics of social games which are suitable for all existing ones. Besides, we present a survey on the performance aspects of human computation system. This paper gives a better understanding on human computation system." ] }
1408.4245
2102460333
Linguistic resources can be populated with data through the use of such approaches as crowdsourcing and gamification when motivated people are involved. However, current crowdsourcing genre taxonomies lack the concept of cooperation, which is the principal element of modern video games and may potentially drive the annotators’ interest. This survey on crowdsourcing taxonomies and cooperation in linguistic resources provides recommendations on using cooperation in existent genres of crowdsourcing and an evidence of the efficiency of cooperation using a popular Russian linguistic resource created through crowdsourcing as an example.
Many studies following these early ones focus on classifying whether a crowdsourced project belongs to a specific class of a given taxonomy. For instance, a study of the correlation between crowdsourcing genres @cite_6 , work on quality assessment @cite_10 , and guidelines on corpus annotation through crowdsourcing @cite_5 align various best practices among the established genres.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_6" ], "mid": [ "172170795", "1992269204", "1988584482" ], "abstract": [ "Crowdsourcing is an emerging collaborative approach that can be used for the acquisition of annotated corpora and a wide range of other linguistic resources. Although the use of this approach is intensifying in all its key genres (paid-for crowdsourcing, games with a purpose, volunteering-based approaches), the community still lacks a set of best-practice guidelines similar to the annotation best practices for traditional, expert-based corpus acquisition. In this paper we focus on the use of crowdsourcing methods for corpus acquisition and propose a set of best practice guidelines based in our own experiences in this area and an overview of related literature. We also introduce GATE Crowd, a plugin of the GATE platform that relies on these guidelines and offers tool support for using crowdsourcing in a more principled and efficient manner.", "Novel social media collaboration platforms, such as games with a purpose and mechanised labour marketplaces, are increasingly used for enlisting large populations of non-experts in crowdsourced knowledge acquisition processes. Climate Quiz uses this paradigm for acquiring environmental domain knowledge from non-experts. The game's usage statistics and the quality of the produced data show that Climate Quiz has managed to attract a large number of players but noisy input data and task complexity led to low player engagement and suboptimal task throughput and data quality. To address these limitations, the authors propose embedding the game into a hybrid-genre workflow, which supplements the game with a set of tasks outsourced to micro-workers, thus leveraging the complementary nature of games with a purpose and mechanised labour platforms. Experimental evaluations suggest that such workflows are feasible and have positive effects on the game's enjoyment level and the quality of its output.", "Although the field has led to promising early results, the use of crowdsourcing as an integral part of science projects is still regarded with skepticism by some, largely due to a lack of awareness of the opportunities and implications of utilizing these new techniques. We address this lack of awareness, firstly by highlighting the positive impacts that crowdsourcing has had on Natural Language Processing research. Secondly, we discuss the challenges of more complex methodologies, quality control, and the necessity to deal with ethical issues. We conclude with future trends and opportunities of crowdsourcing for science, including its potential for disseminating results, making science more accessible, and enriching educational programs." ] }
1408.4245
2102460333
Linguistic resources can be populated with data through the use of such approaches as crowdsourcing and gamification when motivated people are involved. However, current crowdsourcing genre taxonomies lack the concept of cooperation, which is the principal element of modern video games and may potentially drive the annotators’ interest. This survey on crowdsourcing taxonomies and cooperation in linguistic resources provides recommendations on using cooperation in existent genres of crowdsourcing and an evidence of the efficiency of cooperation using a popular Russian linguistic resource created through crowdsourcing as an example.
In 2013, a thorough survey aggregated most of the previous studies. That work emphasizes three intuitive and well-separated genres of crowdsourcing @cite_17 :
{ "cite_N": [ "@cite_17" ], "mid": [ "2056584528" ], "abstract": [ "Crowdsourcing has emerged as a new method for obtaining annotations for training models for machine learning. While many variants of this process exist, they largely differ in their methods of motivating subjects to contribute and the scale of their applications. To date, there has yet to be a study that helps the practitioner to decide what form an annotation application should take to best reach its objectives within the constraints of a project. To fill this gap, we provide a faceted analysis of crowdsourcing from a practitioner's perspective, and show how our facets apply to existing published crowdsourced annotation applications. We then summarize how the major crowdsourcing genres fill different parts of this multi-dimensional space, which leads to our recommendations on the potential opportunities crowdsourcing offers to future annotation efforts." ] }
1408.4245
2102460333
Linguistic resources can be populated with data through the use of such approaches as crowdsourcing and gamification when motivated people are involved. However, current crowdsourcing genre taxonomies lack the concept of cooperation, which is the principal element of modern video games and may potentially drive the annotators’ interest. This survey on crowdsourcing taxonomies and cooperation in linguistic resources provides recommendations on using cooperation in existent genres of crowdsourcing and an evidence of the efficiency of cooperation using a popular Russian linguistic resource created through crowdsourcing as an example.
There are other attempts to create a taxonomy of crowdsourcing genres. Zwass investigated the phenomenon of co-creation @cite_12 and proposed a taxonomy of user-created digital content that includes the following: knowledge compendia, consumer reviews, multimedia content, blogs, mashups, and virtual worlds. The resulting taxonomy appears to be too general and, since it was not designed for this purpose, does not fit the natural language processing field well.
{ "cite_N": [ "@cite_12" ], "mid": [ "2158393623" ], "abstract": [ "Enabled by the Internet-Web compound, co-creation of value by consumers has emerged as a major force in the marketplace. In sponsored co-creation, which takes place at the behest of producers, the activities of consumers drive or support the producers' business models. Autonomous co-creation is a wide range of consumer activities that amount to consumer-side production of value. Thus, individuals and communities have become a significant, and growing, productive force in e-commerce. To recognize co-creation, so broadly understood, as a fundamental area of e-commerce research, it is necessary to attain an integrated research perspective on this greatly varied, yet cohering, domain. The enabling information technology needs to be developed to suit the context. Toward these ends, the paper analyzes the intellectual space underlying co-creation research and proposes an inclusive taxonomy of Web-based co-creation, informed both by the extant multidisciplinary research and by results obtained in the natural laboratory of the Web. The essential directions of co-creation research are outlined, and some promising avenues of future work discussed. The taxonomic framework and the research perspective lay a foundation for the future development of co-creation theory and practice. The certainty of turbulent developments in e-commerce means that the taxonomic framework will require ongoing revision and expansion, as will any future framework." ] }
1408.4001
2952573620
In this work, we study the problem of clearing contamination spreading through a large network where we model the problem as a graph searching game. The problem can be summarized as constructing a search strategy that will leave the graph clear of any contamination at the end of the searching process in as few steps as possible. We show that this problem is NP-hard even on directed acyclic graphs and provide an efficient approximation algorithm. We experimentally observe the performance of our approximation algorithm in relation to the lower bound on several large online networks including Slashdot, Epinions and Twitter. The experiments reveal that in most cases our algorithm performs near optimally.
The notion of search time for undirected graphs was introduced by Brandenburg and Herrmann @cite_13 . They note that the classical goal of the graph searching game, computing the minimal search number, aims to minimize the number of resources used and as such corresponds to space complexity. They instead study the length of a search strategy, which corresponds to the time complexity of searching a graph. They ask how fast a team of @math searchers can clear a graph (if at all), and conversely how many searchers are needed to search a graph in time @math .
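As a toy illustration of search time (using one common edge-searching convention, not necessarily the exact formalism of @cite_13): a single searcher can clear a path with n vertices in n-1 moves by sweeping from one end to the other. The sketch below simulates this sweep and checks that no cleared edge is ever exposed to recontamination.

```python
def sweep_path(num_vertices):
    """One searcher sweeps the path 0-1-...-(n-1) from left to right.
    An edge is cleared when the searcher traverses it; a cleared edge would be
    recontaminated if it touched a contaminated edge at an unguarded vertex."""
    contaminated = set(range(num_vertices - 1))   # edge i joins vertices i, i+1
    searcher = 0
    steps = 0
    while contaminated:
        # Move the searcher one edge to the right, clearing that edge.
        contaminated.discard(searcher)
        searcher += 1
        steps += 1
        # Safety check: every contaminated edge lies strictly to the right of
        # the searcher, so the cleared prefix stays separated from it.
        assert all(e >= searcher for e in contaminated)
    return steps

print(sweep_path(10))  # 9 moves for a path with 10 vertices / 9 edges
```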
{ "cite_N": [ "@cite_13" ], "mid": [ "1566624605" ], "abstract": [ "Graph searching is the game of capturing a fugitive by a team of searchers in a network. There are equivalent characterizations in terms of path-width, interval thickness, and vertex separation. So far the interest has mainly focused on the search number of a graph, which is the minimal the number of searchers to win the game, and accordingly on the width and the thickness. These parameters measure the needed resources and correspond to space complexity. As its dual, we introduce the search time, which has not yet been studied in graph searching. We prove that all main results on graph searching can be generalized to include search time, such as monotone or recontamination free graph searching, and the characterizations in terms of path-width, interval graphs, and vertex separation, for which we introduce appropriate length parameters. We establish the NP-completeness of both search-width and search-time. Finally we investigate the speed-up by an extra searcher. There are ’good’ classes of graphs where a single extra searcher reduces the search time to one half and ’bad’ ones where some extra searchers are no real help." ] }
1408.4389
1487822924
With the extensive application of submodularity, its generalizations are constantly being proposed. However, most of them are tailored for special problems. In this paper, we focus on quasi-submodularity, a universal generalization, which satisfies weaker properties than submodularity but still enjoys favorable performance in optimization. Similar to the diminishing return property of submodularity, we first define a corresponding property called the single sub-crossing , then we propose two algorithms for unconstrained quasi-submodular function minimization and maximization, respectively. The proposed algorithms return the reduced lattices in @math iterations, and guarantee the objective function values are strictly monotonically increased or decreased after each iteration. Moreover, any local and global optima are definitely contained in the reduced lattices. Experimental results verify the effectiveness and efficiency of the proposed algorithms on lattice reduction.
Quasi-supermodularity originates in economics. Milgrom and Shannon @cite_10 first proposed the definition of quasi-supermodularity and found that the maximizer of a quasi-supermodular function is monotone as the parameter changes. In combinatorial optimization, for quasi-submodular functions, this property means that the set of minimizers has a nested structure, which is the foundation of the proposed UQSFMin algorithm. Based on the theorem above, suppose we start from @math ; if @math , @math , then we can set @math . This theorem ensures that the minimizers form a chain structure. This is a general principle. First, it applies in submodular cases, since submodularity is a strict subset of quasi-submodularity. Moreover, even when the superdifferential of @cite_21 is no longer a valid superdifferential for non-submodular quasi-submodular functions, such as the determinant function and multiplicatively separable functions, this principle can still hold. In @cite_10 , only quasi-submodular function minimization (or equivalently, quasi-supermodular function maximization) is considered. For quasi-submodular function maximization, there is no existing study.
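To make the nested-minimizer principle above concrete, the sketch below brute-forces the minimizers of a small set function and checks that they form a chain under inclusion. This only illustrates the property, not the UQSFMin algorithm itself; the multiplicatively separable toy function and its weights are invented for the example.

```python
import math
from itertools import combinations

def minimizers(f, ground):
    """Enumerate all subsets of `ground`; return the minimum value and all argmins."""
    best, argmins = float("inf"), []
    for r in range(len(ground) + 1):
        for subset in combinations(ground, r):
            s = frozenset(subset)
            val = f(s)
            if val < best:
                best, argmins = val, [s]
            elif val == best:
                argmins.append(s)
    return best, argmins

def forms_chain(sets):
    """True if every pair of sets is nested, i.e. the sets form a chain under inclusion."""
    return all(a <= b or b <= a for a in sets for b in sets)

# Toy multiplicatively separable function f(S) = prod_{i in S} w_i (empty product = 1).
weights = {1: 0.5, 2: 2.0, 3: 0.25, 4: 3.0}
f = lambda S: math.prod((weights[i] for i in S), start=1.0)

best, argmins = minimizers(f, list(weights))
print("minimum value:", best)                      # 0.125, attained by {1, 3}
print("minimizers form a chain:", forms_chain(argmins))
```

Brute-force enumeration is of course exponential; it is used here only because it makes the chain property directly observable on a four-element ground set.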
{ "cite_N": [ "@cite_21", "@cite_10" ], "mid": [ "2951977866", "2045665199" ], "abstract": [ "We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and super-differentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new and generalize many old methods for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular min- imization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to its natural and wide applicability, has recently been in ascendance within machine learning. We analyze theoretical properties of our algorithms for minimization and maximization, and show that many state-of-the-art maximization algorithms are special cases. Lastly, we complement our theoretical analyses with supporting empirical experiments.", "The authors derive a necessary and sufficient condition for the solution set of an optimization problem to be monotonic in the parameters of the problem. In addition, they develop practical methods for checking the condition and demonstrate its applications to the classical theories of the competitive firm, the monopolist, the Bertrand oligopolist, consumer and growth theory, game theory, and general equilibrium analysis. Copyright 1994 by The Econometric Society." ] }
1408.4151
2286681888
In this paper, we consider a resource allocation optimization problem with carrier aggregation in long term evolution (LTE) cellular networks. In our proposed model, users are running elastic or inelastic traffic. Each user equipment (UE) is assigned an application utility function based on the type of its application. Our objective is to allocate the resources of multiple carriers optimally among the users in their coverage area while giving each user the ability to select one of the carriers to be its primary carrier and the others to be its secondary carriers. The UE's decision is based on the carrier price per unit bandwidth. We present a price-selective centralized resource allocation algorithm with carrier aggregation that allocates the resources of multiple carriers optimally among users while providing a minimum price for the allocated resources. In addition, we analyze the convergence of the algorithm with different carrier rates. Finally, we present simulation results for the performance of the proposed algorithm.
In @cite_13 , the authors introduced bandwidth-proportional fair resource allocation with logarithmic utilities; the algorithms at the links are based on Lagrange multiplier methods from optimization theory. In @cite_4 , the authors used sigmoidal-like utility functions to represent real-time applications. In @cite_5 , the authors proposed weighted aggregated utility functions for elastic and inelastic traffic. An optimal resource allocation algorithm is presented in @cite_0 and @cite_9 to allocate a single carrier's resources optimally among mobile users. In @cite_12 , a two-stage resource allocation algorithm is proposed to allocate the eNodeB resources among users running multiple applications at a time. In @cite_6 , a resource allocation optimization problem is presented for two groups of users: a public safety users group and a commercial users group. In @cite_15 , the authors presented resource allocation with user discrimination algorithms to allocate the eNodeB resources optimally among users and their applications. A resource allocation optimization problem with carrier aggregation is presented in @cite_8 to allocate resources from the LTE Advanced carrier and the MIMO radar carrier to each UE in an LTE Advanced cell based on the running application of the UE.
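As a concrete illustration of the two utility shapes mentioned above, the snippet below implements a normalized logarithmic utility for elastic traffic and one common normalized sigmoidal-like utility for real-time traffic. The parameterization (k, r_max, a, b) is an assumption made for the example and need not match the cited papers.

```python
import numpy as np

def log_utility(rate, k=1.0, r_max=100.0):
    """Normalized logarithmic utility for elastic (delay-tolerant) traffic."""
    return np.log(1.0 + k * rate) / np.log(1.0 + k * r_max)

def sigmoid_utility(rate, a=0.5, b=10.0):
    """Normalized sigmoidal-like utility for inelastic (real-time) traffic:
    close to 0 below the inflection rate b and close to 1 above it."""
    c = (1.0 + np.exp(a * b)) / np.exp(a * b)
    d = 1.0 / (1.0 + np.exp(a * b))
    return c * (1.0 / (1.0 + np.exp(-a * (rate - b))) - d)

rates = np.array([0.0, 5.0, 10.0, 20.0, 50.0])
print(np.round(log_utility(rates), 3))      # slowly saturating curve
print(np.round(sigmoid_utility(rates), 3))  # step-like curve around b = 10
```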
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_6", "@cite_0", "@cite_5", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2156568423", "1985004753", "2005007320", "2073839497", "2013722958", "2582633462", "1564794196", "2159715570", "" ], "abstract": [ "The Internet has been a startling and dramatic success. Originally designed to link together a small group of researchers, the Internet is now used by many millions of people. However, multimedia applications, with their novel traffic characteristics and service requirements, pose an interesting challenge to the technical foundations of the Internet. We address some of the fundamental architectural design issues facing the future Internet. In particular, we discuss whether the Internet should adopt a new service model, how this service model should be invoked, and whether this service model should include admission control. These architectural issues are discussed in a nonrigorous manner, through the use of a utility function formulation and some simple models. While we do advocate some design choices over others, the main purpose here is to provide a framework for discussing the various architectural alternatives. >", "Spectrum sharing is a promising solution for the problem of spectrum congestion. We consider a spectrum sharing scenario between a multiple-input multiple-output (MIMO) radar and Long Term Evolution (LTE) Advanced cellular system. In this paper, we consider resource allocation optimization problem with carrier aggregation. The LTE Advanced system has N BS base stations (BS) which it operates in the radar band on a sharing basis. Our objective is to allocate resources from the LTE Advanced carrier and the MIMO radar carrier to each user equipment (UE) in an LTE Advanced cell based on the running application of UE. Each user application is assigned a utility function based on the type of application. We propose a carrier aggregation resource allocation algorithm to allocate the LTE Advanced and the radar carriers' resources optimally among users based on the type of user application. The algorithm gives priority to users running inelastic traffic when allocating resources. Finally we present simulation results on the performance of the proposed carrier aggregation resource allocation algorithm.", "In this paper, we consider resource allocation optimization problem in the fourth generation long-term evolution (4G-LTE) with elastic and inelastic real-time traffic. Mobile users are running either delay-tolerant or real-time applications. The users applications are approximated by logarithmic or sigmoidal-like utility functions. Our objective is to allocate resources according to the utility proportional fairness policy. Prior utility proportional fairness resource allocation algorithms fail to converge for high-traffic situations. We present a robust algorithm that solves the drawbacks in prior algorithms for the utility proportional fairness policy. Our robust optimal algorithm allocates the optimal rates for both high-traffic and low-traffic situations. It prevents fluctuation in the resource allocation process. In addition, we show that our algorithm provides traffic-dependent pricing for network providers. This pricing could be used to flatten the network traffic and decrease the cost per bandwidth for the users. 
Finally, numerical results are presented on the performance of the proposed algorithm.", "In this paper, we consider resource allocation optimization problem in fourth generation long term evolution (4G-LTE) for public safety and commercial users running elastic or inelastic traffic. Each mobile user can run delay-tolerant or real-time applications. In our proposed model, each user equipment (UE) is assigned a utility function that represents the application type running on the UE. Our objective is to allocate the resources from a single evolved node B (eNodeB) to each user based on the user application that is represented by the utility function assigned to that user. We consider two groups of users, one represents public safety users with elastic or inelastic traffic and the other represents commercial users with elastic or inelastic traffic. The public safety group is given priority over the commercial group and within each group the inelastic traffic is prioritized over the elastic traffic. Our goal is to guarantee a minimum quality of service (QoS) that varies based on the user type, the user application type and the application target rate. A rate allocation algorithm is presented to allocate the eNodeB resources optimally among public safety and commercial users. Finally, the simulation results are presented on the performance of the proposed rate allocation algorithm.", "In this paper, we introduce an approach for resource allocation of elastic and inelastic adaptive real-time traffic in fourth generation long term evolution (4G-LTE) system. In our model, we use logarithmic and sigmoidal-like utility functions to represent the users applications running on different user equipments (UE)s. We present a resource allocation optimization problem with utility proportional fairness policy, where the fairness among users is in utility percentage (i.e user satisfaction with the service) of the corresponding applications. Our objective is to allocate the resources to the users with priority given to the adaptive real-time application users. In addition, a minimum resource allocation for users with elastic and inelastic traffic should be guaranteed. Our goal is that every user subscribing for the mobile service should have a minimum quality-of-service (QoS) with a priority criterion. We prove that our resource allocation optimization problem is convex and therefore the optimal solution is tractable. We present a distributed algorithm to allocate evolved NodeB (eNodeB) resources optimally with a priority criterion. Finally, we present simulation results for the performance of our rate allocation algorithm.", "", "In this paper, we consider resource allocation optimization problem in cellular networks for different types of users running multiple applications simultaneously. In our proposed model, each user application is assigned a utility function that represents the application type running on the user equipment (UE). The network operators assign a subscription weight to each UE based on its subscription. Each UE assigns an application weight to each of its applications based on the instantaneous usage percentage of the application. Additionally, UEs with higher priority assign applications target rates to their applications. Our objective is to allocate the resources optimally among the UEs and their applications from a single evolved node B (eNodeB) based on a utility proportional fairness policy with priority to realtime application users. 
A minimum quality of service (QoS) is guaranteed to each UE application based on the UE subscription weight, the UE application weight and the UE application target rate. We propose a two-stage rate allocation algorithm to allocate the eNodeB resources among users and their applications. Finally, we present simulation results for the performance of our rate allocation algorithm.", "This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing.", "" ] }
1408.3934
1603551211
Short text messages in social media and instant messaging have become a popular communication channel in recent years. This rising popularity has caused an increase in messaging threats such as spam, phishing and malware, among others. Processing these short text message threats poses additional challenges, such as the presence of lexical variants, SMS-like contractions or advanced obfuscations, which can degrade the performance of traditional filtering solutions. Using a real-world SMS data set from a large US telecommunications operator and a social media corpus, in this paper we analyze the effectiveness of machine learning filters based on linguistic and behavioral patterns for detecting short text spam and abusive users in the network. We have also explored different ways to deal with short text message challenges such as tokenization and entity detection by using text normalization and substring clustering techniques. The obtained results show the validity of the proposed solution, which improves over baseline approaches.
Regarding non-content features, the grey phone space has been used to detect spammers that target randomly generated subscriber phone numbers @cite_24 . Additional metadata such as sender location, network usage and call detail records have been shown to be useful for mining behavioral patterns of SMS spammers @cite_28 . Also, sending and temporal features such as message and recipient counts over specified periods of time @cite_16 , @cite_0 can be used to detect abusive SMS senders in mobile networks using a probabilistic model.
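A minimal sketch of the kind of content-based filter discussed above: character n-grams are one simple way to stay robust to SMS-like contractions and obfuscated tokens. The tiny inline dataset, the n-gram range and the classifier choice are all assumptions made for the example, not the system evaluated in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled SMS messages (1 = spam, 0 = legitimate), invented for illustration.
texts = [
    "WIN a FREE phone now!!! txt YES to 80082",
    "are we still on for lunch tmrw?",
    "Cheap l0ans approved instantly, reply STOP to opt out",
    "ok see u at 8",
]
labels = [1, 0, 1, 0]

# Character n-grams within word boundaries tolerate misspellings and leetspeak obfuscation.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["FREE prize waiting, txt WIN now"]))
```

In a real deployment, behavioral counters (messages and distinct recipients per time window) would be appended as extra feature columns alongside the text features.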
{ "cite_N": [ "@cite_24", "@cite_28", "@cite_0", "@cite_16" ], "mid": [ "2138872119", "2135332490", "1973644085", "1968955064" ], "abstract": [ "In this paper, we present the design of Greystar, an innovative defense system for combating the growing SMS spam traffic in cellular networks. By exploiting the fact that most SMS spammers select targets randomly from the finite phone number space, Greystar monitors phone numbers from the grey phone space (which are associated with data only devices like laptop data cards and machine-to-machine communication devices like electricity meters) and employs a novel statistical model to detect spam numbers based on their footprints on the grey phone space. Evaluation using five month SMS call detail records from a large US cellular carrier shows that Greystar can detect thousands of spam numbers each month with very few false alarms and 15 of the detected spam numbers have never been reported by spam recipients. Moreover, Greystar is much faster in detecting SMS spam than existing victim spam reports, reducing spam traffic by 75 during peak hours.", "The Short Messaging Service (SMS), one of the most successful cellular services, generates millions of dollars in revenue for mobile operators. Estimates indicate that billions of text messages are traveling the airwaves daily. Nevertheless, text messaging is becoming a source of customer dissatisfaction due to the rapid surge of messaging abuse activities. Although spam is a well tackled problem in the email world, SMS spam experiences a yearly growth larger than 500 . In this paper we present, to the best of our knowledge, the first analysis of SMS spam traffic from a tier-1 cellular operator. Communication patterns of spammers are compared to those of legitimate cell-phone users and Machine to Machine (M2M) connected appliances. The results indicate that M2M systems exhibit communication profiles similar to spammers, which could mislead spam filters. Beyond the expected results, such as a large load of text messages sent out to a wide target list, other interesting findings are made. For example, the results indicate that the great majority of the spammers connect to the network with just a handful of different hardware models. We find the main geographical sources of messaging abuse in the US. We also find evidence of spammer mobility, voice and data traffic resembling the behavior of legitimate customers.", "Short Message Service text messages are indispensable, but they face a serious problem from spamming. This service-side solution uses graph data mining to distinguish spammers from nonspammers and detect spam without checking a message's contents.", "Short messaging service (SMS) is one of the fastest-growing telecom value-added services worldwide. However, mobile message spam is a side effect for ordinary mobile phone users that seriously troubles their daily life and, as a result, threatens the revenue of telecom operators. In this paper, we present an SMS antispam system that combines behavior-based social network and temporal (spectral) analysis to detect spammers with both high precision and recall. The system infrastructure and the proposed approximate neighborhood index solution, which solves the scalability issue of social networks, are described in detail. 
Experimental results demonstrate that our proposed system achieves excellent discrimination between spammers and legitimates, and even with fixed recall at 95 , the online system and offline detection subsystems maintain a precision of about 98 and 99.5 , respectively." ] }
1408.3809
2951245411
Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which change significantly with viewpoint. In contrast, we directly process the pointclouds and propose a new technique for action recognition which is more robust to noise, action speed and viewpoint variations. Our technique consists of a novel descriptor and keypoint detection algorithm. The proposed descriptor is extracted at a point by encoding the Histogram of Oriented Principal Components (HOPC) within an adaptive spatio-temporal support volume around that point. Based on this descriptor, we present a novel method to detect Spatio-Temporal Key-Points (STKPs) in 3D pointcloud sequences. Experimental results show that the proposed descriptor and STKP detector outperform state-of-the-art algorithms on three benchmark human activity datasets. We also introduce a new multiview public dataset and show the robustness of our proposed method to viewpoint variations.
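A rough sketch of the idea behind histogramming oriented principal components: compute the eigenvectors of the covariance of a local 3D neighborhood and vote them into a set of direction bins weighted by their eigenvalues. The actual HOPC descriptor (its quantization basis, sign disambiguation and adaptive spatio-temporal support) is more involved; the direction basis and random neighborhood below are placeholders.

```python
import numpy as np

def hopc_like_descriptor(points, directions):
    """points: (N, 3) neighbors around a keypoint; directions: (M, 3) unit vectors.
    Votes each eigenvector of the local covariance into the direction bins,
    weighted by its eigenvalue, and L2-normalizes the result."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    hist = np.zeros(len(directions))
    for val, vec in zip(eigvals, eigvecs.T):
        proj = directions @ vec
        hist += val * np.clip(proj, 0.0, None)      # vote only into aligned bins
    return hist / (np.linalg.norm(hist) + 1e-12)

dirs = np.eye(3)  # trivial 3-direction basis, just for the example
print(hopc_like_descriptor(np.random.rand(200, 3), dirs))
```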
On the other hand, some methods (ViewInvariantJoint3D, Wang2012, eigenjoints) use the human joint positions extracted by the OpenNI tracking framework @cite_10 as interest points. For example, Yang and Tian @cite_1 proposed pairwise 3D joint position differences in each frame and temporal differences across frames to represent an action. Since 3D joints cannot capture all the discriminative information, the action recognition accuracy is compromised. The approach of @cite_20 extends this by computing a histogram of the occupancy pattern of a fixed region around each joint in a frame. In the temporal dimension, they used low-frequency Fourier components as features and an SVM to find a discriminative set of joints. It is important to note that the estimated joint positions are not reliable and can fail when the human subject is not in an upright, frontal-view position (e.g. lying on a sofa) or when there is clutter around the subject.
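The joint-based features described above can be illustrated in a few lines of NumPy: pairwise joint position differences within each frame plus temporal differences across consecutive frames. This is only in the spirit of such skeleton descriptors, not an exact reimplementation, and the array shapes are assumptions.

```python
import numpy as np

def joint_difference_features(joints):
    """joints: (T, J, 3) array of 3D joint positions over T frames.
    Returns per-frame pairwise joint differences and frame-to-frame temporal
    differences, loosely in the spirit of EigenJoints-style features."""
    T, J, _ = joints.shape
    iu = np.triu_indices(J, k=1)
    pairwise = joints[:, iu[0], :] - joints[:, iu[1], :]   # (T, J*(J-1)/2, 3)
    temporal = joints[1:] - joints[:-1]                     # (T-1, J, 3)
    return pairwise.reshape(T, -1), temporal.reshape(T - 1, -1)

pairwise, temporal = joint_difference_features(np.random.rand(30, 20, 3))
print(pairwise.shape, temporal.shape)
```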
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_20" ], "mid": [ "2073139398", "2060280062", "2143267104" ], "abstract": [ "In this paper, we propose an effective method to recognize human actions from 3D positions of body joints. With the release of RGBD sensors and associated SDK, human body joints can be extracted in real time with reasonable accuracy. In our method, we propose a new type of features based on position differences of joints, EigenJoints, which combine action information including static posture, motion, and offset. We further employ the Naive-Bayes-Nearest-Neighbor (NBNN) classifier for multi-class action classification. The recognition results on the Microsoft Research (MSR) Action3D dataset demonstrate that our approach significantly outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to recognize actions on the MSR Action3D dataset. We observe 15–20 frames are sufficient to achieve comparable results to that using the entire video sequences.", "We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.", "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms." ] }
1408.3809
2951245411
Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which change significantly with viewpoint. In contrast, we directly process the pointclouds and propose a new technique for action recognition which is more robust to noise, action speed and viewpoint variations. Our technique consists of a novel descriptor and keypoint detection algorithm. The proposed descriptor is extracted at a point by encoding the Histogram of Oriented Principal Components (HOPC) within an adaptive spatio-temporal support volume around that point. Based on this descriptor, we present a novel method to detect Spatio-Temporal Key-Points (STKPs) in 3D pointcloud sequences. Experimental results show that the proposed descriptor and STKP detector outperform state-of-the-art algorithms on three benchmark human activity datasets. We also introduce a new multiview public dataset and show the robustness of our proposed method to viewpoint variations.
Action recognition methods based on depth maps can be divided into holistic @cite_26 @cite_18 @cite_25 @cite_7 @cite_33 and local approaches @cite_20 @cite_37 @cite_22 @cite_4 . Holistic methods use global features such as silhouettes and space-time volume information. For example, @cite_25 sampled boundary pixels from 2D silhouettes as a bag of features. @cite_7 added the temporal derivative of 2D projections to obtain Depth Motion Maps (DMM). @cite_33 computed silhouettes in 3D by using space-time occupancy patterns. Recently, Oreifej and Liu @cite_26 extended the histogram of oriented 3D normals @cite_35 to 4D by adding the time derivative. The gradient vector was normalized to unit magnitude and projected onto a refined basis of a 600-cell polychoron to build histograms. The last component of the normalized gradient vector was the inverse of the gradient magnitude. As a result, information from locations with very strong derivatives, such as edges and silhouettes, may get suppressed @cite_18 . The proposed HOPC descriptor is more informative than HON4D as it captures the spread of data in the three principal directions. Thus, HOPC achieves higher action recognition accuracy than existing methods on three benchmark datasets.
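The Depth Motion Map idea mentioned above can be sketched as follows: project each depth frame onto front, side and top views and accumulate absolute frame-to-frame differences. The max-projection used here for the side and top views is a simplification for illustration, not the exact construction in the cited work.

```python
import numpy as np

def depth_motion_maps(depth_seq):
    """depth_seq: (T, H, W) stack of depth maps.
    Returns accumulated motion energy for the front, side and top projections."""
    front = depth_seq                  # (T, H, W)
    side = depth_seq.max(axis=2)       # collapse width  -> (T, H)
    top = depth_seq.max(axis=1)        # collapse height -> (T, W)

    def accumulate(proj):
        # sum of absolute differences between consecutive frames
        return np.abs(np.diff(proj, axis=0)).sum(axis=0)

    return accumulate(front), accumulate(side), accumulate(top)

dmm_front, dmm_side, dmm_top = depth_motion_maps(np.random.rand(40, 120, 160))
print(dmm_front.shape, dmm_side.shape, dmm_top.shape)
```

In the cited pipeline, HOG features are then computed on each accumulated map before classification.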
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_26", "@cite_37", "@cite_33", "@cite_7", "@cite_22", "@cite_4", "@cite_25", "@cite_20" ], "mid": [ "1591677984", "2010676632", "2085735683", "", "2173046955", "2008824967", "2020163092", "2169251375", "2144380653", "2143267104" ], "abstract": [ "We propose a feature, the Histogram of Oriented Normal Vectors (HONV), designed specifically to capture local geometric characteristics for object recognition with a depth sensor. Through our derivation, the normal vector orientation represented as an ordered pair of azimuthal angle and zenith angle can be easily computed from the gradients of the depth image. We form the HONV as a concatenation of local histograms of azimuthal angle and zenith angle. Since the HONV is inherently the local distribution of the tangent plane orientation of an object surface, we use it as a feature for object detection classification tasks. The object detection experiments on the standard RGB-D dataset [1] and a self-collected Chair-D dataset show that the HONV significantly outperforms traditional features such as HOG on the depth image and HOG on the intensity image, with an improvement of 11.6 in average precision. For object classification, the HONV achieved 5.0 improvement over state-of-the-art approaches.", "We propose an algorithm which combines the discriminative information from depth images as well as from 3D joint positions to achieve high action recognition accuracy. To avoid the suppression of subtle discriminative information and also to handle local occlusions, we compute a vector of many independent local features. Each feature encodes spatiotemporal variations of depth and depth gradients at a specific space-time location in the action volume. Moreover, we encode the dominant skeleton movements by computing a local 3D joint position difference histogram. For each joint, we compute a 3D space-time motion volume which we use as an importance indicator and incorporate in the feature vector for improved action discrimination. To retain only the discriminant features, we train a random decision forest (RDF). The proposed algorithm is evaluated on three standard datasets and compared with nine state-of-the-art algorithms. Experimental results show that, on the average, the proposed algorithm outperform all other algorithms in accuracy and have a processing speed of over 112 frames second.", "We present a new descriptor for activity recognition from videos acquired by a depth sensor. Previous descriptors mostly compute shape and motion features independently, thus, they often fail to capture the complex joint shape-motion cues at pixel-level. In contrast, we describe the depth sequence using a histogram capturing the distribution of the surface normal orientation in the 4D space of time, depth, and spatial coordinates. To build the histogram, we create 4D projectors, which quantize the 4D space and represent the possible directions for the 4D normal. We initialize the projectors using the vertices of a regular polychoron. Consequently, we refine the projectors using a discriminative density measure, such that additional projectors are induced in the directions where the 4D normals are more dense and discriminative. 
Through extensive experiments, we demonstrate that our descriptor better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks.", "", "This paper presents Space-Time Occupancy Patterns (STOP), a new visual representation for 3D action recognition from sequences of depth maps. In this new representation, space and time axes are divided into multiple segments to define a 4D grid for each depth map sequence. The advantage of STOP is that it preserves spatial and temporal contextual information between space-time cells while being flexible enough to accommodate intra-action variations. Our visual representation is validated with experiments on a public 3D human action dataset. For the challenging cross-subject test, we significantly improved the recognition accuracy from the previously reported 74.7 to 84.8 . Furthermore, we present an automatic segmentation and time alignment method for online recognition of depth sequences.", "In this paper, we propose an effective method to recognize human actions from sequences of depth maps, which provide additional body shape and motion information for action recognition. In our approach, we project depth maps onto three orthogonal planes and accumulate global activities through entire video sequences to generate the Depth Motion Maps (DMM). Histograms of Oriented Gradients (HOG) are then computed from DMM as the representation of an action video. The recognition results on Microsoft Research (MSR) Action3D dataset show that our approach significantly outperforms the state-of-the-art methods, although our representation is much more compact. In addition, we investigate how many frames are required in our framework to recognize actions on the MSR Action3D dataset. We observe that a short sub-sequence of 30-35 frames is sufficient to achieve comparable results to that operating on entire video sequences.", "Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.", "We study the problem of action recognition from depth sequences captured by depth cameras, where noise and occlusion are common problems because they are captured with a single commodity camera. In order to deal with these issues, we extract semi-local features called random occupancy pattern ROP features, which employ a novel sampling scheme that effectively explores an extremely large sampling space. 
We also utilize a sparse coding approach to robustly encode these features. The proposed approach does not require careful parameter tuning. Its training is very fast due to the use of the high-dimensional integral image, and it is robust to the occlusions. Our technique is evaluated on two datasets captured by commodity depth cameras: an action dataset and a hand gesture dataset. Our classification results are superior to those obtained by the state of the art approaches on both datasets.", "This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90 recognition accuracy were achieved by sampling only about 1 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation.", "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms." ] }
1408.3809
2951245411
Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which change significantly with viewpoint. In contrast, we directly process the pointclouds and propose a new technique for action recognition which is more robust to noise, action speed and viewpoint variations. Our technique consists of a novel descriptor and keypoint detection algorithm. The proposed descriptor is extracted at a point by encoding the Histogram of Oriented Principal Components (HOPC) within an adaptive spatio-temporal support volume around that point. Based on this descriptor, we present a novel method to detect Spatio-Temporal Key-Points (STKPs) in 3D pointcloud sequences. Experimental results show that the proposed descriptor and STKP detector outperform state-of-the-art algorithms on three benchmark human activity datasets. We also introduce a new multiview public dataset and show the robustness of our proposed method to viewpoint variations.
Depth-based local methods use local features, where a set of interest points is extracted from the depth sequence and a feature descriptor is computed for each interest point. For example, @cite_0 used the interest point detector proposed by Dollár @cite_22 and proposed a Comparative Coding Descriptor (CCD). Due to the presence of noise in depth sequences, simply extending color-based interest point detectors such as @cite_28 and @cite_22 may degrade the efficiency of these detectors @cite_26 .
{ "cite_N": [ "@cite_0", "@cite_28", "@cite_22", "@cite_26" ], "mid": [ "26349190", "", "2020163092", "2085735683" ], "abstract": [ "Improving human action recognition in videos is restricted by the inherent limitations of the visual data. In this paper, we take the depth information into consideration and construct a novel dataset of human daily actions. The proposed ACT42 dataset provides synchronized data from 4 views and 2 sources, aiming to facilitate the research of action analysis across multiple views and multiple sources. We also propose a new descriptor of depth information for action representation, which depicts the structural relations of spatiotemporal points within action volume using the distance information in depth data. In experimental validation, our descriptor obtains superior performance to the state-of-the-art action descriptors designed for color information, and more robust to viewpoint variations. The fusion of features from different sources is also discussed, and a simple but efficient method is presented to provide a baseline performance on the proposed dataset.", "", "Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.", "We present a new descriptor for activity recognition from videos acquired by a depth sensor. Previous descriptors mostly compute shape and motion features independently, thus, they often fail to capture the complex joint shape-motion cues at pixel-level. In contrast, we describe the depth sequence using a histogram capturing the distribution of the surface normal orientation in the 4D space of time, depth, and spatial coordinates. To build the histogram, we create 4D projectors, which quantize the 4D space and represent the possible directions for the 4D normal. We initialize the projectors using the vertices of a regular polychoron. Consequently, we refine the projectors using a discriminative density measure, such that additional projectors are induced in the directions where the 4D normals are more dense and discriminative. Through extensive experiments, we demonstrate that our descriptor better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks." ] }
1408.3773
1976537743
We propose a four-stage hierarchical resource allocation scheme for the downlink of a large-scale small-cell network in the context of orthogonal frequency-division multiple access (OFDMA). Since interference limits the capabilities of such networks, resource allocation and interference management are crucial. However, obtaining the globally optimum resource allocation is exponentially complex and mathematically intractable. Here, we develop a partially decentralized algorithm to obtain an effective solution. The three major advantages of our work are as follows: 1) as opposed to a fixed resource allocation, we consider load demand at each access point (AP) when allocating spectrum; 2) to prevent overloaded APs, our scheme is dynamic in the sense that as the users move from one AP to the other, so do the allocated resources, if necessary, and such considerations generally result in huge computational complexity, which brings us to the third advantage: 3) we tackle complexity by introducing a hierarchical scheme comprising four phases: user association, load estimation, interference management via graph coloring, and scheduling. We provide mathematical analysis for the first three steps modeling the user and AP locations as Poisson point processes. Finally, we provide the results of numerical simulations to illustrate the efficacy of our scheme.
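The interference-management phase mentioned above can be illustrated with a simple coloring of an access-point interference graph, so that neighbouring APs never share a sub-band. Greedy coloring and the toy topology below are stand-ins for the actual strategy analysed in the paper.

```python
import networkx as nx

# Edges connect access points whose coverage areas interfere (toy example).
interference = nx.Graph()
interference.add_edges_from([("AP1", "AP2"), ("AP2", "AP3"), ("AP1", "AP3"), ("AP3", "AP4")])

# Greedy coloring assigns a sub-band index (color) to every AP such that
# no two interfering APs share a sub-band.
coloring = nx.coloring.greedy_color(interference, strategy="largest_first")
print(coloring)
print("sub-bands needed:", max(coloring.values()) + 1)
```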
A more realistic simulation-based study of small-cell deployment in a heterogeneous network was reported by @cite_34 . The results suggest either coordination among layers or orthogonal spectrum allocation to improve outage rate. The authors of @cite_33 propose a combination of fractional frequency reuse (FFR) and orthogonal spectrum allocation in a two-tier network differentiating between commercial and home-based femtocells.
{ "cite_N": [ "@cite_34", "@cite_33" ], "mid": [ "2073851917", "1978180069" ], "abstract": [ "The main objective of the paper is to investigate and compare the downlink performance of different LTE heterogeneous network (HetNet) deployment solutions. By adding small cells to the existing macro overlay, network coverage and capacity can be significantly enhanced to accommodate the fast growth of mobile broadband traffic. Emphasis is put on how to optimally assign the spectrum for the different networks layers in an evolved HetNet including outdoor and indoor small cells. The study is conducted for a \"Hot-Zone\" scenario, i.e. a high-traffic area within a realistic dense urban deployment. A broadband traffic volume growth by a factor of 50 compared to today's levels is assumed. The investigated deployment schemes are outdoor pico-only, indoor femto-only and joint pico-femto deployments, all combined with an overlay macro layer. The results indicate that the best network coverage performance with a minimum user data rate of 1 Mbps is achieved when deploying small cells on dedicated channels rather than co-channel deployment. Furthermore, the joint pico and femto deployment turns out to be the right trade-off between increased base station density and enhanced network capacity.", "Heterogeneous networks consisting of small cells (e.g. femtocells) are capable of achieving high capacity and improving indoor cellular coverage area, while the fractional frequency reuse (FFR) scheme has been proposed for upcoming and future cellular systems to improve spectral efficiency in cellular OFDM networks. In two-tier networks of macrocells layered with femtocells, the resource allocation will most likely be sharing the same licensed spectrum. In this paper, we formulate a simple femtocells resource allocation strategies that allows a femtocells base station (FBS) to allocate more resources (co-channel) in high usage areas such as commercial FBS (cFBS) and orthogonal resource allocation for randomly deployed home user FBS (hFBS). We analyze our strategy in a multi-cell systems using area spectral efficiency (ASE) in composite fading consisting of Nakagami-m fading, path-loss and log-normal shadowing. Analytical and simulation results show that our simple resource allocation strategies are able to reduce inter-tier interferences and to offer an improvement in the overall spectral efficiency of the two-tier systems." ] }
1408.3773
1976537743
We propose a four-stage hierarchical resource allocation scheme for the downlink of a large-scale small-cell network in the context of orthogonal frequency-division multiple access (OFDMA). Since interference limits the capabilities of such networks, resource allocation and interference management are crucial. However, obtaining the globally optimum resource allocation is exponentially complex and mathematically intractable. Here, we develop a partially decentralized algorithm to obtain an effective solution. The three major advantages of our work are as follows: 1) as opposed to a fixed resource allocation, we consider load demand at each access point (AP) when allocating spectrum; 2) to prevent overloaded APs, our scheme is dynamic in the sense that as the users move from one AP to the other, so do the allocated resources, if necessary, and such considerations generally result in huge computational complexity, which brings us to the third advantage: 3) we tackle complexity by introducing a hierarchical scheme comprising four phases: user association, load estimation, interference management via graph coloring, and scheduling. We provide mathematical analysis for the first three steps modeling the user and AP locations as Poisson point processes. Finally, we provide the results of numerical simulations to illustrate the efficacy of our scheme.
An ambitious goal in dense networks is to achieve optimal but decentralized resource allocation. The problem of decentralized power allocation was first addressed in @cite_20 . The authors showed that there exists a fully distributed algorithm requiring only local information, provided there exists a common, known SINR at which the system performance is globally optimum and a feasible but unknown power vector that achieves this SINR. Unfortunately, these assumptions are hard to satisfy in practice @cite_24 . The distributed algorithms proposed in @cite_5 @cite_25 maximize the total system capacity while ignoring user rate requirements and fairness among the users both within and among cells, while @cite_1 aims for proportional fairness and ignores individual user rate requirements. To obtain a distributed solution, the authors of @cite_5 @cite_25 simplify the network model to an "interference-ideal" network where the total interference is constant and independent of user location in the cell.
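The fully distributed, local-information algorithm referred to above is commonly illustrated with the classic SINR-target power control iteration, in which each link scales its power by the ratio of the target SINR to its currently measured SINR. The channel gains, noise power and target below are arbitrary toy values; this sketch illustrates the style of algorithm rather than reproducing any cited scheme.

```python
import numpy as np

G = np.array([[1.0, 0.10, 0.05],
              [0.08, 1.0, 0.12],
              [0.06, 0.09, 1.0]])     # G[i, j]: gain from transmitter j to receiver i
noise = 1e-3
gamma_target = 5.0                    # common SINR target assumed known by every link
p = np.full(3, 0.01)                  # initial transmit powers

for _ in range(50):
    interference = G @ p - np.diag(G) * p + noise
    sinr = np.diag(G) * p / interference
    p = (gamma_target / sinr) * p     # each link uses only its own measured SINR

print("final powers:", p)
print("achieved SINRs:", np.diag(G) * p / (G @ p - np.diag(G) * p + noise))
```

When a feasible power vector exists for the chosen target, this iteration converges to the minimal powers meeting the target; otherwise the powers diverge, which reflects the feasibility assumption mentioned in the paragraph.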
{ "cite_N": [ "@cite_1", "@cite_24", "@cite_5", "@cite_25", "@cite_20" ], "mid": [ "2061634004", "1582249331", "2153082703", "2116436484", "2138960193" ], "abstract": [ "In small cell networks (SCNs) co-channel interference is an important issue, and necessitates the use of interference mitigation strategies that allocate resources efficiently. This work discusses a distributed utility-based algorithm for downlink resource allocation (i.e., power and scheduling weights per carrier) in multicarrier SCNs. The proposed distributed downlink resource allocation (DDRA) algorithm aims to maximize the sum utility of the whole system. To achieve this goal, each base station (BS) selects the resource allocation strategy to maximize a surplus function comprising both, own cell utility and interference prices (that reflect the interference that is caused to neighboring cells). Two different utility functions are considered: max-rate and proportional fair-rate. For performance evaluation, a SCN deployed in a single story WINNER office building is considered. Simulation results show that the proposed algorithm is effective in enhancing not only the sum data rate of a SCN, but also the degree of fairness in resource sharing among users.", "We give an asynchronous adaptive algorithm for power control in cellular radio systems, which relaxes the demands of coordination and synchrony between the various mobiles and base stations. It relaxes the need for strict clock synchronization and also allows different links to update their power at different rates; unpredictable, bounded propagation delays are taken into account. The algorithm uses only local measurements and incorporates receiver noise. The overall objective is to minimize transmitters’ powers in a Pareto sense while giving each link a Carrier-to-Interference ratio which is not below a prefixed target. The condition for the existence and uniqueness of such a power distribution is obtained. Conditions are obtained for the asynchronous adaptation to converge to the optimal solution at a geometric rate. These conditions are surprisingly not burdensome.", "We analyze the sum capacity of multicell wireless networks with full resource reuse and channel-driven opportunistic scheduling in each cell. We address the problem of finding the co-channel (throughout the network) user assignment that results in the optimal joint multicell capacity, under a resource-fair constraint and a standard power control strategy. This problem in principle requires processing the complete co-channel gain information, and thus, has so far been justly considered unpractical due to complexity and channel gain signaling overhead. However, we expose here the following key result: The multicell optimal user scheduling problem admits a remarkably simple and fully distributed solution for large networks. This result is proved analytically for an idealized network. From this constructive proof, we propose a practical algorithm that is shown to achieve near maximum capacity for realistic cases of simulated networks of even small sizes.", "Joint optimization of transmit power and scheduling in wireless data networks promises significant system-wide capacity gains. However, this problem is known to be NP-hard and thus difficult to tackle in practice. We analyze this problem for the downlink of a multicell full reuse network with the goal of maximizing the overall network capacity. 
We propose a distributed power allocation and scheduling algorithm which provides significant capacity gain for any finite number of users. This distributed cell coordination scheme, in effect, achieves a form of dynamic spectral reuse, whereby the amount of reuse varies as a function of the underlying channel conditions and only limited inter-cell signaling is required.", "For wireless cellular communication systems, one seeks a simple effective means of power control of signals associated with randomly dispersed users that are reusing a single channel in different cells. By effecting the lowest interference environment, in meeting a required minimum signal-to-interference ratio of rho per user, channel reuse is maximized. Distributed procedures for doing this are of special interest, since the centrally administered alternative requires added infrastructure, latency, and network vulnerability. Successful distributed powering entails guiding the evolution of the transmitted power level of each of the signals, using only focal measurements, so that eventually all users meet the rho requirement. The local per channel power measurements include that of the intended signal as well as the undesired interference from other users (plus receiver noise). For a certain simple distributed type of algorithm, whenever power settings exist for which all users meet the rho requirement, the authors demonstrate exponentially fast convergence to these settings. >" ] }
1408.3317
2949954803
We propose a new method for controlled system synthesis on non-deterministic automata, which includes the synthesis for deadlock-freeness, as well as invariant and reachability expressions. Our technique restricts the behavior of a Kripke structure with labeled transitions, representing the uncontrolled system, such that it adheres to a given requirement specification in an expressive modal logic, while all non-invalidating behavior is retained. This induces maximal permissiveness in the context of supervisory control. Research presented in this paper allows a system model to be constrained according to a broad set of liveness, safety and fairness specifications of desired behavior, and embraces most concepts from Ramadge-Wonham supervisory control, including controllability and marker-state reachability. Synthesis is defined in this paper as a formal construction, which allowed a careful validation of its correctness using the Coq proof assistant.
Ramadge-Wonham supervisory control @cite_4 defines a broadly embraced methodology for controller synthesis on deterministic plant models for requirements specified using automata. It defines a number of key elements in the relationship between plant and controlled system, such as controllability, marker-state reachability, deadlock-freeness and maximal permissiveness. Despite the fact that a strictly separated controller offers advantages from a developmental or implementational point of view, we argue that increased abstraction and flexibility justify research into control synthesis for non-deterministic models. In addition, we emphasize that the automata-based description of desired behavior in the Ramadge-Wonham framework @cite_4 does not allow the specification of requirements of an existential nature. For instance, in this framework it is not possible to specify that a step labeled with a particular event must exist, hence the choice of modal logic as our requirement formalism.
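As a generic illustration of fixed-point style synthesis for an invariant (not the construction validated in Coq in this paper), the sketch below prunes a labeled transition system: states outside the invariant are removed, and any state that can escape to a removed state through an uncontrollable event is removed as well, until a fixed point is reached.

```python
def synthesize_invariant(states, transitions, invariant, uncontrollable):
    """transitions: set of (source, event, target); invariant: set of allowed states;
    uncontrollable: events the controller cannot disable.
    Returns the largest set of retainable states and the transitions kept among them."""
    good = {s for s in states if s in invariant}
    changed = True
    while changed:
        changed = False
        for (src, evt, dst) in list(transitions):
            # src cannot be kept if an uncontrollable step escapes the good set
            if src in good and dst not in good and evt in uncontrollable:
                good.discard(src)
                changed = True
    kept = {(s, e, t) for (s, e, t) in transitions if s in good and t in good}
    return good, kept

states = {"s0", "s1", "s2", "s3"}
transitions = {("s0", "a", "s1"), ("s1", "u", "s3"), ("s0", "b", "s2"), ("s2", "a", "s0")}
good, kept = synthesize_invariant(states, transitions,
                                  invariant={"s0", "s1", "s2"}, uncontrollable={"u"})
print(good)   # s1 is removed: its uncontrollable 'u' step leads outside the invariant
print(kept)
```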
{ "cite_N": [ "@cite_4" ], "mid": [ "1979349468" ], "abstract": [ "The paper studies the control of a class of discrete event processes, i.e., processes that are discrete, asynchronous and possibly nondeterministic. The controlled process is described as the generator of a formal language, while the controller, or supervisor, is constructed from a recognizer for a specified target language that incorporates the desired closed-loop system behavior. The existence problem for a supervisor is reduced to finding the largest controllable language contained in a given legal language. Two examples are provided." ] }
1408.3317
2949954803
We propose a new method for controlled system synthesis on non-deterministic automata, which includes the synthesis for deadlock-freeness, as well as invariant and reachability expressions. Our technique restricts the behavior of a Kripke structure with labeled transitions, representing the uncontrolled system, such that it adheres to a given requirement specification in an expressive modal logic, while all non-invalidating behavior is retained. This induces maximal permissiveness in the context of supervisory control. Research presented in this paper allows a system model to be constrained according to a broad set of liveness, safety and fairness specifications of desired behavior, and embraces most concepts from Ramadge-Wonham supervisory control, including controllability and marker-state reachability. Synthesis is defined in this paper as a formal construction, which allowed a careful validation of its correctness using the Coq proof assistant.
Research in @cite_5 relates Ramadge-Wonham supervisory control to an equivalent model-checking problem, resulting in important observations regarding the mutual exchangeability and complexity analysis of both problems. Although the research in @cite_5 is limited to a deterministic setting, and its synthesis results are not guaranteed to be maximally permissive, it does incorporate a quite expressive set of @math -calculus requirements. Other research based upon a dual approach between control synthesis and model checking studies the incremental effects of transition removal upon the validity of @math -calculus formulas @cite_14 , based on @cite_2 .
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_2" ], "mid": [ "1985861264", "", "1791618115" ], "abstract": [ "Model checking and supervisor synthesis have been successful in solving different design problems related to discrete systems in the last decades. In this paper, we analyze some advantages and drawbacks of these approaches and combine them for mutual improvement. We achieve this through a generalization of the supervisory control problem proposed by Ramadge and Wonham. The objective of that problem is to synthesize a supervisor which constrains a system's behavior according to a given specification, ensuring controllability and coaccessibility. By introducing a new representation of the solution using systems of μ-calculus equations, we are able to handle these two conditions separately and thus to exchange the coaccessibility requirement by any condition that could be used in model checking. Well-known results on μ-calculus model checking allow us to easily assess the computational complexity of any generalization. Moreover, the model checking approach also delivers algorithms to solve the generalized synthesis problem. We include an example in which the coaccessibility requirement is replaced by fairness constraints. The paper also contains an analysis of related work by several authors.", "", "We develop a model-checking algorithm for a logic that permits propositions to be defined with greatest and least fixed points of mutually recursive systems of equations. This logic is as expressive as the alternation-free fragment of the modal mu-calculus identified by Emerson and Lei, and it may therefore be used to encode a number of temporal logics and behavioral preorders. Our algorithm determines whether a process satisfies a formula in time proportional to the product of the sizes of the process and the formula; this improves on the best known algorithm for similar fixed-point logics." ] }
1408.3317
2949954803
We propose a new method for controlled system synthesis on non-deterministic automata, which includes the synthesis for deadlock-freeness, as well as invariant and reachability expressions. Our technique restricts the behavior of a Kripke structure with labeled transitions, representing the uncontrolled system, such that it adheres to a given requirement specification in an expressive modal logic, while all non-invalidating behavior is retained. This induces maximal permissiveness in the context of supervisory control. Research presented in this paper allows a system model to be constrained according to a broad set of liveness, safety and fairness specifications of desired behavior, and embraces most concepts from Ramadge-Wonham supervisory control, including controllability and marker-state reachability. Synthesis is defined in this paper as a formal construction, which allowed a careful validation of its correctness using the Coq proof assistant.
Research by D'Ippolito and others @cite_11 , @cite_18 is based upon the framework of the World Machine model for the synthesis of liveness properties, stated in fluent temporal logic. A distinction is made between controlled and monitored behavior, and between system goals and environment assumptions @cite_11 . A controller is then derived from a winning strategy in a two-player game between the original and the required behavior, expressed in terms of the notion of generalized reactivity introduced in @cite_11 . Research in @cite_11 also emphasizes that pruning-based synthesis is not adequate for the control of non-deterministic models, and it defines synthesis of liveness goals under a maximality criterion, referred to as a best-effort controller. However, this maximality requirement is trace-based and therefore cannot capture the inclusion of all possible infinite behaviors. In addition, some results in @cite_11 are based upon the assumption of a deterministic plant specification.
{ "cite_N": [ "@cite_18", "@cite_11" ], "mid": [ "2084477406", "2039568153" ], "abstract": [ "We present SGR(1), a novel synthesis technique and methodological guidelines for automatically constructing event-based behavior models. Our approach works for an expressive subset of liveness properties, distinguishes between controlled and monitored actions, and differentiates system goals from environment assumptions. We show that assumptions must be modeled carefully in order to avoid synthesizing anomalous behavior models. We characterize nonanomalous models and propose assumption compatibility, a sufficient condition, as a methodological guideline.", "We present a novel technique for synthesising behaviour models that works for an expressive subset of liveness properties and conforms to the foundational requirements engineering World Machine model, dealing explicitly with assumptions on environment behaviour and distinguishing controlled and monitored actions. This is the first technique that conforms to what is considered best practice in requirements specifications: distinguishing prescriptive and descriptive assertions. Most previous attempts at using synthesis of behavioural models were restricted to handling only safety properties. Those that did support liveness were inadequate for synthesis of operational event based models as they did not include the bespoke distinction between system goals and environment assumptions." ] }
1408.3304
2951115265
Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the "tracking-by-detection" paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.
Recent approaches have formulated multi-frame, multi-object tracking as a min-cost network flow optimization problem @cite_2 @cite_9 @cite_23 , where the optimal flow in a connected graph of detections encodes the selected tracks. While earlier min-cost network flow optimization methods have used linear programming, recently proposed solutions to the min-cost flow optimization include push-relabel methods @cite_2 , successive shortest paths @cite_9 @cite_23 , and dynamic programming @cite_9 . To ensure globally optimal and efficient solutions, previous methods have often restricted the cost to unary terms over all edges. While non-unary terms break the optimality of solutions in general, dependencies between detections have been enforced by greedy approaches, such as greedily eliminating the overlapping detections after each step of a sequential selection of distinct tracks in @cite_9 . This non-global optimization approach, however, cannot recover from early suboptimal decisions.
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_2" ], "mid": [ "2016135469", "2171243491", "" ], "abstract": [ "We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance.", "Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.", "" ] }
1408.3304
2951115265
Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the "tracking-by-detection" paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.
Additional dependencies among detections can also be incorporated into the min-cost network flow tracking by modifying the underlying graph structure. Butt and Collins @cite_24 follow this approach and minimize the modified objective using Lagrangian methods. While the method works well for the particular type of cost it introduces, generalizing it to new types of pairwise costs would require appropriate modifications of the graph structure, which is non-trivial in general. Moreover, combining multiple costs within such a framework would be difficult. In contrast, our framework allows new terms to be added without any modification to the underlying optimization.
{ "cite_N": [ "@cite_24" ], "mid": [ "2127084114" ], "abstract": [ "We propose a method for global multi-target tracking that can incorporate higher-order track smoothness constraints such as constant velocity. Our problem formulation readily lends itself to path estimation in a trellis graph, but unlike previous methods, each node in our network represents a candidate pair of matching observations between consecutive frames. Extra constraints on binary flow variables in the graph result in a problem that can no longer be solved by min-cost network flow. We therefore propose an iterative solution method that relaxes these extra constraints using Lagrangian relaxation, resulting in a series of problems that ARE solvable by min-cost flow, and that progressively improve towards a high-quality solution to our original optimization problem. We present experimental results showing that our method outperforms the standard network-flow formulation as well as other recent algorithms that attempt to incorporate higher-order smoothness constraints." ] }
1408.3304
2951115265
Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the "tracking-by-detection" paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.
@cite_26 and @cite_22 @cite_17 formulate the problem in a framework that first selects detections and then connects them, using a learned distance measure @cite_26 or a CRF @cite_22 @cite_17 . Long-term occlusions are handled in @cite_26 by combining appearance and motion similarity. While @cite_22 @cite_17 propose to alternate between discrete and continuous optimizations in order to minimize several cost functions, the presence of two levels of optimization makes theoretical or empirical guarantees of optimality hard to give. Unlike these works, our approach uses a convex relaxation that allows us to give an empirical guarantee of optimality for our solutions.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_17" ], "mid": [ "1966136723", "2130258433", "2083049794" ], "abstract": [ "This paper addresses the problem of simultaneous tracking of multiple targets in a video. We first apply object detectors to every video frame. Pairs of detection responses from every two consecutive frames are then used to build a graph of tracklets. The graph helps transitively link the best matching tracklets that do not violate hard and soft contextual constraints between the resulting tracks. We prove that this data association problem can be formulated as finding the maximum-weight independent set (MWIS) of the graph. We present a new, polynomial-time MWIS algorithm, and prove that it converges to an optimum. Similarity and contextual constraints between object detections, used for data association, are learned online from object appearance and motion properties. Long-term occlusions are addressed by iteratively repeating MWIS to hierarchically merge smaller tracks into longer ones. Our results demonstrate advantages of simultaneously accounting for soft and hard contextual constraints in multitarget tracking. We outperform the state of the art on the benchmark datasets.", "When tracking multiple targets in crowded scenarios, modeling mutual exclusion between distinct targets becomes important at two levels: (1) in data association, each target observation should support at most one trajectory and each trajectory should be assigned at most one observation per frame, (2) in trajectory estimation, two trajectories should remain spatially separated at all times to avoid collisions. Yet, existing trackers often sidestep these important constraints. We address this using a mixed discrete-continuous conditional random field (CRF) that explicitly models both types of constraints: Exclusion between conflicting observations with super modular pairwise terms, and exclusion between trajectories by generalizing global label costs to suppress the co-occurrence of incompatible labels (trajectories). We develop an expansion move-based MAP estimation scheme that handles both non-sub modular constraints and pairwise global label costs. Furthermore, we perform a statistical analysis of ground-truth trajectories to derive appropriate CRF potentials for modeling data fidelity, target dynamics, and inter-target occlusion.", "Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. 
These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets." ] }
1408.3304
2951115265
Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the "tracking-by-detection" paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.
Other methods @cite_14 @cite_16 @cite_25 use offline or online training to learn a similarity measure between tracklets. These methods do not provide any optimality guarantee, though. In addition, training might be difficult in some conditions. For example, online training to discriminate appearances might be erroneous when objects move very close to each other (Figure ). We avoid such problems by using pairwise terms to robustify the tracker to detection errors.
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_16" ], "mid": [ "2122088301", "2035153336", "2134905085" ], "abstract": [ "We propose a learning-based hierarchical approach of multi-target tracking from a single camera by progressively associating detection responses into longer and longer track fragments (tracklets) and finally the desired target trajectories. To define tracklet affinity for association, most previous work relies on heuristically selected parametric models; while our approach is able to automatically select among various features and corresponding non-parametric models, and combine them to maximize the discriminative power on training data by virtue of a HybridBoost algorithm. A hybrid loss function is used in this algorithm because the association of tracklet is formulated as a joint problem of ranking and classification: the ranking part aims to rank correct tracklet associations higher than other alternatives; the classification part is responsible to reject wrong associations when no further association should be done. Experiments are carried out by tracking pedestrians in challenging datasets. We compare our approach with state-of-the-art algorithms to show its improvement in terms of tracking accuracy.", "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.", "We propose a learning-based Conditional Random Field (CRF) model for tracking multiple targets by progressively associating detection responses into long tracks. Tracking task is transformed into a data association problem, and most previous approaches developed heuristical parametric models or learning approaches for evaluating independent affinities between track fragments (tracklets). We argue that the independent assumption is not valid in many cases, and adopt a CRF model to consider both tracklet affinities and dependencies among them, which are represented by unary term costs and pairwise term costs respectively. Unlike previous methods, we learn the best global associations instead of the best local affinities between tracklets, and transform the task of finding the best association into an energy minimization problem. A RankBoost algorithm is proposed to select effective features for estimation of term costs in the CRF model, so that better associations have lower costs. Our approach is evaluated on challenging pedestrian data sets, and are compared with state-of-art methods. 
Experiments show effectiveness of our algorithm as well as improvement in tracking performance." ] }
1408.3304
2951115265
Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the "tracking-by-detection" paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.
Incorporation of pairwise terms into the min-cost network flow formulation has been previously attempted by Choi and Savarese @cite_12 . Their work, however, is focused on jointly optimizing tracking and activity recognition. In contrast, we focus on tracking in particular, and propose a generic framework enabling inclusion of multiple types of pairwise costs and providing empirical measures of small suboptimality.
{ "cite_N": [ "@cite_12" ], "mid": [ "100367037" ], "abstract": [ "We present a coherent, discriminative framework for simultaneously tracking multiple people and estimating their collective activities. Instead of treating the two problems separately, our model is grounded in the intuition that a strong correlation exists between a person's motion, their activity, and the motion and activities of other nearby people. Instead of directly linking the solutions to these two problems, we introduce a hierarchy of activity types that creates a natural progression that leads from a specific person's motion to the activity of the group as a whole. Our model is capable of jointly tracking multiple people, recognizing individual activities (atomic activities), the interactions between pairs of people (interaction activities), and finally the behavior of groups of people (collective activities). We also propose an algorithm for solving this otherwise intractable joint inference problem by combining belief propagation with a version of the branch and bound algorithm equipped with integer programming. Experimental results on challenging video datasets demonstrate our theoretical claims and indicate that our model achieves the best collective activity classification results to date." ] }
1408.3304
2951115265
Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the "tracking-by-detection" paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.
We propose an algorithm that incorporates quadratic pairwise costs into the traditional min-cost network flow formulation. Unlike previous methods @cite_23 @cite_14 , which either build on top of min-cost flow solutions @cite_22 or change the network structure @cite_24 , we propose a modification to the standard optimization algorithm. Such quadratic costs can represent several useful properties, such as the similar motion of people in a rally or the co-occurrence of tracks for different parts of the same object instance.
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_22", "@cite_23" ], "mid": [ "2127084114", "2122088301", "2130258433", "2171243491" ], "abstract": [ "We propose a method for global multi-target tracking that can incorporate higher-order track smoothness constraints such as constant velocity. Our problem formulation readily lends itself to path estimation in a trellis graph, but unlike previous methods, each node in our network represents a candidate pair of matching observations between consecutive frames. Extra constraints on binary flow variables in the graph result in a problem that can no longer be solved by min-cost network flow. We therefore propose an iterative solution method that relaxes these extra constraints using Lagrangian relaxation, resulting in a series of problems that ARE solvable by min-cost flow, and that progressively improve towards a high-quality solution to our original optimization problem. We present experimental results showing that our method outperforms the standard network-flow formulation as well as other recent algorithms that attempt to incorporate higher-order smoothness constraints.", "We propose a learning-based hierarchical approach of multi-target tracking from a single camera by progressively associating detection responses into longer and longer track fragments (tracklets) and finally the desired target trajectories. To define tracklet affinity for association, most previous work relies on heuristically selected parametric models; while our approach is able to automatically select among various features and corresponding non-parametric models, and combine them to maximize the discriminative power on training data by virtue of a HybridBoost algorithm. A hybrid loss function is used in this algorithm because the association of tracklet is formulated as a joint problem of ranking and classification: the ranking part aims to rank correct tracklet associations higher than other alternatives; the classification part is responsible to reject wrong associations when no further association should be done. Experiments are carried out by tracking pedestrians in challenging datasets. We compare our approach with state-of-the-art algorithms to show its improvement in terms of tracking accuracy.", "When tracking multiple targets in crowded scenarios, modeling mutual exclusion between distinct targets becomes important at two levels: (1) in data association, each target observation should support at most one trajectory and each trajectory should be assigned at most one observation per frame, (2) in trajectory estimation, two trajectories should remain spatially separated at all times to avoid collisions. Yet, existing trackers often sidestep these important constraints. We address this using a mixed discrete-continuous conditional random field (CRF) that explicitly models both types of constraints: Exclusion between conflicting observations with super modular pairwise terms, and exclusion between trajectories by generalizing global label costs to suppress the co-occurrence of incompatible labels (trajectories). We develop an expansion move-based MAP estimation scheme that handles both non-sub modular constraints and pairwise global label costs. Furthermore, we perform a statistical analysis of ground-truth trajectories to derive appropriate CRF potentials for modeling data fidelity, target dynamics, and inter-target occlusion.", "Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. 
Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts." ] }
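One way to write the kind of objective this record refers to — the usual unary min-cost flow costs augmented with pairwise costs over pairs of flow variables — is shown below. The notation is assumed for illustration and is not taken from the paper: x_e are indicator flow variables on the edges E of the tracking graph, c_e the unary costs, and q_{ef} the pairwise costs (for instance, rewarding similar motion or co-occurrence).

```latex
\begin{equation*}
\min_{x \in \{0,1\}^{E}} \;\; \sum_{e \in E} c_{e}\, x_{e}
\;+\; \sum_{e \in E} \sum_{f \in E} q_{ef}\, x_{e} x_{f}
\quad \text{subject to the flow-conservation constraints of the tracking graph,}
\end{equation*}
```

with the convex relaxation obtained by replacing the integrality constraint x in {0,1}^E by x in [0,1]^E.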
1408.3304
2951115265
Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the "tracking-by-detection" paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.
Although obtaining the global optimum in such a case is NP-hard @cite_13 , we outline an approach that obtains near-optimal solutions and empirically verify how close they are to the optimum. We present a linear relaxation of the quadratic term that is fast to optimize, followed by a Frank-Wolfe based rounding heuristic to obtain an integer solution.
{ "cite_N": [ "@cite_13" ], "mid": [ "2013603106" ], "abstract": [ "The quadratic assignment problem (QAP), one of the most difficult problems in the NP-hard class, models many real-life problems in several areas such as facilities location, parallel and distributed computing, and combinatorial data analysis. Combinatorial optimization problems, such as the traveling salesman problem, maximal clique and graph partitioning can be formulated as a QAP. In this paper, we present some of the most important QAP formulations and classify them according to their mathematical sources. We also present a discussion on the theoretical resources used to define lower bounds for exact and heuristic algorithms. We then give a detailed discussion of the progress made in both exact and heuristic solution methods, including those formulated according to metaheuristic strategies. Finally, we analyze the contributions brought about by the study of different approaches." ] }
1408.3044
2117560809
It is a known fact that, given two rooted binary phylogenetic trees, the concept of maximum acyclic agreement forests is sufficient to compute hybridization networks with minimum hybridization number. In this work, we demonstrate, by first presenting an algorithm and then showing its correctness, that this concept is also sufficient in the case of multiple input trees. More precisely, we show that for computing minimum hybridization networks for multiple rooted binary phylogenetic trees on the same set of taxa, it suffices to take only maximum acyclic agreement forests into account. Moreover, this article contains a proof showing that the minimum hybridization number for a set of rooted binary phylogenetic trees on the same set of taxa can also be computed by solving subproblems referring to common clusters of the input trees.
The well-known work of Baroni @cite_11 contains a proof showing that the hybridization number of rooted binary phylogenetic @math -trees can be computed by simply summing up the hybridization numbers of their common clusters. More precisely, given two rooted binary phylogenetic @math -trees @math and @math containing a common cluster @math , we have @math , where @math and @math refer to the respective input trees in which the common cluster has been replaced by a new taxon @math .
{ "cite_N": [ "@cite_11" ], "mid": [ "2127498940" ], "abstract": [ "We describe some new and recent results that allow for the analysis and representation of reticulate evolution by nontree networks. In particular, we (1) present a simple result to show that, despite the presence of reticulation, there is always a well-defined underlying tree that corresponds to those parts of life that do not have a history of reticulation; (2) describe and apply new theory for determining the smallest number of hybridization events required to explain conflicting gene trees; and (3) present a new algorithm to determine whether an arbitrary rooted network can be realized by contemporaneous reticulation events. We illustrate these results with examples." ] }
1408.3297
2159201721
We present the results of a comprehensive multi-pass analysis of visualization paper keywords supplied by authors for their papers published in the IEEE Visualization conference series (now called IEEE VIS) between 1990–2015. From this analysis we derived a set of visualization topics that we discuss in the context of the current taxonomy that is used to categorize papers and assign reviewers in the IEEE VIS reviewing process. We point out missing and overemphasized topics in the current taxonomy and start a discussion on the importance of establishing common visualization terminology. Our analysis of research topics in visualization can, thus, serve as a starting point to (a) help create a common vocabulary to improve communication among different visualization sub-groups, (b) facilitate the process of understanding differences and commonalities of the various research sub-fields in visualization, (c) provide an understanding of emerging new research trends, (d) facilitate the crucial step of finding the right reviewers for research submissions, and (e) eventually lead to a comprehensive taxonomy of visualization research. One additional tangible outcome of our work is an online query tool ( http://keyvis.org ) that allows visualization researchers to easily browse the 3952 keywords used for IEEE VIS papers since 1990 to find related work or make informed keyword choices.
We are not the first to have made an effort to summarize a large set of visualization papers in order to understand topics or trends. One of the earliest such efforts was a summary of visualization research papers by Voegele @cite_35 in 1995, in the form of a two-dimensional clustering of all visualization papers up to that point. Other efforts have focused on specific aspects of visualization research. Sedlmair et al. @cite_34 , for example, did a thorough analysis of all design study papers to summarize practices and pitfalls of design study approaches. Further, Lam et al. @cite_25 studied the practice of evaluation in Information Visualization papers, which was then extended to include all visualization papers by Isenberg et al. @cite_24 . Others have surveyed, for instance, the literature on interactive visualization @cite_26 @cite_17 , on tree visualizations @cite_33 , on quality metrics in high-dimensional data visualization @cite_11 , on human-computer collaborative problem-solving @cite_22 , or on visualization on interactive surfaces @cite_37 .
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_26", "@cite_11", "@cite_33", "@cite_22", "@cite_24", "@cite_34", "@cite_25", "@cite_17" ], "mid": [ "2327791771", "2531006627", "2113386367", "2157379557", "2159367517", "2048107685", "1992743299", "1970569592", "2058203255", "2161133721" ], "abstract": [ "Information visualization and data visualization are often viewed as similar, but distinct domains, and they have drawn an increasingly broad range of interest from diverse sectors of academia and industry. This study systematically analyzes and compares the intellectual landscapes of the two domains between 2000 and 2014. The present study is based on bibliographic records retrieved from the Web of Science. Using a topic search and a citation expansion, we collected two sets of data in each domain. Then, we identified emerging trends and recent developments in information visualization and data visualization, captivated in intellectual landscapes, landmark articles, bursting keywords, and citation trends of the domains. We found out that both domains have computer engineering and applications as their shared grounds. Our study reveals that information visualization and data visualization have scrutinized algorithmic concepts underlying the domains in their early years. Successive literature citing the datasets focuses on applying information and data visualization techniques to biomedical research. Recent thematic trends in the fields reflect that they are also diverging from each other. In data visualization, emerging topics and new developments cover dimensionality reduction and applications of visual techniques to genomics. Information visualization research is scrutinizing cognitive and theoretical aspects. In conclusion, information visualization and data visualization have co-evolved. At the same time, both fields are distinctively developing with their own scientific interests.", "We present a systematic overview of the state-of-the art in research at the intersection of interactive displays and visualization. As the access to and analysis of information is becoming increasingly important anywhere and at any time, researchers have begun to investigate the role of interactive displays as data analysis platforms. Visualization applications play a crucial role in data analysis and development of dedicated systems and tools for small to large interactive displays to support such application contexts is underway. Researchers have investigated how to support data analysis with visualizations with dedicated interaction modalities such as touch, tangibles, or pens; have developed and studied applications for interactive displays such as tabletops or large wall displays; have studied the support of collaborative analysis around interactive displays; and provided toolkits and guidelines for writing software and designing a technical setup for data analysis with visualizations on interactive displays. We contribute a systematic and quantitative assessment of the literature from 10 different venues, an open repository of papers, and a code-set that can be used to categorize the research space. Our work points out that research has so far largely focused on the development of interaction techniques, for multi-touch tabletop devices, and 2D spatial and abstract visualizations.", "Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Normanpsilas Seven Stages of Action to facilitate study. 
From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) system-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2-4) and evaluation (5-7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.", "In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research.", "Tree visualization is one of the best-studied areas of information visualization; researchers have developed more than 200 visualization and layout techniques for trees. The treevis.net project aims to provide a hand-curated bibliographical reference to this ever-growing wealth of techniques. It offers a visual overview that users can filter to a desired subset along the design criteria of dimensionality, edge representation, and node alignment. Details, including links to the original publications, can be brought up on demand. Treevis.net has become a community effort, with researchers sending in preprints of their tree visualization techniques to be published or pointing out additional information.", "Visual Analytics is “the science of analytical reasoning facilitated by visual interactive interfaces” [70]. The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. 
In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on humanand machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.", "We present an assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference. Our goal is to reflect on a meta-level about evaluation in our community through a systematic understanding of the characteristics and goals of presented evaluations. For this purpose we conducted a systematic review of ten years of evaluations in the published papers using and extending a coding scheme previously established by [2012]. The results of our review include an overview of the most common evaluation goals in the community, how they evolved over time, and how they contrast or align to those of the IEEE Information Visualization conference. In particular, we found that evaluations specific to assessing resulting images and algorithm performance are the most prevalent (with consistently 80-90 of all papers since 1997). However, especially over the last six years there is a steady increase in evaluation methods that include participants, either by evaluating their performances and subjective feedback or by evaluating their work practices and their improved analysis and reasoning capabilities using visual tools. Up to 2010, this trend in the IEEE Visualization conference was much more pronounced than in the IEEE Information Visualization conference which only showed an increasing percentage of evaluation through user performance and experience testing. Since 2011, however, also papers in IEEE Information Visualization show such an increase of evaluations of work practices and analysis as well as reasoning using visual tools. Further, we found that generally the studies reporting requirements analyses and domain-specific work practices are too informally reported which hinders cross-comparison and lowers external validity.", "Design studies are an increasingly popular form of problem-driven visualization research, yet there is little guidance available about how to do them effectively. In this paper we reflect on our combined experience of conducting twenty-one design studies, as well as reading and reviewing many more, and on an extensive literature review of other field work methods and methodologies. Based on this foundation we provide definitions, propose a methodological framework, and provide practical guidance for conducting design studies. We define a design study as a project in which visualization researchers analyze a specific real-world problem faced by domain experts, design a visualization system that supports solving this problem, validate the design, and reflect about lessons learned in order to refine visualization design guidelines. 
We characterize two axes - a task clarity axis from fuzzy to crisp and an information location axis from the domain expert's head to the computer - and use these axes to reason about design study contributions, their suitability, and uniqueness from other approaches. The proposed methodological framework consists of 9 stages: learn, winnow, cast, discover, design, implement, deploy, reflect, and write. For each stage we provide practical guidance and outline potential pitfalls. We also conducted an extensive literature survey of related methodological approaches that involve a significant amount of qualitative field work, and compare design study methodology to that of ethnography, grounded theory, and action research.", "We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios, evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals and the provided examples can be consulted for guidance on how to design one's own study.", "Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction." ] }
1408.3297
2159201721
We present the results of a comprehensive multi-pass analysis of visualization paper keywords supplied by authors for their papers published in the IEEE Visualization conference series (now called IEEE VIS) between 1990–2015. From this analysis we derived a set of visualization topics that we discuss in the context of the current taxonomy that is used to categorize papers and assign reviewers in the IEEE VIS reviewing process. We point out missing and overemphasized topics in the current taxonomy and start a discussion on the importance of establishing common visualization terminology. Our analysis of research topics in visualization can, thus, serve as a starting point to (a) help create a common vocabulary to improve communication among different visualization sub-groups, (b) facilitate the process of understanding differences and commonalities of the various research sub-fields in visualization, (c) provide an understanding of emerging new research trends, (d) facilitate the crucial step of finding the right reviewers for research submissions, and (e) eventually lead to a comprehensive taxonomy of visualization research. One additional tangible outcome of our work is an online query tool ( http://keyvis.org ) that allows visualization researchers to easily browse the 3952 keywords used for IEEE VIS papers since 1990 to find related work or make informed keyword choices.
In other disciplines, specific techniques have been used to analyze the scientific literature more broadly, to get a better sense of global research trends, links, and patterns. Co-word analysis is one approach among others (e.g., co-citation analysis) that has tackled this problem by analyzing the scientific literature according to the co-occurrence of keywords, words in titles, abstracts, or even the full texts of scientific articles @cite_28 @cite_1 @cite_6 @cite_10 @cite_3 @cite_29 . Callon et al. @cite_14 , in particular, wrote a seminal book on the topic that provides several methods that others have used and extended.
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_29", "@cite_1", "@cite_3", "@cite_6", "@cite_10" ], "mid": [ "2487779721", "2029088861", "2148204985", "107389347", "2142944882", "643559210", "1537982310" ], "abstract": [ "", "The goal of this paper is to show how co-word analysis techniques can be used to study interactions between academic and technological research. It is based upon a systematic content analysis of publications in the polymer science field over a period of 15 years. The results concern a.) the evolution of research in different subject areas and the patterns of their interaction; b.) a description of subject area “life cycles”; c.) an analysis of ”research trajectories” given factors of stability and change in a research network; d.) the need to use both science push and technology pull theories to explain the interaction dynamics of a research field. The co-word techniques developed in this paper should help to build a bridge between research in scientometrics and work underway to better understand the economics of innovation.", "In this paper the adequacy of the co-word method for mapping the structure of scientific inquiry is explored. Co-word analysis of both the keywords and the titles of a set of papers in acidification research' is undertaken and the results are found to be comparable, though the keyword-derived results provide greater detail. This strongly suggests that keyword indexing doest not, as has sometimes been claimed, distort coword findings. It also points to differences between titles (which often emphasize the supposed originality of an article) and keywords (which tend to show the relationship between the paper and other publications). The paper also explores important differences between the methodological assumptions that underlie the Paris Keele co-word clustering algorithms and the factor analysis method for creating clusters.", "The use of topic models to analyze domain-specific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expert-provided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.", "This paper describes recent developments in the co-word method and illustrates, for the case of acid rain research, the way in which the method can be used to detect (a) the themes of research to be found in a given area of science, (b) the relationships between those themes, (c) the extent to which they are central to the area in question and (d) the degree to which they are internally structured. It is also suggested that the method may be used to draw comparative research profiles for different countries. Though the data used are only preliminiary, it is argued that the method has now been developed to the point where its results are both quite robust and easily assimilable. 
It is, accordingly, now an appropriate tool for policy analysis.", "Preface Historical Overview A Short History The Emergence of Institutions Metrics: Approaches, Methods, Techniques Measurement: Concepts and Issues Selecting a Metric Inputs to Science and Technology Outputs from Science and Technology: Categories and Metrics Economic and Financial Metrics Bibliometric Measures: Publications and Citations Co-Word Analysis and Mapping of Science and Technology The Metric of Patents The Metric of Peer Review The Metric of Process Outcomes Performance of Science and Technology Applications: The Value--In Practice--of Science and Technology Metrics and Evaluation of Academic Science and Technology Metrics and Evaluation of Industrial Science and Technology Science, Technology, and Strategy Metrics and Evaluation of Public-Sector Science and Technology Methods and Evaluation of National Innovation Systems Values, Ethics, and Implications Selected Bibliography Index", "" ] }
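A minimal sketch of the co-word idea described in the record above — counting how often pairs of author keywords occur together on the same paper, the raw material for co-word maps and clusters — could look as follows; the keyword lists are invented examples, not data from the corpus discussed in the paper.

```python
# Toy co-word analysis: build keyword co-occurrence counts from per-paper
# author keyword lists; these counts are what co-word maps and clusters use.
from collections import Counter
from itertools import combinations

papers = [
    ["volume rendering", "gpu", "transfer function"],
    ["volume rendering", "isosurface"],
    ["graph drawing", "interaction"],
    ["interaction", "gpu"],
]

cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

for (a, b), count in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {count}")
```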
1408.3297
2159201721
We present the results of a comprehensive multi-pass analysis of visualization paper keywords supplied by authors for their papers published in the IEEE Visualization conference series (now called IEEE VIS) between 1990–2015. From this analysis we derived a set of visualization topics that we discuss in the context of the current taxonomy that is used to categorize papers and assign reviewers in the IEEE VIS reviewing process. We point out missing and overemphasized topics in the current taxonomy and start a discussion on the importance of establishing common visualization terminology. Our analysis of research topics in visualization can, thus, serve as a starting point to (a) help create a common vocabulary to improve communication among different visualization sub-groups, (b) facilitate the process of understanding differences and commonalities of the various research sub-fields in visualization, (c) provide an understanding of emerging new research trends, (d) facilitate the crucial step of finding the right reviewers for research submissions, and (e) eventually lead to a comprehensive taxonomy of visualization research. One additional tangible outcome of our work is an online query tool ( http://keyvis.org ) that allows visualization researchers to easily browse the 3952 keywords used for IEEE VIS papers since 1990 to find related work or make informed keyword choices.
Co-word analysis has been used in different research areas, e.g., polymer chemistry @cite_28 , acid rain research @cite_3 , or education @cite_7 . Others further restricted the scope of the literature to specific countries, such as Hu and Liu et al.'s @cite_38 @cite_20 co-word analyses on library and information science in China. The co-word analysis studies closest to our work are Coulter et al.'s @cite_8 work on the software engineering community, Hoonlor et al.'s @cite_5 general investigation of the computer science literature, and, most recently, Liu et al.'s @cite_23 analysis of the human-computer interaction literature. Liu et al. examined papers of the ACM CHI conference from 1994--2013, identified research themes and their evolution, and classified individual keywords as popular, core, or backbone topics. We employ approaches similar to those used, in particular, in Liu et al.'s work. Naturally, however, we differ in that our focus is on a different research community, visualization, with different keywords, trends, and patterns, and a different historical evolution.
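To make the surveyed co-word method concrete, the following is a minimal sketch of its core computation: counting keyword co-occurrences across papers and normalizing them into an association strength. The toy keyword lists and the particular normalization are illustrative assumptions and are not taken from any of the cited studies.

    from collections import Counter
    from itertools import combinations

    # Toy per-paper author keyword lists; real studies would read these from a
    # bibliographic database.
    papers = [
        ["information visualization", "graph drawing", "interaction"],
        ["volume rendering", "transfer function", "interaction"],
        ["graph drawing", "interaction", "evaluation"],
    ]

    keyword_counts = Counter()
    pair_counts = Counter()
    for keywords in papers:
        unique = sorted(set(k.lower() for k in keywords))
        keyword_counts.update(unique)
        for a, b in combinations(unique, 2):
            pair_counts[(a, b)] += 1

    # Association strength e(i, j) = c_ij^2 / (c_i * c_j), a normalization
    # commonly used in co-word studies; values lie in [0, 1].
    association = {
        pair: c ** 2 / (keyword_counts[pair[0]] * keyword_counts[pair[1]])
        for pair, c in pair_counts.items()
    }
    for (a, b), e in sorted(association.items(), key=lambda kv: -kv[1]):
        print(f"{a} -- {b}: {e:.2f}")

Clustering and mapping steps, such as the strategic diagrams mentioned in the cited studies, would then operate on this association matrix.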
{ "cite_N": [ "@cite_38", "@cite_7", "@cite_8", "@cite_28", "@cite_3", "@cite_23", "@cite_5", "@cite_20" ], "mid": [ "1980990409", "2120445341", "2084773904", "2029088861", "2142944882", "2110466318", "2031177898", "2012593731" ], "abstract": [ "This study aims to reveal the intellectual structure of Library and Information Science (LIS) in China during the period 2008---2012 utilizing co-word analysis. The status and trends of LIS in China are achieved by measuring the correlation coefficient of selected keywords extracted from relevant journals in the Chinese Journal Full-Text Database. In co-word analysis, multivariate statistical analysis and social network analysis are applied to obtain 13 clusters of keywords, a two-dimensional map, centrality and density of clusters, a strategic diagram and a relation network. Based on these results, the following conclusions can be drawn: (i) LIS in China has some established and well-developed research topics; (ii) a few emerging topics have a great potential for development; and (iii), the research topics in this LIS field are largely decentralized as a whole, where there are many marginal and immature topics.", "The field of distance education is composed of a multiplicity of topics leading to a vast array of research literature. However, the research does not provide a chronological picture of the topics it addresses, making it difficult to develop an overview of the evolution and trends in the literature. To address this issue, a co-word analysis was performed on the abstracts of research articles found in two prominent North American research journals (N = 517), the American Journal of Distance Education and the Journal of Distance Education, between 1987 and 2005. The analysis yielded underlying trends and themes for three different periods (pre-Web, emerging Web, and maturing Web). Additionally, similarity index analyses were conducted across time periods. The pre-Web era was characterized by the need for quality and development. The emerging Web era was characterized by the development of theory. The maturing Web era was characterized by interaction and the use of tools for communication. The results demonstrate that the North American distance education research literature is characterized by having few consistent and focused lines of inquiry. Conclusions are provided.", "This empirical research demonstrates the effectiveness of content analysis to map the research literature of the software engineering discipline. The results suggest that certain research themes in software engineering have remained constant, but with changing thrusts. Other themes have arisen, matured, and then faded as major research topics, while still others seem transient or immature. Co-word analysis is the specific technique used. This methodology identifies associations among publication descriptors (indexing terms) from the ACM Computing Classification System and produces networks of descriptors that reveal these underlying patterns. This methodology is applicable to other domains with a supporting corpus of textual data. While this study utilizes index terms from a fixed taxonomy, that restriction is not inherent; the descriptors can be generated from the corpus. Hence, co-word analysis and the supporting software tools employed here can provide unique insights into any discipline's evolution.", "The goal of this paper is to show how co-word analysis techniques can be used to study interactions between academic and technological research. 
It is based upon a systematic content analysis of publications in the polymer science field over a period of 15 years. The results concern a.) the evolution of research in different subject areas and the patterns of their interaction; b.) a description of subject area “life cycles”; c.) an analysis of ”research trajectories” given factors of stability and change in a research network; d.) the need to use both science push and technology pull theories to explain the interaction dynamics of a research field. The co-word techniques developed in this paper should help to build a bridge between research in scientometrics and work underway to better understand the economics of innovation.", "This paper describes recent developments in the co-word method and illustrates, for the case of acid rain research, the way in which the method can be used to detect (a) the themes of research to be found in a given area of science, (b) the relationships between those themes, (c) the extent to which they are central to the area in question and (d) the degree to which they are internally structured. It is also suggested that the method may be used to draw comparative research profiles for different countries. Though the data used are only preliminiary, it is argued that the method has now been developed to the point where its results are both quite robust and easily assimilable. It is, accordingly, now an appropriate tool for policy analysis.", "This study employs hierarchical cluster analysis, strategic diagrams and network analysis to map and visualize the intellectual landscape of the CHI conference on Human Computer Interaction through the use of co-word analysis. The study quantifies and describes the thematic evolution of the field based on a total of 3152 CHI articles and their associated 16035 keywords published between 1994 and 2013. The analysis is conducted for two time periods (1994-2003, 2004-2013) and a comparison between them highlights the underlying trends in our community. More significantly, this study identifies the evolution of major themes in the discipline, and highlights individual topics as popular, core, or backbone research topics within HCI.", "Keywords in the ACM Digital Library and IEEE Xplore digital library and in NSF grants anticipate future CS research.", "The aim of this study is to map the intellectual structure of digital library (DL) field in China during the period of 2002---2011. Co-word analysis was employed to reveal the patterns of DL field in China through measuring the association strength of keywords in relevant journals. Data was collected from Chinese Journal Full-Text Database during the period of 2002---2011. And then, the co-occurrence matrix of keywords was analyzed by the methods of multivariate statistical analysis and social network analysis. The results mainly include five parts: seven clusters of keywords, a two-dimensional map, the density and centrality of clusters, a strategic diagram, and a relation network. The results show that there are some hot research topics and marginal topics in DL field in China, but the research topics are relatively decentralized compared with the international studies." ] }
1408.3297
2159201721
We present the results of a comprehensive multi-pass analysis of visualization paper keywords supplied by authors for their papers published in the IEEE Visualization conference series (now called IEEE VIS) between 1990–2015. From this analysis we derived a set of visualization topics that we discuss in the context of the current taxonomy that is used to categorize papers and assign reviewers in the IEEE VIS reviewing process. We point out missing and overemphasized topics in the current taxonomy and start a discussion on the importance of establishing common visualization terminology. Our analysis of research topics in visualization can, thus, serve as a starting point to (a) help create a common vocabulary to improve communication among different visualization sub-groups, (b) facilitate the process of understanding differences and commonalities of the various research sub-fields in visualization, (c) provide an understanding of emerging new research trends, (d) facilitate the crucial step of finding the right reviewers for research submissions, and (e) it can eventually lead to a comprehensive taxonomy of visualization research. One additional tangible outcome of our work is an online query tool ( http://keyvis.org ) that allows visualization researchers to easily browse the 3952 keywords used for IEEE VIS papers since 1990 to find related work or make informed keyword choices.
In the visualization and data analysis literature, the closest pieces of work to ours are Chuang et al.'s @cite_1 machine learning tool for topic model diagnostics, Görg et al.'s @cite_19 visual text analysis using Jigsaw, and the CiteVis tool @cite_16 . These lines of work are not per se co-word analyses. However, their data sources also include visualization research papers. In contrast, we primarily focus on the results of our analysis of themes and trends in the visualization literature rather than on the description of any specific tool or algorithm.
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_1" ], "mid": [ "2084939613", "1488271674", "107389347" ], "abstract": [ "Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: An academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.", "Biblio sciento infor-metrics : terminological issues and early historical developments -- The empirical foundations of bibliometrics : the Science citation index -- The philosophical foundations of bibliometrics : Bernal, Merton, Price, Garfield, and Small -- The mathematical foundations of bibliometrics -- Maps and paradigms : bibliographic citations at the service of the history and sociology of science -- Impact factor and the evaluation of scientists : bibliographic citations at the service of science policy and management -- On the shoulders of dwarfs : citation as rhetorical device and the criticisms to the normative model -- Measuring scientific communication in the twentieth century : from bibliometrics to cybermetrics.", "The use of topic models to analyze domain-specific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expert-provided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality." ] }
1408.3382
1792679987
Understanding why students stopout will help in understanding how students learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we build accurate predictive models of MOOC student stopout. We document a scalable, stopout prediction methodology, end to end, from raw source data to model analysis. We attempted to predict stopout for the Fall 2012 offering of 6.002x. This involved the meticulous and crowd-sourced engineering of over 25 predictive features extracted for thousands of students, the creation of temporal and non-temporal data representations for use in predictive modeling, the derivation of over 10 thousand models with a variety of state-of-the-art machine learning techniques and the analysis of feature importance by examining over 70000 models. We found that stop out prediction is a tractable problem. Our models achieved an AUC (receiver operating characteristic area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one week in advance. Even with more difficult prediction problems, such as predicting stop out at the end of the course with only one weeks' data, the models attained AUCs of 0.7.
Within the predictive models category, the literature offers an abundance of modeling problems set up to use a set of variables recorded over a single historical interval, e.g., the first 3 weeks of the course, and to predict an event at a single time point. For example, data or surveys collected from the first 4 modules of a course are used to forecast stopout after the midterm. In some cases, however, when predictive models are built for a number of time points in the course, as in @cite_44 , the model is not built to predict ahead of time.
{ "cite_N": [ "@cite_44" ], "mid": [ "2106717332" ], "abstract": [ "In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to accurately classify some e-learning students, whereas another may succeed, three decision schemes, which combine in different ways the results of the three machine learning techniques, were also tested. The method was examined in terms of overall accuracy, sensitivity and precision and its results were found to be significantly better than those reported in relevant literature." ] }
1408.3382
1792679987
Understanding why students stopout will help in understanding how students learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we build accurate predictive models of MOOC student stopout. We document a scalable, stopout prediction methodology, end to end, from raw source data to model analysis. We attempted to predict stopout for the Fall 2012 offering of 6.002x. This involved the meticulous and crowd-sourced engineering of over 25 predictive features extracted for thousands of students, the creation of temporal and non-temporal data representations for use in predictive modeling, the derivation of over 10 thousand models with a variety of state-of-the-art machine learning techniques and the analysis of feature importance by examining over 70000 models. We found that stop out prediction is a tractable problem. Our models achieved an AUC (receiver operating characteristic area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one week in advance. Even with more difficult prediction problems, such as predicting stop out at the end of the course with only one weeks' data, the models attained AUCs of 0.7.
In contrast, we identify 91 different predictive modeling problems within a single MOOC. We take pains not to include any variable that would arise at or after the time point of our predictions, i.e., beyond the lag interval. We do this so we can understand the impact of different timespans of historical information on predicting at different time intervals forward. In other words, our study is, to the best of our knowledge, the first to systematically define multiple prediction problems so that predictions can be made during every week of the course, each week to the end of the course. @cite_44 provide an excellent summary of studies that fall into this category.
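To make this setup concrete, here is a hedged sketch that enumerates one prediction problem per (lag, lead) pair from per-student weekly features; with 14 weekly snapshots the enumeration yields exactly 91 problems, matching the count above. The synthetic data, feature layout, and classifier are assumptions for illustration only and do not reproduce the paper's actual features or models.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_students, n_weeks, n_feats = 500, 14, 5                  # 14 weekly snapshots -> 91 problems
    features = rng.random((n_students, n_weeks, n_feats))      # stand-in weekly features
    stopout_week = rng.integers(2, n_weeks + 1, n_students)    # week each student stops out

    problems = []
    for lag in range(1, n_weeks):                  # weeks of history available
        for lead in range(1, n_weeks - lag + 1):   # how many weeks ahead we predict
            target_week = lag + lead
            in_scope = stopout_week > lag          # students still active after the lag window
            X = features[in_scope, :lag, :].reshape(int(in_scope.sum()), -1)
            y = (stopout_week[in_scope] <= target_week).astype(int)
            problems.append((lag, lead, X, y))

    print(len(problems))                           # 91 for 14 weekly snapshots

    lag, lead, X, y = problems[0]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(f"lag={lag} lead={lead} train AUC={roc_auc_score(y, model.predict_proba(X)[:, 1]):.2f}")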
{ "cite_N": [ "@cite_44" ], "mid": [ "2106717332" ], "abstract": [ "In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to accurately classify some e-learning students, whereas another may succeed, three decision schemes, which combine in different ways the results of the three machine learning techniques, were also tested. The method was examined in terms of overall accuracy, sensitivity and precision and its results were found to be significantly better than those reported in relevant literature." ] }
1408.3382
1792679987
Understanding why students stopout will help in understanding how students learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we build accurate predictive models of MOOC student stopout. We document a scalable, stopout prediction methodology, end to end, from raw source data to model analysis. We attempted to predict stopout for the Fall 2012 offering of 6.002x. This involved the meticulous and crowd-sourced engineering of over 25 predictive features extracted for thousands of students, the creation of temporal and non-temporal data representations for use in predictive modeling, the derivation of over 10 thousand models with a variety of state-of-the-art machine learning techniques and the analysis of feature importance by examining over 70000 models. We found that stop out prediction is a tractable problem. Our models achieved an AUC (receiver operating characteristic area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one week in advance. Even with more difficult prediction problems, such as predicting stop out at the end of the course with only one weeks' data, the models attained AUCs of 0.7.
Within the use of behavioral data, the most common behavioral variables used are performance-related, either prior to or during the course. For example, @cite_44 use prior academic performance (education level); others even use high school GPA, college GPA, or freshman-year GPA @cite_24 @cite_11 . Some studies compose variables based on project and test grades during the course @cite_44 . In almost all cases, prior academic performance has been found to be the strongest predictor of student persistence @cite_11 @cite_50 @cite_34 .
{ "cite_N": [ "@cite_44", "@cite_24", "@cite_50", "@cite_34", "@cite_11" ], "mid": [ "2106717332", "2044415970", "2140973040", "2036049175", "2078469317" ], "abstract": [ "In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to accurately classify some e-learning students, whereas another may succeed, three decision schemes, which combine in different ways the results of the three machine learning techniques, were also tested. The method was examined in terms of overall accuracy, sensitivity and precision and its results were found to be significantly better than those reported in relevant literature.", "A classification rule was developed to predict undergraduate students& withdrawal from or completion of fully online general education courses. A multivariate technique, predictive discriminant analysis (PDA), was used. High school grade point average and SAT mathematics score were shown to be related to retention in the online university courses. Locus of control and financial aid were able to identify dropout and completion with 74.5 accuracy.", "Abstract This paper focuses on university-level education offered by methods of distance learning in the field of computers and aims at the investigation of the main causes for student dropouts. The presented study is based on the students of the Course of “Informatics”, Faculty of Science and Technology of the Hellenic Open University and investigates the particularities of education provided through the use of computers and technology in general. This paper presents information about the students' profile, the use of computer technology, the percentage of dropouts, as well as a classification of the reasons for dropouts based on interviews with the students. The study shows that dropouts are correlated with the use of technological means and, based on this fact, the Hellenic Open University implemented interventions in the use of such means. It also proves that a correlation exists between dropouts and students' age, but not gender, although female students are more reluctant to start following a course. However, it is also shown that female students' commitment to a course is stronger and thus, they do not drop out as easily as male students do. Furthermore, the results of this study strongly correlate dropouts to the existence of previous education in the field of Informatics or to working with computers, but not to the degree of specialisation in computers. Finally, the paper presents the reasons provided by the students for drooping out, with the main reasons being the inability to estimate the time required for university-level studies and the perceived difficulty of the computers course.", "We hypothesized that college major persistence would be predicted by first-year academic performance and an interest-major composite score that is derived from a student’s entering major and two work task scores. Using a large data set representing 25 four-year institutions and nearly 50,000 students, we randomly split the sample into an estimation sample and a validation sample. Using the estimation sample, we found major-specific coefficients corresponding to the two work task scores that optimized the prediction of major persistence. 
Then, we applied the estimated coefficients to the validation sample to form an interest-major composite score representing the likelihood of persisting in entering major. Using the validation sample, we then tested a theoretical model for major persistence that incorporated academic preparation, the interest-major composite score, and first-year academic performance. The results suggest that (1) interest-major fit and first-year academic performance work to independently predict whether a student will stay in their entering major and (2) the relative importance of two work task scores in predicting major persistence depends on the entering major. The results support Holland’s theory of person-environment fit and suggest that academic performance and interest-major fit are key constructs for understanding major persistence behavior.", "Many students who start college intending to major in science or engineering do not graduate, or decide to switch to a non-science major. We used the recently developed statistical method of random forests to obtain a new perspective of variables that are associated with persistence to a science or engineering degree. We describe classification trees and random forests and contrast the results from these methods with results from the more commonly used method of logistic regression. Among the variables available in Arizona State University data, high school and freshman year GPAs have highest importance for predicting persistence; other variables such as number of science and engineering courses taken freshman year are important for subgroups of the student population. The method used in this study could be employed in other settings to identify faculty practices, teaching methods, and other factors that are associated with high persistence to a degree." ] }
1408.3382
1792679987
Understanding why students stopout will help in understanding how students learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we build accurate predictive models of MOOC student stopout. We document a scalable, stopout prediction methodology, end to end, from raw source data to model analysis. We attempted to predict stopout for the Fall 2012 offering of 6.002x. This involved the meticulous and crowd-sourced engineering of over 25 predictive features extracted for thousands of students, the creation of temporal and non-temporal data representations for use in predictive modeling, the derivation of over 10 thousand models with a variety of state-of-the-art machine learning techniques and the analysis of feature importance by examining over 70000 models. We found that stop out prediction is a tractable problem. Our models achieved an AUC (receiver operating characteristic area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one week in advance. Even with more difficult prediction problems, such as predicting stop out at the end of the course with only one weeks' data, the models attained AUCs of 0.7.
Most studies that we surveyed capture variables that are summaries over time. Some of these variables are by definition not time dependent, such as those in @cite_46 , and some are aggregated over a period of the course (or the entire course), as in @cite_2 . In our work we operationalize variables at multiple time points in the course. In this respect, perhaps the closest approach to ours is @cite_44 , where the authors form time-varying variables at different points of the course, namely its different sections.
{ "cite_N": [ "@cite_44", "@cite_46", "@cite_2" ], "mid": [ "2106717332", "135509720", "2143029658" ], "abstract": [ "In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to accurately classify some e-learning students, whereas another may succeed, three decision schemes, which combine in different ways the results of the three machine learning techniques, were also tested. The method was examined in terms of overall accuracy, sensitivity and precision and its results were found to be significantly better than those reported in relevant literature.", "Pearson product–moment correlations found significant relationships between students' grades in the online class and their GPA, attendance at a class orientation session, the number of previous course withdrawals, ASSET reading scores, the number of previous online courses, age, and ACT English scores. Regression analysis found that two variables serve as the best predictors: attendance at an orientation session, and the student's grade point average.", "The academic e-learning practice has to deal with various participation patterns and types of online learners with different support needs. The online instructors are challenged to recognize these and react accordingly. Among the participation patterns, special attention is requested by dropouts, which can perturbate online collaboration. Therefore we are in search of a method of early identification of participation patterns and prediction of dropouts. To do this, we use a quantitative view of participation that takes into account only observable variables. On this background we identify in a field study the participation indicators that are relevant for the course completion, i.e. produce significant differences between the completion and dropout sub-groups. Further we identify through cluster analysis four participation patterns with different support needs. One of them is the dropout cluster that could be predicted with an accuracy of nearly 80 . As a practical consequence, this study recommends a simple, easy-to-implement prediction method for dropouts, which can improve online teaching. As a theoretical consequence, we underline the role of the course didactics for the definition of participation, and call for refining previous attrition models." ] }
1408.3382
1792679987
Understanding why students stopout will help in understanding how students learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we build accurate predictive models of MOOC student stopout. We document a scalable, stopout prediction methodology, end to end, from raw source data to model analysis. We attempted to predict stopout for the Fall 2012 offering of 6.002x. This involved the meticulous and crowd-sourced engineering of over 25 predictive features extracted for thousands of students, the creation of temporal and non-temporal data representations for use in predictive modeling, the derivation of over 10 thousand models with a variety of state-of-the-art machine learning techniques and the analysis of feature importance by examining over 70000 models. We found that stop out prediction is a tractable problem. Our models achieved an AUC (receiver operating characteristic area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one week in advance. Even with more difficult prediction problems, such as predicting stop out at the end of the course with only one weeks' data, the models attained AUCs of 0.7.
Research studies performed on the same data as ours show a steady progression in how variables are assembled and in the progress made on this data. @cite_10 identify the sources of data in MOOCs and discuss the influences of different factors on persistence and achievement. @cite_16 identifies the demographic and background information about students that is related to performance. @cite_12 assembles 20 different variables that capture aggregate student behavior for the entire course. @cite_49 posits variables on a per-week basis and correlates them with achievement, thus forming a basis for longitudinal study. Our work takes a leap forward and forms complex longitudinal variables on a per-student, per-week basis. Later, we attribute the success of our predictive models to the formation of these variables.
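As an operational illustration of longitudinal, per-student and per-week variables, the following sketch buckets raw click-stream events into course weeks and aggregates them per student. The column names, event types, and the two example variables are illustrative assumptions rather than the paper's actual feature definitions.

    import pandas as pd

    # Toy click-stream events; real MOOC data would come from the platform's tracking logs.
    events = pd.DataFrame({
        "student_id": [1, 1, 1, 2, 2],
        "timestamp": pd.to_datetime([
            "2012-09-05 10:00", "2012-09-05 10:20", "2012-09-13 09:00",
            "2012-09-06 14:00", "2012-09-20 16:00"]),
        "event_type": ["play_video", "problem_check", "play_video",
                       "problem_check", "forum_post"],
    })

    course_start = pd.Timestamp("2012-09-05")
    events["week"] = (events["timestamp"] - course_start).dt.days // 7 + 1

    # One row per (student, week): total activity and one event-specific count.
    weekly = (
        events.groupby(["student_id", "week"])
        .agg(total_events=("event_type", "size"),
             problem_checks=("event_type", lambda s: int((s == "problem_check").sum())))
        .reset_index()
    )
    print(weekly)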
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_12", "@cite_49" ], "mid": [ "2395035212", "2471699996", "2115903838", "2011447368" ], "abstract": [ "MOOCs gather a rich array of click-stream information from students who interact with the platform. However, without student background information, inferences do not take advantage of a deeper understanding of students’ prior experiences, motivation, and home environment. In this poster, we investigate the predictive power of student background factors as well as student experiences with learning materials provided in the first MITx course, “Circuits and Electronics.” We focus on a group of survey completers who were given background questions, and we use multiple regression methods to investigate the relationship between achievement, online resource use, and student background. Online course providers may be able to better tailor online experiences to students when they know how background characteristics mediate the online experience.", "Abstract “Circuits and Electronics” (6.002x), which began in March 2012, was the first MOOC developed by edX, the consortium led by MIT and Harvard. Over 155,000 students initially registered for 6.002x, which was composed of video lectures, interactive problems, online laboratories, and a discussion forum. As the course ended in June 2012, researchers began to analyze the rich sources of data it generated. This article describes both the first stage of this research, which examined the students’ use of resources by time spent on each, and a second stage that is producing an in-depth picture of who the 6.002x students were, how their own background and capabilities related to their achievement and persistence, and how their interactions with 6.002x’s curricular and pedagogical components contributed to their level of success in the course. Studying Learning in the Worldwide ClassroomResearch into edX’s First MOOC F rom the launch of edX, the joint venture between MIT and Harvard to create and disseminate massive online open courses (MOOCs), the leaders of both institutions have emphasized that research into learning will be one of the initiative’s core missions. As numerous articles in both the academic and popular press have pointed out, the ability of MOOCs to generate a tremendous amount of data opens up considerable opportunities for educational research. edX and Coursera, which together claim almost four and a half million enrollees, have developed platforms that track students’ every click as they use instructional resources, complete assessments, and engage in social interactions. These data have the potential to help researchers identify, at a finer resolution than ever before, what contributes to students’ learning and what hampers their success. The challenge for the research and assessment communities is to determine which questions should be asked and in what priority. How can we set ourselves on a path that will produce useful short-term results while providing a foundation upon which to build? What is economically feasible? What is politically possible? How can research into MOOCs contribute to an understanding of on-campus learning? 
What do stakeholders—faculty, developers, government agencies, foundations, and, most importantly, students—need in order to realize the potential of digital learning, generally, and massive open online courses, specifically?", "In massive open online courses (MOOCs), low barriers to registration attract large numbers of students with diverse interests and backgrounds, and student use of course content is asynchronous and unconstrained. The authors argue that MOOC data are not only plentiful and different in kind but require reconceptualization—new educational variables or different interpretations of existing variables. The authors illustrate this by demonstrating the inadequacy or insufficiency of conventional interpretations of four variables for quantitative analysis and reporting: enrollment, participation, curriculum, and achievement. Drawing from 230 million clicks from 154,763 registrants for a prototypical MOOC offering in 2012, the authors present new approaches to describing and understanding user behavior in this emerging educational context.", "Massive open online courses (MOOCs) provide learning materials and automated assessments for large numbers of virtual users. Because every interaction is recorded, we can longitudinally model performance over the course of the class. We create a panel model of achievement in an early MOOC to estimate within- and between-user differences. In this study, we hope to contribute to HCI literature by, first, applying quasi-experimental methods to identify behaviors that may support student learning in a virtual environment, and, second, by using a panel model that takes into account the longitudinal, dynamic nature of a multiple-week class." ] }
1408.3382
1792679987
Understanding why students stopout will help in understanding how students learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we build accurate predictive models of MOOC student stopout. We document a scalable, stopout prediction methodology, end to end, from raw source data to model analysis. We attempted to predict stopout for the Fall 2012 offering of 6.002x. This involved the meticulous and crowd-sourced engineering of over 25 predictive features extracted for thousands of students, the creation of temporal and non-temporal data representations for use in predictive modeling, the derivation of over 10 thousand models with a variety of state-of-the-art machine learning techniques and the analysis of feature importance by examining over 70000 models. We found that stop out prediction is a tractable problem. Our models achieved an AUC (receiver operating characteristic area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one week in advance. Even with more difficult prediction problems, such as predicting stop out at the end of the course with only one weeks' data, the models attained AUCs of 0.7.
There are three noteworthy accomplishments of our study compared to the studies above. First, throughout our study we emphasize variable (feature) engineering from the clickstream data and thus generate complex features that explain student behavior longitudinally @cite_56 . We attribute the success of our models to these variables (more than to the models themselves), as we achieve AUC in the range of 0.88-0.90 when predicting one week ahead for the cohort.
{ "cite_N": [ "@cite_56" ], "mid": [ "1646083338" ], "abstract": [ "We examine the process of engineering features for developing models that improve our understanding of learners' online behavior in MOOCs. Because feature engineering relies so heavily on human insight, we argue that extra effort should be made to engage the crowd for feature proposals and even their operationalization. We show two approaches where we have started to engage the crowd. We also show how features can be evaluated for their relevance in predictive accuracy. When we examined crowd-sourced features in the context of predicting stopout, not only were they nuanced, but they also considered more than one interaction mode between the learner and platform and how the learner was relatively performing. We were able to identify different influential features for stop out prediction that depended on whether a learner was in 1 of 4 cohorts defined by their level of engagement with the course discussion forum or wiki. This report is part of a compendium which considers different aspects of MOOC data science and stop out prediction." ] }
1408.3060
2950052605
Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks proposed by Rahimi and Recht (2009) and thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. Our method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and or require real-time prediction.
Numerous methods have been proposed to mitigate this issue. To compare the computational cost of these methods we make the following assumptions: We have @math observations and access to an @math algorithm, with @math , for solving the optimization problem at hand. In other words, the algorithm is linear or worse. This is a reasonable assumption: almost all data analysis algorithms need to inspect the data at least once to draw inference. The data has @math dimensions. For simplicity we assume that it is dense with density rate @math , i.e., on average @math coordinates are nonzero. The number of nontrivial basis functions is @math . This is well motivated by @cite_14 and it also follows from the fact that, e.g., in regularized risk minimization the subgradient of the loss function determines the value of the associated dual variable. We denote the number of (nonlinear) basis functions by @math .
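Given this notation, a back-of-the-envelope comparison (ignoring constant factors) of the per-example cost of computing the basis functions can be made with the O(nd) versus O(n log d) complexities quoted in the abstract; the concrete sizes below are assumptions chosen only to illustrate the gap.

    import math

    # Illustrative sizes; not taken from the paper's experiments.
    d = 1024      # input dimensionality
    n = 16384     # number of nonlinear basis functions

    dense_ops = n * d                   # dense Gaussian projection (Random Kitchen Sinks)
    fastfood_ops = n * math.log2(d)     # Hadamard/diagonal construction (Fastfood)
    print(f"dense: {dense_ops:.2e} ops/example, "
          f"fastfood: {fastfood_ops:.2e} ops/example, "
          f"ratio ~ {dense_ops / fastfood_ops:.0f}x")

For these sizes the ratio is roughly 100x, of the same order as the speedups reported in the abstract.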
{ "cite_N": [ "@cite_14" ], "mid": [ "2156512439" ], "abstract": [ "In the first part of this paper we show a similarity between the principle of Structural Risk Minimization Principle (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in (Chen, Donoho and Saunders, 1995) and Olshausen and Field (1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation, which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT &T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem." ] }
1408.3060
2950052605
Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks proposed by Rahimi and Recht (2009) and thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. Our method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and or require real-time prediction.
@cite_20 focused on compressing function expansions, by means of reduced-set expansions, after the problem has been solved. That is, one first solves the full optimization problem at @math cost and subsequently minimizes the discrepancy between the full expansion and an expansion on a subset of basis functions. The exponent of @math arises from the fact that we need to compute @math kernels @math times. Evaluation of the reduced function set costs at least @math operations per instance and @math storage, since each kernel function @math requires storage of @math .
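To illustrate the two-step compression just described, a hedged sketch: a full kernel expansion is fit first (kernel ridge regression here, as a stand-in for the original optimization problem) and is then approximated by an expansion over a small, randomly chosen subset of points by minimizing the RKHS discrepancy between the two expansions in closed form. The cited work is considerably more careful about selecting and optimizing the reduced set; this only shows the structure of the procedure.

    import numpy as np

    def rbf(A, B, gamma=0.5):
        """Gaussian RBF kernel matrix between the rows of A and B."""
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    # Step 1: solve the full problem, giving one coefficient per training point.
    K = rbf(X, X)
    alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)

    # Step 2: re-express the function on a small subset of basis functions by
    # minimizing the RKHS distance, which gives beta = K_zz^{-1} K_zx alpha.
    idx = rng.choice(len(X), size=20, replace=False)
    Z = X[idx]
    beta = np.linalg.solve(rbf(Z, Z) + 1e-8 * np.eye(len(Z)), rbf(Z, X) @ alpha)

    x_test = rng.normal(size=(5, 3))
    print(np.c_[rbf(x_test, X) @ alpha, rbf(x_test, Z) @ beta])  # full vs reduced predictions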
{ "cite_N": [ "@cite_20" ], "mid": [ "2119821739" ], "abstract": [ "The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition." ] }
1408.3060
2950052605
Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks proposed by Rahimi and Recht (2009) and thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. Our method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and or require real-time prediction.
A promising alternative is to design new kernels that are immediately compatible with scalable data analysis. A recent instance of such work is the algorithm of @cite_8 , who map observations @math into set membership indicators @math , where @math denotes the random partitioning chosen at iterate @math and @math indicates the particular set.
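The set-membership idea can be sketched as follows: each observation is assigned a cell in each of P random partitions, the feature vector collects the cell-membership indicators, and the induced kernel is the fraction of partitions in which two points share a cell. The partitioning scheme used below (sign patterns of random projections) is an illustrative assumption, not one of the constructions proposed in the cited work.

    import numpy as np

    rng = np.random.default_rng(0)
    d, P, depth = 5, 50, 3           # input dim, number of partitions, 2**depth cells each

    # Each partition is induced by the sign pattern of a few random projections.
    projections = rng.normal(size=(P, depth, d))

    def cell_index(x):
        """For each of the P partitions, the index of the cell containing x."""
        signs = (np.einsum("pkd,d->pk", projections, x) > 0).astype(int)
        return signs @ (2 ** np.arange(depth))        # shape (P,)

    def kernel(x, z):
        """Fraction of random partitions in which x and z share a cell."""
        return float(np.mean(cell_index(x) == cell_index(z)))

    x = rng.normal(size=d)
    z = x + 0.1 * rng.normal(size=d)
    print(kernel(x, x), kernel(x, z), kernel(x, rng.normal(size=d)))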
{ "cite_N": [ "@cite_8" ], "mid": [ "1516082955" ], "abstract": [ "We present Random Partition Kernels, a new class of kernels derived by demonstrating a natural connection between random partitions of objects and kernels between those objects. We show how the construction can be used to create kernels from methods that would not normally be viewed as random partitions, such as Random Forest. To demonstrate the potential of this method, we propose two new kernels, the Random Forest Kernel and the Fast Cluster Kernel, and show that these kernels consistently outperform standard kernels on problems involving real-world datasets. Finally, we show how the form of these kernels lend themselves to a natural approximation that is appropriate for certain big data problems, allowing @math inference in methods such as Gaussian Processes, Support Vector Machines and Kernel PCA." ] }
1408.3060
2950052605
Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that storing and computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks proposed by Rahimi and Recht (2009) and thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. Our method applies to any translation invariant and any dot-product kernel, such as the popular RBF kernels and polynomial kernels. We prove that the approximation is unbiased and has low variance. Experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and or require real-time prediction.
A promising alternative was proposed by @cite_25 under the moniker of Random Kitchen Sinks. In contrast to previous work, the authors attempt to obtain a function space expansion directly. This works for translation invariant kernel functions by performing the following operations: Generate a (Gaussian) random matrix @math of size @math . For each observation @math compute @math and apply a nonlinearity @math to each coordinate separately, i.e., @math . The approach requires @math storage both at training and test time. Training costs @math operations and prediction on a new observation costs @math . This is potentially much cheaper than reduced-set kernel expansions. Their experiments showed that performance was very competitive with conventional RBF kernel approaches while providing dramatically simplified code.
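A minimal sketch of the construction just described, assuming a cosine nonlinearity with random phases (one standard choice); under that assumption the inner product of the random features approximates a Gaussian RBF kernel. The dimensions and bandwidth are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_feats, sigma = 10, 4096, 1.0

    Z = rng.normal(scale=1.0 / sigma, size=(n_feats, d))   # Gaussian random matrix
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_feats)        # random phases

    def phi(x):
        """Project with Z and apply a coordinate-wise cosine nonlinearity."""
        return np.sqrt(2.0 / n_feats) * np.cos(Z @ x + b)

    x = rng.normal(size=d)
    y = x + 0.3 * rng.normal(size=d)
    exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))   # Gaussian RBF kernel value
    print(f"exact kernel: {exact:.4f}   random-feature estimate: {phi(x) @ phi(y):.4f}")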
{ "cite_N": [ "@cite_25" ], "mid": [ "2123395972" ], "abstract": [ "Randomized neural networks are immortalized in this AI Koan: In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. \"What are you doing?\" asked Minsky. \"I am training a randomly wired neural net to play tic-tac-toe,\" Sussman replied. \"Why is the net wired randomly?\" asked Minsky. Sussman replied, \"I do not want it to have any preconceptions of how to play.\" Minsky then shut his eyes. \"Why do you close your eyes?\" Sussman asked his teacher. \"So that the room will be empty,\" replied Minsky. At that moment, Sussman was enlightened. We analyze shallow random networks with the help of concentration of measure inequalities. Specifically, we consider architectures that compute a weighted sum of their inputs after passing them through a bank of arbitrary randomized nonlinearities. We identify conditions under which these networks exhibit good classification performance, and bound their test error in terms of the size of the dataset and the number of random nonlinearities." ] }
1408.2770
1846944496
We present a game-theoretic model for the spread of deviant behavior in online social networks. We utilize a two-strategy framework wherein each player's behavior is classified as normal or deviant and evolves according to the cooperate-defect payoff scheme of the classic prisoner's dilemma game. We demonstrate convergence of individual behavior over time to a final strategy vector and indicate counterexamples to this convergence outside the context of prisoner's dilemma. Theoretical results are validated on a real-world dataset collected from a popular online forum.
Several recent works focus on detecting social spammers @cite_12 @cite_20 @cite_19 @cite_1 @cite_29 @cite_4 . Social spammers, according to this body of work, are users, controlled either by humans or by bots, who use social networking sites and in particular their social connections to promote products, advertise events, or simply post useless and/or inappropriate comments. @cite_20 studied social spammers in online social networks. To do so, they deployed social honeypots for harvesting deceptive spam profiles from social networking communities, and created spam classifiers using machine learning methods (e.g., SVM) based on a variety of features. Similarly, Kantchelian @cite_19 developed an approach for detecting comment spam by leveraging the information level of a comment; he showed that spammers' comments have low information levels. @cite_1 also proposed a comprehensive framework for social spam detection, based on social network content, attaining high detection accuracy. Our focus is not solely on social spammers but rather on activities related to vandalism and misbehavior, which may or may not be generated by social spammers.
{ "cite_N": [ "@cite_4", "@cite_29", "@cite_1", "@cite_19", "@cite_12", "@cite_20" ], "mid": [ "", "2001706110", "146417747", "2123449924", "2098395374", "1996802155" ], "abstract": [ "", "Online social networks, such as Twitter, have soared in popularity and in turn have become attractive targets of spam. In fact, spammers have evolved their strategies to stay ahead of Twitter's anti-spam measures in this short period of time. In this paper, we investigate the strategies Twitter spammers employ to reach relevant target audiences. Due to their targeted approaches to send spam, we see evidence of a large number of the spam accounts forming relationships with other Twitter users, thereby becoming deeply embedded in the social network. We analyze nearly 20 million tweets from about 7 million Twitter accounts over a period of five days. We identify a set of 14,230 spam accounts that manage to live longer than the other 73 of other spam accounts in our data set. We characterize their behavior, types of tweets they use, and how they target their audience. We find that though spam campaigns changed little from a recent work by , spammer strategies evolved much in the same short time span, causing us to sometimes find contradictory spammer behavior from what was noted in 's work. Specifically, we identify four major strategies used by 2 3rd of the spammers in our data. The most popular of these was one where spammers targeted their own followers. The availability of various kinds of services that help garner followers only increases the popularity of this strategy. The evolution in spammer strategies we observed in our work suggests that studies like ours should be undertaken frequently to keep up with spammer evolution.", "The availability of microblogging, like Twitter and Sina Weibo, makes it a popular platform for spammers to unfairly overpower normal users with unwanted content via social networks, known as social spamming. The rise of social spamming can significantly hinder the use of microblogging systems for effective information dissemination and sharing. Distinct features of microblogging systems present new challenges for social spammer detection. First, unlike traditional social networks, microblogging allows to establish some connections between two parties without mutual consent, which makes it easier for spammers to imitate normal users by quickly accumulating a large number of \"human\" friends. Second, microblogging messages are short, noisy, and unstructured. Traditional social spammer detection methods are not directly applicable to microblogging. In this paper, we investigate how to collectively use network and content information to perform effective social spammer detection in microblogging. In particular, we present an optimization formulation that models the social network and content information in a unified framework. Experiments on a real-world Twitter dataset demonstrate that our proposed method can effectively utilize both kinds of information for social spammer detection.", "In this work, we design a method for blog comment spam detection using the assumption that spam is any kind of uninformative content. 
To measure the \"informativeness\" of a set of blog comments, we construct a language and tokenization independent metric which we call content complexity, providing a normalized answer to the informal question \"how much information does this text contain?\" We leverage this metric to create a small set of features well-adjusted to comment spam detection by computing the content complexity over groupings of messages sharing the same author, the same sender IP, the same included links, etc. We evaluate our method against an exact set of tens of millions of comments collected over a four months period and containing a variety of websites, including blogs and news sites. The data was provided to us with an initial spam labeling from an industry competitive source. Nevertheless the initial spam labeling had unknown performance characteristics. To train a logistic regression on this dataset using our features, we derive a simple mislabeling tolerant logistic regression algorithm based on expectation-maximization, which we show generally outperforms the plain version in precision-recall space. By using a parsimonious hand-labeling strategy, we show that our method can operate at an arbitrary high precision level, and that it significantly dominates, both in terms of precision and recall, the original labeling, despite being trained on it alone. The content complexity metric, the use of a noise-tolerant logistic regression and the evaluation methodology are thus the three central contributions with this work.", "In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the wide-spread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77 of spam accounts identified by Twitter are suspended within on day of their first tweet. Because of these pressures, less than 9 of accounts form social relationships with regular Twitter users. Instead, 17 of accounts rely on hijacking trends, while 52 of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable store fronts, blurring the line regarding what constitutes spam on social networks.", "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. 
Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives." ] }
1408.2770
1846944496
We present a game-theoretic model for the spread of deviant behavior in online social networks. We utilize a two-strategy framework wherein each player's behavior is classified as normal or deviant and evolves according to the cooperate-defect payoff scheme of the classic prisoner's dilemma game. We demonstrate convergence of individual behavior over time to a final strategy vector and indicate counterexamples to this convergence outside the context of prisoner's dilemma. Theoretical results are validated on a real-world dataset collected from a popular online forum.
Finally, our work parallels the body of work on free-riding in peer-to-peer systems @cite_5 @cite_26 . Peer-to-peer systems are designed to allow users to connect with others and share resources. Similar to online communities, users are free to access and contribute as much as desired, and few controls are in place. As a result, in p2p systems peers may abuse their connections by exploiting other peers' resources, refusing to share owned resources, sharing broken or corrupted resources, etc., draining the network without contributing to it. In online communities, the health of the community depends heavily on individual peers' reactions to selfish behavior, which they may choose to emulate or disengage from. Punishment mechanisms can also be put in place, although these are often considered not to be truly effective. To tackle these issues, the most common solution is the implementation of incentive-based mechanisms. Incentives are applied in certain online forums, whereby end users are given special roles and privileges as a reward for good behavior.
{ "cite_N": [ "@cite_5", "@cite_26" ], "mid": [ "2126996160", "2133145073" ], "abstract": [ "We devise a model to study the phenomenon of free-riding and free-identities in peer-to-peer systems. At the heart of our model is a user of a certain type, an intrinsic and private parameter that reflects the user's willingness to contribute resources to the system. A user decides whether to contribute or free-ride based on how the current contribution cost in the system compares to her type. We study the impact of mechanisms that exclude low type users or, more realistically, penalize free-riders with degraded service. We also consider dynamic scenarios with arrivals and departures of users, and with whitewashers -users who leave the system and rejoin with new identities to avoid reputational penalties. We find that imposing penalty on all users that join the system is effective under many scenarios. In particular, system performance degrades significantly only when the turnover rate among users is high. Finally, we show that the optimal exclusion or penalty level differs significantly from the level that optimizes the performance of contributors only for a limited range of societ al generosity levels.", "The popularity of peer-to-peer (P2P) networks makes them an attractive target to the creators of viruses and other malicious code. Recently a number of viruses designed specifically to spread via P2P networks have emerged. Pollution has also become increasingly prevalent as copyright holders inject multiple decoy versions in order to impede item distribution. In this paper we derive deterministic epidemiological models for the propagation of a P2P virus through a P2P network and the dissemination of pollution. We report on discrete simulations that provide some verification that the models remain sufficiently accurate despite variations in individual peer conduct to provide insight into the behaviour of the system. The paper examines the steady-state behaviour and illustrates how the models may be used to estimate in a computationally efficient manner how effective object reputation schemes will be in mitigating the impact of viruses and preventing the spread of pollution." ] }
1408.2436
2952959227
Motivated by applications to graph morphing, we consider the following : We are given a labelled @math -vertex planar graph, @math , that has @math connected components, and @math isomorphic planar straight-line drawings, @math , of @math . We wish to augment @math by adding vertices and edges to make it connected in such a way that these vertices and edges can be added to @math as points and straight-line segments, respectively, to obtain @math planar straight-line drawings isomorphic to the augmentation of @math . We show that adding @math edges and vertices to @math is always sufficient and sometimes necessary to achieve this goal. The upper bound holds for all @math and @math and is achievable by an algorithm whose running time is @math for @math and whose running time is @math for general values of @math . The lower bound holds for all @math and @math .
To the best of our knowledge, there is little work on compatible connectivity-augmentation of planar graphs, though there is work on isomorphic triangulations of polygons. Refer to Figure . In this setting, the graph @math is a cycle and one has two non-crossing drawings, @math and @math , of @math . The goal is to augment @math (and the two drawings @math and @math ) so that @math becomes a near-triangulation, and @math and @math become (geometric) triangulations of the interiors of the polygons whose boundaries are @math and @math . Aronov et al. @cite_0 showed that this can always be accomplished with the addition of @math vertices and that @math vertices are sometimes necessary. Kranakis and Urrutia @cite_10 showed that this result can be made sensitive to the number of reflex vertices of @math and @math , so that the number of triangles required is @math , where @math and @math are the numbers of reflex vertices of @math and @math , respectively.
{ "cite_N": [ "@cite_0", "@cite_10" ], "mid": [ "2058517112", "2100595965" ], "abstract": [ "Abstract It is well known that, given two simple n-sided polygons, it may not be possible to triangulate the two polygons in a compatible fashion, if one's choice of triangulation vertices is restricted to polygon corners. Is it always possible to produce compatible triangulations if additional vertices inside the polygon are allowed? We give a positive answer and construct a pair of such triangulations with O(n2) new triangulation vertices. Moreover, we show that there exists a ‘universal’ way of triangulating an n-sided polygon with O(n2) extra triangulation vertices. Finally, we also show that creating compatible triangulations requires a quadratic number of extra vertices in the worst case.", "Assume that an isomorphism between two n-vertex simple polygons, P,Q (with k,l reflex vertices, respectively) is given. We present two algorithms for constructing isomorphic (i.e. adjacency preserving) triangulations of P and Q, respectively. The first algorithm computes isomorphic triangulations of P and Q by introducing at most O((k+l)2) Steiner points and has running time O(n+(k+l)2). The second algorithm computes isomorphic traingulations of P and Q by introducing at most O(kl) Steiner points and has running time O(n+kllog n). The number of Steiner points introduced by the second algorithm is also worst-case optimal. Unlike the O(n2) algorithm of Aronov, Seidel and Souvaine1 our algorithms are sensitive to the number of reflex vertices of the polygons. In particular, our algorithms have linear running time when for the first algorithm, and kl≤n log n for the second algorithm." ] }
1408.2436
2952959227
Motivated by applications to graph morphing, we consider the following : We are given a labelled @math -vertex planar graph, @math , that has @math connected components, and @math isomorphic planar straight-line drawings, @math , of @math . We wish to augment @math by adding vertices and edges to make it connected in such a way that these vertices and edges can be added to @math as points and straight-line segments, respectively, to obtain @math planar straight-line drawings isomorphic to the augmentation of @math . We show that adding @math edges and vertices to @math is always sufficient and sometimes necessary to achieve this goal. The upper bound holds for all @math and @math and is achievable by an algorithm whose running time is @math for @math and whose running time is @math for general values of @math . The lower bound holds for all @math and @math .
Finally, several papers have dealt with the problem of increasing the connectivity of a (single) geometric planar graph while adding few vertices and edges. Abellanas et al. @cite_1 considered the problem of adding edges to a planar drawing in order to make it 2-edge connected and showed that @math edges are sometimes necessary and @math edges are always sufficient. Tóth @cite_8 later obtained the tight upper bound of @math for the same problem. Rutter and Wolff @cite_17 showed that finding the minimum number of edges required to achieve 2-edge connectivity is NP-hard.
{ "cite_N": [ "@cite_1", "@cite_17", "@cite_8" ], "mid": [ "2063013286", "2011374244", "2022451101" ], "abstract": [ "Let G be a connected plane geometric graph with n vertices. In this paper, we study bounds on the number of edges required to be added to G to obtain 2-vertex or 2-edge connected plane geometric graphs. In particular, we show that for G to become 2-edge connected, 2n3 additional edges are required in some cases and that 6n7 additional edges are always sufficient. For the special case of plane geometric trees, these bounds decrease to n2 and 2n3, respectively.", "", "It is shown that every connected planar straight line graph with n>=3 vertices has an embedding preserving augmentation to a 2-edge connected planar straight line graph with at most @?(2n-2) 3@? new edges. It is also shown that every planar straight line tree with n>=3 vertices has an embedding preserving augmentation to a 2-edge connected planar topological graph with at most @?n 2@? new edges. These bounds are the best possible. However, for every n>=3, there are planar straight line trees with n vertices that do not have an embedding preserving augmentation to a 2-edge connected planar straight line graph with fewer than 1733n-O(1) new edges." ] }
1408.2764
2951274308
We study consistency properties of surrogate loss functions for general multiclass learning problems, defined by a general multiclass loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be calibrated with respect to a loss matrix in this setting. We then introduce the notion of convex calibration dimension of a multiclass loss matrix, which measures the 'smallest size' of a prediction space in which it is possible to design a convex surrogate that is calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, we apply our framework to study various subset ranking losses, and use the convex calibration dimension as a tool to show both the existence and non-existence of various types of convex calibrated surrogates for these losses. Our results strengthen recent results of (2010) and (2012) on the non-existence of certain types of convex calibrated surrogates in subset ranking. We anticipate the convex calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
Initial work on consistency of surrogate risk minimization algorithms focused largely on binary classification. For example, @cite_0 showed the consistency of support vector machines with universal kernels for the problem of binary classification; @cite_16 and @cite_9 showed similar results for boosting methods. @cite_7 and @cite_1 studied the calibration of margin-based surrogates for binary classification. In particular, in their seminal work, @cite_7 established that the property of 'classification calibration' of a surrogate loss is equivalent to its minimization yielding 0-1 consistency, and gave a simple necessary and sufficient condition for convex margin-based surrogates to be calibrated w.r.t. the binary 0-1 loss. More recently, @cite_14 analyzed the calibration of a general family of surrogates termed proper composite surrogates for binary classification. Variants of standard 0-1 binary classification have also been studied; for example, @cite_6 studied consistency for the problem of binary classification with a reject option, and @cite_25 studied calibrated surrogates for cost-sensitive binary classification.
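For reference, the binary-classification condition alluded to above can be stated compactly (this is the standard margin-based calibration result; the symbol phi for the margin loss is our own notation):

\[
\phi \ \text{convex is classification-calibrated for the 0-1 loss} \iff \phi \ \text{is differentiable at } 0 \ \text{and} \ \phi'(0) < 0 .
\]

For example, the hinge loss max(0, 1 - t) is differentiable at 0 with derivative -1, so it satisfies this criterion.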
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_16", "@cite_25" ], "mid": [ "2101557761", "", "2068221105", "1985554184", "2176867214", "2109445534", "2051922388", "1969623397" ], "abstract": [ "We study losses for binary classification and class probability estimation and extend the understanding of them from margin losses to general composite losses which are the composition of a proper loss with a link function. We characterise when margin losses can be proper composite losses, explicitly show how to determine a symmetric loss in full from half of one of its partial losses, introduce an intrinsic parametrisation of composite binary losses and give a complete characterisation of the relationship between proper losses and \"classification calibrated\" losses. We also consider the question of the \"best\" surrogate binary loss. We introduce a precise notion of \"best\" and show there exist situations where two convex surrogate losses are incommensurable. We provide a complete explicit characterisation of the convexity of composite binary losses in terms of the link function and the weight function associated with the proper loss which make up the composite loss. This characterisation suggests new ways of \"surrogate tuning\" as well as providing an explicit characterisation of when Bregman divergences on the unit interval are convex in their second argument. Finally, in an appendix we present some new algorithm-independent results on the relationship between properness, convexity and robustness to misclassification noise for binary losses and show that all convex proper losses are non-robust to misclassification noise.", "", "The probability of error of classification methods based on convex combinations of simple base classifiers by boosting algorithms is investigated. The main result of the paper is that certain regularized boosting algorithms provide Bayes-risk consistent classifiers under the sole assumption that the Bayes classifier may be approximated by a convex combination of the base classifiers. Nonasymptotic distribution-free bounds are also developed which offer interesting new insight into how boosting works and help explain its success in practical classification problems.", "This paper proposes evaluation methods based on the use of non-dichotomous relevance judgements in IR experiments. It is argued that evaluation methods should credit IR methods for their ability to retrieve highly relevant documents. This is desirable from the user point of view in modem large IR environments. The proposed methods are (1) a novel application of P-R curves and average precision computations based on separate recall bases for documents of different degrees of relevance, and (2) two novel measures computing the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. We then demonstrate the use of these evaluation methods in a case study on the effectiveness of query types, based on combinations of query structures and expansion, in retrieving documents of various degrees of relevance. The test was run with a best match retrieval system (In- Query I) in a text database consisting of newspaper articles. The results indicate that the tested strong query structures are most effective in retrieving highly relevant documents. The differences between the query types are practically essential and statistically significant. 
More generally, the novel evaluation methods and the case demonstrate that non-dichotomous relevance assessments are applicable in IR experiments, may reveal interesting phenomena, and allow harder testing of IR methods.", "We consider the problem of @math -class classification ( @math ), where the classifier can choose to abstain from making predictions at a given cost, say, a factor @math of the cost of misclassification. Designing consistent algorithms for such @math -class classification problems with a reject option' is the main goal of this paper, thereby extending and generalizing previously known results for @math . We show that the Crammer-Singer surrogate and the one vs all hinge loss, albeit with a different predictor than the standard argmax, yield consistent algorithms for this problem when @math . More interestingly, we design a new convex surrogate that is also consistent for this problem when @math and operates on a much lower dimensional space ( @math as opposed to @math ). We also generalize all three surrogates to be consistent for any @math .", "It is shown that various classifiers that are based on minimization of a regularized risk are universally consistent, i.e., they can asymptotically learn in every classification task. The role of the loss functions used in these algorithms is considered in detail. As an application of our general framework, several types of support vector machines (SVMs) as well as regularization networks are treated. Our methods combine techniques from stochastics, approximation theory, and functional analysis", "Recent experiments and theoretical studies show that AdaBoost can overfit in the limit of large time. If running the algorithm forever is suboptimal, a natural question is how low can the prediction error be during the process of AdaBoost? We show under general regularity conditions that during the process of AdaBoost a consistent prediction is generated, which has the prediction error approximating the optimal Bayes error as the sample size increases. This result suggests that, while running the algorithm forever can be suboptimal, it is reasonable to expect that some regularization method via truncation of the process may lead to a near-optimal performance for sufficiently large sample size.", "Surrogate losses underlie numerous state-of-the-art binary classification algorithms, such as support vector machines and boosting. The impact of a surrogate loss on the statistical performance of an algorithm is well-understood in symmetric classification settings, where the misclassification costs are equal and the loss is a margin loss. In particular, classification-calibrated losses are known to imply desirable properties such as consistency. While numerous efforts have been made to extend surrogate loss-based algorithms to asymmetric settings, to deal with unequal misclassification costs or training data imbalance, considerably less attention has been paid to whether the modified loss is still calibrated in some sense. This article extends the theory of classification-calibrated losses to asymmetric problems. As in the symmetric case, it is shown that calibrated asymmetric surrogate losses give rise to excess risk bounds, which control the expected misclassification cost in terms of the excess surrogate risk. This theory is illustrated on the class of uneven margin losses, and the uneven hinge, squared error, exponential, and sigmoid losses are treated in detail." ] }
1408.2869
1662336426
In the classical Gaussian SVM classification we use the feature space projection transforming points to normal distributions with fixed covariance matrices (identity in the standard RBF and the covariance of the whole dataset in Mahalanobis RBF). In this paper we add additional information to Gaussian SVM by considering a local geometry-dependent feature space projection. We emphasize that our approach is in fact an algorithm for the construction of a new Gaussian-type kernel. We show that better (compared to standard RBF and Mahalanobis RBF) classification results are obtained in the simple case when the space is first divided by k-means into two sets and points are represented as normal distributions with covariances calculated according to the dataset partitioning. We call the constructed method C @math RBF, where @math stands for the number of clusters used in k-means. We show empirically on nine datasets from the UCI repository that C @math RBF increases the stability of the grid search (measured as the probability of finding good parameters).
In recent years there has been a growing interest in the field of metric learning @cite_2 . Among others, Mahalanobis metric learning for the RBF SVM has been proposed @cite_1 . More computationally feasible solutions, which are similarly justified, include performing a preprocessing step. One such approach is a search for the smallest-volume bounding ellipsoid @cite_13 , which is then used to define the Mahalanobis kernel. Our approach is similar to this idea as it also performs a preprocessing step in order to find some data characteristics, but instead of an optimization procedure we use a cheap clustering technique.
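As a rough illustration of the preprocessing-based alternative described above, the following sketch builds a Gaussian-type kernel whose Mahalanobis term depends on covariances estimated from a k-means partition of the data. This is only a simplified approximation of the idea; the function names, the regularization constant and the use of scikit-learn are our own assumptions, not taken from the cited works.

import numpy as np
from sklearn.cluster import KMeans

def cluster_covariances(X, k=2, reg=1e-6):
    # Partition X with k-means and estimate one (regularized) inverse
    # covariance matrix per cluster.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inv_covs = []
    for c in range(k):
        Xc = X[km.labels_ == c]
        cov = np.cov(Xc, rowvar=False) if len(Xc) > 1 else np.eye(X.shape[1])
        inv_covs.append(np.linalg.inv(cov + reg * np.eye(X.shape[1])))
    return km, inv_covs

def mahalanobis_rbf(x, y, inv_cov, gamma=1.0):
    # Gaussian kernel with a Mahalanobis distance in the exponent;
    # which cluster's inv_cov to use (e.g. the cluster of x) is left to the caller.
    d = x - y
    return float(np.exp(-gamma * d @ inv_cov @ d))

A full kernel would also have to make this construction symmetric in x and y; the sketch leaves that design choice open.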
{ "cite_N": [ "@cite_13", "@cite_1", "@cite_2" ], "mid": [ "", "207809393", "2106053110" ], "abstract": [ "", "This paper introduces a novel classification approach which improves the performance of support vector machines (SVMs) by learning a distance metric. The metric learned is a Mahalanobis metric previously trained so that examples from different classes are separated with a large margin. The learned metric is used to define a kernel function for SVM classification. In this context, the metric can be seen as a linear transformation of the original inputs before applying an SVM classifier that uses Euclidean distances. This transformation increases the separability of classes in the transformed space where the classification is applied. Experiments demonstrate significant improvements in classification tasks on various data sets.", "The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner." ] }
1408.2869
1662336426
In the classical Gaussian SVM classification we use the feature space projection transforming points to normal distributions with fixed covariance matrices (identity in the standard RBF and the covariance of the whole dataset in Mahalanobis RBF). In this paper we add additional information to Gaussian SVM by considering a local geometry-dependent feature space projection. We emphasize that our approach is in fact an algorithm for the construction of a new Gaussian-type kernel. We show that better (compared to standard RBF and Mahalanobis RBF) classification results are obtained in the simple case when the space is first divided by k-means into two sets and points are represented as normal distributions with covariances calculated according to the dataset partitioning. We call the constructed method C @math RBF, where @math stands for the number of clusters used in k-means. We show empirically on nine datasets from the UCI repository that C @math RBF increases the stability of the grid search (measured as the probability of finding good parameters).
Many researchers have investigated possible fusions of k-means and SVMs -- these approaches span from using k-means to reduce the training set size @cite_7 , through reducing the number of support vectors @cite_14 , to even incorporating the process of finding centroids directly into the optimization problem @cite_6 . However, in our work k-means is used in a completely different manner, only as a selection method for the partition. Instead of reducing the amount of available information (by either removing training samples or support vectors), it introduces an additional kind of knowledge into the process.
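For concreteness, the training-set-reduction flavour mentioned first above can be sketched as follows; this is a minimal illustration under our own assumptions (cluster count, number of representatives per cluster, use of scikit-learn), not any of the cited algorithms themselves: keep only the points closest to each k-means centroid and train the SVM on those representatives.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def kmeans_reduced_svm(X, y, n_clusters=100, per_cluster=5):
    # Cluster the data and keep the per_cluster points nearest to each centroid.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    keep = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        if idx.size == 0:
            continue
        d = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        keep.extend(idx[np.argsort(d)[:per_cluster]].tolist())
    keep = np.asarray(keep)
    # Train an RBF SVM only on the selected representatives.
    return SVC(kernel="rbf").fit(X[keep], y[keep])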
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_7" ], "mid": [ "2000277463", "2621615146", "1967880316" ], "abstract": [ "Support vector machines (SVM) have been applied to build classifiers, which can help users make well-informed business decisions. Despite their high generalisation accuracy, the response time of SVM classifiers is still a concern when applied into real-time business intelligence systems, such as stock market surveillance and network intrusion detection. This paper speeds up the response of SVM classifiers by reducing the number of support vectors. This is done by the K-means SVM (KMSVM) algorithm proposed in this paper. The KMSVM algorithm combines the K-means clustering technique with SVM and requires one more input parameter to be determined: the number of clusters. The criterion and strategy to determine the input parameters in the KMSVM algorithm are given in this paper. Experiments compare the KMSVM algorithm with SVM on real-world databases, and the results show that the KMSVM algorithm can speed up the response time of classifiers by both reducing support vectors and maintaining a similar testing accuracy to SVM.", "In many problems of machine learning, the data are distributed nonlinearly. One way to address this kind of data is training a nonlinear classifier such as kernel support vector machine (kernel SVM). However, the computational burden of kernel SVM limits its application to large scale datasets. In this paper, we propose a Clustered Support Vector Machine (CSVM), which tackles the data in a divide and conquer manner. More specifically, CSVM groups the data into several clusters, followed which it trains a linear support vector machine in each cluster to separate the data locally. Meanwhile, CSVM has an additional global regularization, which requires the weight vector of each local linear SVM aligning with a global weight vector. The global regularization leverages the information from one cluster to another, and avoids over-fitting in each cluster. We derive a data-dependent generalization error bound for CSVM, which explains the advantage of CSVM over linear SVM. Experiments on several benchmark datasets show that the proposed method outperforms linear SVM and some other related locally linear classifiers. It is also comparable to a fine-tuned kernel SVM in terms of prediction performance, while it is more efficient than kernel SVM.", "Support Vector Machine (SVM) is one of the most popular and effective classification algorithms and has attracted much attention in recent years. As an important large margin classifier, SVM dedicates to find the optimal separating hyperplane between two classes, thus can give outstanding generalization ability for it. In order to find the optimal hyperplane, we commonly take most of the labeled records as our training set. However, the separating hyperplane is only determined by a few crucial samples (Support Vectors, SVs), we needn't train SVM model on the whole training set. This paper presents a novel approach based on clustering algorithm, in which only a small subset was selected from the original training set to act as our final training set. Our algorithm works to select the most informative samples using K-means clustering algorithm, and the SVM classifier is built through training on those selected samples. Experiments show that our approach greatly reduces the scale of training set, thus effectively saves the training and predicting time of SVM, and at the same time guarantees the generalization performance." ] }
1408.2196
2952606938
Active learning strategies respond to the costly labelling task in supervised classification by selecting the most useful unlabelled examples for training a predictive model. Many conventional active learning algorithms focus on refining the decision boundary, rather than exploring new regions that can be more informative. In this setting, we propose a sequential algorithm named EG-Active that can improve any active learning algorithm by an optimal random exploration. Experimental results show a statistically significant and appreciable improvement in the performance of our new approach over the existing active feedback methods.
A variety of AL algorithms have been proposed in the literature, employing various query strategies. One of the most popular strategies is uncertainty sampling (US), where the active learner queries the point whose label it is most uncertain about @cite_4 .
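A minimal sketch of entropy-based uncertainty sampling follows; it assumes a probabilistic classifier exposing a scikit-learn-style predict_proba, and the function name is illustrative.

import numpy as np

def uncertainty_query(model, X_unlabelled):
    # Query the unlabelled point whose predicted label distribution
    # has the highest entropy (i.e. the most uncertain label).
    proba = model.predict_proba(X_unlabelled)                # shape (n, n_classes)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return int(np.argmax(entropy))                           # index of the point to label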
{ "cite_N": [ "@cite_4" ], "mid": [ "2085989833" ], "abstract": [ "The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness." ] }
1408.2196
2952606938
Active learning strategies respond to the costly labelling task in supervised classification by selecting the most useful unlabelled examples for training a predictive model. Many conventional active learning algorithms focus on refining the decision boundary, rather than exploring new regions that can be more informative. In this setting, we propose a sequential algorithm named EG-Active that can improve any active learning algorithm by an optimal random exploration. Experimental results show a statistically significant and appreciable improvement in the performance of our new approach over the existing active feedback methods.
The uncertainty in the label is usually calculated using the entropy or the variance of the label distribution. The authors in @cite_6 introduced the query-by-committee (QBC) strategy, where a committee of potential models that all agree with the currently labelled data is maintained, and the point on which most of the committee members disagree is considered for querying. Other strategies query the optimal point via the maximum expected reduction in error @cite_2 or via variance-reducing query strategies such as @cite_7 .
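A sketch of the QBC idea with a vote-entropy disagreement measure is given below; it assumes the committee members are fitted classifiers predicting labels 0..n_classes-1, and it is one common instantiation rather than necessarily the exact scheme of @cite_6.

import numpy as np

def qbc_query(committee, X_unlabelled, n_classes):
    # Each committee member votes on every unlabelled point.
    votes = np.stack([m.predict(X_unlabelled) for m in committee])   # (members, n)
    vote_entropy = np.zeros(X_unlabelled.shape[0])
    for c in range(n_classes):
        p = np.mean(votes == c, axis=0)        # fraction of votes for class c
        vote_entropy -= p * np.log(p + 1e-12)
    # Query the point on which the committee disagrees the most.
    return int(np.argmax(vote_entropy))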
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_2" ], "mid": [ "", "2080021732", "2127816222" ], "abstract": [ "", "We propose an algorithm called query by commitee , in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement . The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms.", "Active and semi-supervised learning are important techniques when labeled data are scarce. We combine the two under a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The semi-supervised learning problem is then formulated in terms of a Gaussian random field on this graph, the mean of which is characterized in terms of harmonic functions. Active learning is performed on top of the semisupervised learning scheme by greedily selecting queries from the unlabeled data to minimize the estimated expected classification error (risk); in the case of Gaussian fields the risk is efficiently computed using matrix methods. We present experimental results on synthetic data, handwritten digit recognition, and text classification tasks. The active learning scheme requires a much smaller number of queries to achieve high accuracy compared with random query selection." ] }
1408.2196
2952606938
Active learning strategies respond to the costly labelling task in supervised classification by selecting the most useful unlabelled examples for training a predictive model. Many conventional active learning algorithms focus on refining the decision boundary, rather than exploring new regions that can be more informative. In this setting, we propose a sequential algorithm named EG-Active that can improve any active learning algorithm by an optimal random exploration. Experimental results show a statistically significant and appreciable improvement in the performance of our new approach over the existing active feedback methods.
Recently, random exploration has been used in different domains such as recommender systems (RS) and information retrieval. For example, in @cite_0 @cite_1 , the authors model the RS as a contextual bandit problem and propose an algorithm which performs random recommendations according to the risk of upsetting the user. However, to our knowledge there has been only one paper addressing random exploration in active learning. The authors in @cite_3 address this problem by randomly choosing between exploration and exploitation at each round, and then receiving feedback on how effective the exploration is. The impact of exploration is measured by the induced change in the learned classifier when an exploratory example is labelled and added to the training set. The active learner updates the probability of exploring in subsequent rounds based on the feedback it has received. However, no optimisation technique is used to compute the optimal amount of exploration, and the work only improves the uncertainty sampling technique.
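The exploration/exploitation alternation described above can be sketched as a simple epsilon-greedy wrapper around any base query strategy. This is only a schematic approximation under our own assumptions; the cited works use more elaborate, feedback-driven updates of the exploration probability.

import numpy as np

def explore_or_exploit(base_query, X_unlabelled, epsilon, rng=None):
    # With probability epsilon pick a random unlabelled point (exploration),
    # otherwise defer to the base active-learning strategy (exploitation).
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(X_unlabelled)))
    return base_query(X_unlabelled)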
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_3" ], "mid": [ "116854235", "", "2123108597" ], "abstract": [ "Most existing approaches in Mobile Context-Aware Recommender Systems focus on recommending relevant items to users taking into account contextual information, such as time, location, or social aspects. However, none of them has considered the problem of user's content evolution. We introduce in this paper an algorithm that tackles this dynamicity. It is based on dynamic exploration exploitation and can adaptively balance the two aspects by deciding which user's situation is most relevant for exploration or exploitation. Within a deliberately designed offline simulation framework we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms surveyed algorithms.", "", "Active machine learning algorithms are used when large numbers of unlabeled examples are available and getting labels for them is costly (e.g. requiring consulting a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We propose a new active learning algorithm that balances such exploration with refining of the decision boundary by dynamically adjusting the probability to explore at each step. Our experimental results demonstrate improved performance on data sets that require extensive exploration while remaining competitive on data sets that do not. Our algorithm also shows significant tolerance of noise." ] }
1408.2327
2107505626
Many of the ordinal regression models that have been proposed in the literature can be seen as methods that minimize a convex surrogate of the zero-one, absolute, or squared loss functions. A key property that allows to study the statistical implications of such approximations is that of Fisher consistency. Fisher consistency is a desirable property for surrogate loss functions and implies that in the population setting, i.e., if the probability distribution that generates the data were available, then optimization of the surrogate would yield the best possible model. In this paper we will characterize the Fisher consistency of a rich family of surrogate loss functions used in the context of ordinal regression, including support vector ordinal regression, ORBoosting and least absolute deviation. We will see that, for a family of surrogate loss functions that subsumes support vector ordinal regression and ORBoosting, consistency can be fully characterized by the derivative of a real-valued function at zero, as happens for convex margin-based surrogates in binary classification. We also derive excess risk bounds for a surrogate of the absolute error that generalize existing risk bounds for binary classification. Finally, our analysis suggests a novel surrogate of the squared error loss. We compare this novel surrogate with competing approaches on 9 different datasets. Our method shows to be highly competitive in practice, outperforming the least squares loss on 7 out of 9 datasets.
Fisher consistency of binary and multiclass classification for the zero-one loss has been studied for a variety of surrogate loss functions (see e.g. @cite_0 @cite_2 @cite_1 @cite_3 ). Some of the results in this paper generalize known results for binary classification to the ordinal regression setting. In particular, prior work provides a characterization of Fisher consistency for convex margin-based surrogates, which we extend to the all threshold (AT) and immediate threshold (IT) families of surrogate loss functions. The excess error bound that we provide for the AT surrogate also generalizes the excess error bound given in [Section 2.3] of Bartlett2003 .
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "2116444583", "1480538416", "2101557761", "2023163512" ], "abstract": [ "Two-category support vector machines (SVM) have been very popular in the machine learning community for classification problems. Solving multicategory problems by a series of binary classifiers is quite common in the SVM paradigm; however, this approach may fail under various circumstances. We propose the multicategory support vector machine (MSVM), which extends the binary SVM to the multicategory case and has good theoretical properties. The proposed method provides a unifying framework when there are either equal or unequal misclassification costs. As a tuning criterion for the MSVM, an approximate leave-one-out cross-validation function, called Generalized Approximate Cross Validation, is derived, analogous to the binary case. The effectiveness of the MSVM is demonstrated through the applications to cancer classification using microarray data and cloud classification with satellite radiance profiles.", "Binary classification is a well studied special case of the classification problem. Statistical properties of binary classifiers, such as consistency, have been investigated in a variety of settings. Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that one can lose consistency in generalizing a binary classification method to deal with multiple classes. We study a rich family of multiclass methods and provide a necessary and sufficient condition for their consistency. We illustrate our approach by applying it to some multiclass methods proposed in the literature.", "We study losses for binary classification and class probability estimation and extend the understanding of them from margin losses to general composite losses which are the composition of a proper loss with a link function. We characterise when margin losses can be proper composite losses, explicitly show how to determine a symmetric loss in full from half of one of its partial losses, introduce an intrinsic parametrisation of composite binary losses and give a complete characterisation of the relationship between proper losses and \"classification calibrated\" losses. We also consider the question of the \"best\" surrogate binary loss. We introduce a precise notion of \"best\" and show there exist situations where two convex surrogate losses are incommensurable. We provide a complete explicit characterisation of the convexity of composite binary losses in terms of the link function and the weight function associated with the proper loss which make up the composite loss. This characterisation suggests new ways of \"surrogate tuning\" as well as providing an explicit characterisation of when Bregman divergences on the unit interval are convex in their second argument. Finally, in an appendix we present some new algorithm-independent results on the relationship between properness, convexity and robustness to misclassification noise for binary losses and show that all convex proper losses are non-robust to misclassification noise.", "We study how closely the optimal Bayes error rate can be approximately reached using a classification algorithm that computes a classifier by minimizing a convex upper bound of the classification error function. The measurement of closeness is characterized by the loss function used in the estimation. 
We show that such a classification scheme can be generally regarded as a (nonmaximum-likelihood) conditional in-class probability estimate, and we use this analysis to compare various convex loss functions that have appeared in the literature. Furthermore, the theoretical insight allows us to design good loss functions with desirable properties. Another aspect of our analysis is to demonstrate the consistency of certain classification methods using convex risk minimization. This study sheds light on the good performance of some recently proposed linear classification methods including boosting and support vector machines. It also shows their limitations and suggests possible improvements." ] }
1408.2116
2951931459
In many wireless networks, there is neither a fixed physical backbone nor centralized network management. The nodes of such a network have to self-organize in order to maintain a virtual backbone used to route messages. Moreover, any node of the network can a priori be at the origin of a malicious attack. Thus, on the one hand the backbone must be fault-tolerant, and on the other hand it can be useful to monitor all network communications to identify an attack as soon as possible. We are interested in the minimum problem, a generalization of the classical minimum Vertex Cover problem, which allows obtaining a connected backbone. Recently, DelbotLP13 proposed a new centralized algorithm with a constant approximation ratio of @math for this problem. In this paper, we propose a distributed and self-stabilizing version of their algorithm with the same approximation guarantee. To the best of the authors' knowledge, it is the first distributed and fault-tolerant algorithm for this problem. The approach followed to solve the considered problem is based on the construction of a connected minimal clique partition. Therefore, we also design the first distributed self-stabilizing algorithm for this problem, which is of independent interest.
From a self-stabilizing point of view, Kiniwa @cite_34 proposed the first self-stabilizing algorithm for this problem, which constructs a 2-approximate vertex cover in general networks with unique node identifiers and under a fair distributed daemon. This algorithm is based on the construction of a maximal matching, which yields a 2-approximate vertex cover by selecting the extremities of the matching edges. Turau @cite_0 considered the same problem in anonymous networks and gave a 3-approximation algorithm under a distributed daemon. Since it is impossible to construct a maximal matching in an anonymous network, this algorithm first establishes a bicolored graph of the network, which then allows constructing a maximal matching to obtain a vertex cover. Turau @cite_47 designed a self-stabilizing vertex cover algorithm with an approximation ratio of 2 in anonymous networks under an unfair distributed daemon. This algorithm executes the algorithm in @cite_0 several times on parts of the graph to improve the quality of the constructed solution.
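The matching-based construction underlying the algorithms above is, in its centralized form, the classical 2-approximation for vertex cover: take both endpoints of every edge of a maximal matching. A minimal (non-distributed, non-self-stabilizing) sketch, assuming the graph is given as an edge list:

def matching_vertex_cover(edges):
    # Greedily build a maximal matching; its endpoints form a vertex cover
    # of size at most twice the optimum.
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover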
{ "cite_N": [ "@cite_0", "@cite_47", "@cite_34" ], "mid": [ "1981311956", "2064150382", "1855768119" ], "abstract": [ "Abstract The non-computability of many distributed tasks in anonymous networks is well known. This paper presents a deterministic self-stabilizing algorithm to compute a ( 3 − 2 Δ + 1 ) -approximation of a minimum vertex cover in anonymous networks. The algorithm operates under the distributed unfair scheduler, stabilizes after O ( n + m ) moves respectively O ( Δ ) rounds, and requires O ( log n ) storage per node. Recovery from a single fault is reached within a constant time and the contamination number is O ( Δ ) . For trees the algorithm computes a 2 -approximation of a minimum vertex cover.", "This paper presents a deterministic self-stabilizing algorithm that approximates a minimum vertex cover in anonymous networks with ratio 2 using the distributed scheduler and the link-register model with composite atomicity. No algorithm with a better approximation ratio can exist. The algorithm stabilizes in O(min n, Δ2, Δ log3 n ) rounds and requires O(Δ) memory per node.", "A vertex cover of a graph is a subset of vertices such that each edge has at least one endpoint in the subset. Determining the minimum vertex cover is a well-known NP-complete problem in a sequential setting. Several techniques, e.g., depth-first search, a local ratio theorem, and semidefinite relaxation, have given good approximation algorithms. However, some of them cannot be applied to a distributed setting, in particular self-stabilizing algorithms. Thus only a 2-approximation solution based on a self-stabilizing maximal matching has been obviously known until now. In this paper we propose a new self-stabilizing vertex cover algorithm that achieves (2–1 Δ)-approximation ratio, where Δ is the maximum degree of a given network. We first introduce a sequential (2–1 Δ)-approximation algorithm that uses a maximal matching with the high-degree-first order of vertices. Then we present a self-stabilizing algorithm based on the same idea, and show that the output of the algorithm is the same as that of the sequential one." ] }
1408.1276
2949322006
We present a generic and automated approach to re-identifying nodes in anonymized social networks which enables novel anonymization techniques to be quickly evaluated. It uses machine learning (decision forests) to matching pairs of nodes in disparate anonymized sub-graphs. The technique uncovers artefacts and invariants of any black-box anonymization scheme from a small set of examples. Despite a high degree of automation, classification succeeds with significant true positive rates even when small false positive rates are sought. Our evaluation uses publicly available real world datasets to study the performance of our approach against real-world anonymization strategies, namely the schemes used to protect datasets of The Data for Development (D4D) Challenge. We show that the technique is effective even when only small numbers of samples are used for training. Further, since it detects weaknesses in the black-box anonymization scheme it can re-identify nodes in one social network when trained on another.
Anonymizing social networks has proven to be a tough challenge. Active (using sybil nodes) and passive attacks based on searching for patterns in sub-graphs were presented in @cite_12 to learn relationships between pre-selected target individuals of an anonymised social graph. Group membership has been shown @cite_8 to be sufficient to identify an individual in a social network.
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2163263459", "2143445293" ], "abstract": [ "In a social network, nodes correspond topeople or other social entities, and edges correspond to social links between them. In an effort to preserve privacy, the practice of anonymization replaces names with meaningless unique identifiers. We describe a family of attacks such that even from a single anonymized copy of a social network, it is possible for an adversary to learn whether edges exist or not between specific targeted pairs of nodes.", "Social networking sites such as Facebook, LinkedIn, and Xing have been reporting exponential growth rates and have millions of registered users. In this paper, we introduce a novel de-anonymization attack that exploits group membership information that is available on social networking sites. More precisely, we show that information about the group memberships of a user (i.e., the groups of a social network to which a user belongs) is sufficient to uniquely identify this person, or, at least, to significantly reduce the set of possible candidates. That is, rather than tracking a user's browser as with cookies, it is possible to track a person. To determine the group membership of a user, we leverage well-known web browser history stealing attacks. Thus, whenever a social network user visits a malicious website, this website can launch our de-anonymization attack and learn the identity of its visitors. The implications of our attack are manifold, since it requires a low effort and has the potential to affect millions of social networking users. We perform both a theoretical analysis and empirical measurements to demonstrate the feasibility of our attack against Xing, a medium-sized social network with more than eight million members that is mainly used for business relationships. Furthermore, we explored other, larger social networks and performed experiments that suggest that users of Facebook and LinkedIn are equally vulnerable." ] }
1408.1276
2949322006
We present a generic and automated approach to re-identifying nodes in anonymized social networks which enables novel anonymization techniques to be quickly evaluated. It uses machine learning (decision forests) to matching pairs of nodes in disparate anonymized sub-graphs. The technique uncovers artefacts and invariants of any black-box anonymization scheme from a small set of examples. Despite a high degree of automation, classification succeeds with significant true positive rates even when small false positive rates are sought. Our evaluation uses publicly available real world datasets to study the performance of our approach against real-world anonymization strategies, namely the schemes used to protect datasets of The Data for Development (D4D) Challenge. We show that the technique is effective even when only small numbers of samples are used for training. Further, since it detects weaknesses in the black-box anonymization scheme it can re-identify nodes in one social network when trained on another.
The authors of @cite_13 de-anonymize the Kaggle dataset released for link prediction using a pre-crawled version of the same dataset. Their work combines de-anonymization and link prediction using random forests; however, the de-anonymization phase does not use any machine learning. Random forests are used to predict those links which pure de-anonymization could not. Additionally, their work relies on the availability of ground truth to mount a de-anonymization attack on a dataset which is not adversarial. Very good results have also been obtained using pure random forests for link prediction, rather than de-anonymization, on the same dataset. In contrast, we do not have directionality available for our graphs, and our feature extraction is simple and efficient, an important factor for attacking huge datasets.
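Schematically, forest-based pair matching of this kind boils down to representing a candidate pair of nodes by a few cheap structural features and letting a random forest decide whether they correspond to the same entity. The sketch below is a generic illustration with made-up feature choices; it is not the exact feature set used in this work or in @cite_13.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(deg1, deg2, u, v):
    # Illustrative structural features for a candidate pair
    # (u in the first sub-graph, v in the second).
    return [deg1[u], deg2[v], abs(deg1[u] - deg2[v]), deg1[u] * deg2[v]]

def train_matcher(feature_rows, labels, n_trees=100):
    # labels[i] is 1 if the i-th pair is a true match, 0 otherwise.
    X = np.asarray(feature_rows, dtype=float)
    return RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X, labels)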
{ "cite_N": [ "@cite_13" ], "mid": [ "2951118224" ], "abstract": [ "This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated annealing-based weighted graph matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction---the latter is required to achieve good performance on the portion of the test set not de-anonymized---for example by training the predictor on the de-anonymized portion of the test set, and combining probabilistic predictions from de-anonymization and link prediction." ] }
1408.1605
2949338522
We present the results obtained by using an evolution of our CUDA-based solution for the exploration of large graphs via a Breadth First Search. This latest version fully exploits the features of the Kepler architecture and relies on a combination of techniques to reduce both the number of communications among the GPUs and the amount of exchanged data. The final result is a code that can visit more than 800 billion edges per second by using a cluster equipped with 4096 Tesla K20X GPUs.
Ueno @cite_30 presented a hybrid CPU-GPU implementation of the Graph500 benchmark, using the 2D partitioning proposed in @cite_40 . Their implementation uses the technique introduced by Merrill @cite_8 to create the edge frontier and resorts to a novel compression technique to shrink the size of messages. They also implemented a sophisticated method to overlap communication and computation in order to reduce the working memory size of the GPUs.
{ "cite_N": [ "@cite_30", "@cite_40", "@cite_8" ], "mid": [ "", "2141662114", "1985291160" ], "abstract": [ "", "Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth- first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported.", "Breadth-first search (BFS) is a core primitive for graph traversal and a basis for many higher-level graph analysis algorithms. It is also representative of a class of parallel computations whose memory accesses and work distribution are both irregular and data-dependent. Recent work has demonstrated the plausibility of GPU sparse graph traversal, but has tended to focus on asymptotically inefficient algorithms that perform poorly on graphs with non-trivial diameter. We present a BFS parallelization focused on fine-grained task management constructed from efficient prefix sum that achieves an asymptotically optimal O(|V|+|E|) work complexity. Our implementation delivers excellent performance on diverse graphs, achieving traversal rates in excess of 3.3 billion and 8.3 billion traversed edges per second using single and quad-GPU configurations, respectively. This level of performance is several times faster than state-of-the-art implementations both CPU and GPU platforms." ] }
1408.1605
2949338522
We present the results obtained by using an evolution of our CUDA-based solution for the exploration, via a Breadth First Search, of large graphs. This latest version exploits at its best the features of the Kepler architecture and relies on a combination of techniques to reduce both the number of communications among the GPUs and the amount of exchanged data. The final result is a code that can visit more than 800 billion edges in a second by using a cluster equipped with 4096 Tesla K20X GPUs.
A recent work @cite_2 demonstrates the feasibility of an effective implementation of a distributed direction-optimizing approach on the BlueGene P by using a 1D partitioning. That partitioning simplifies the parallelization of the bottom-up algorithm, but it may require a significant increase in the number of communications. Their results show that the combination of the underlying architecture and the SPI interface is well suited to the purpose. The authors report that replacing SPI with MPI incurs a performance loss of nearly a factor of 5, although the MPI-based implementation cannot be considered optimal. This suggests that the scalability of a distributed implementation may be worse on different network architectures.
{ "cite_N": [ "@cite_2" ], "mid": [ "1992011279" ], "abstract": [ "The world of Big Data is changing dramatically right before our eyes-from the amount of data being produced to the way in which it is structured and used. The trend of \"big data growth\" presents enormous challenges, but it also presents incredible scientific and business opportunities. Together with the data explosion, we are also witnessing a dramatic increase in data processing capabilities, thanks to new powerful parallel computer architectures and more sophisticated algorithms. In this paper we describe the algorithmic design and the optimization techniques that led to the unprecedented processing rate of 15.3 trillion edges per second on 64 thousand Blue Gene Q nodes, that allowed the in-memory exploration of a petabyte-scale graph in just a few seconds. This paper provides insight into our parallelization and optimization techniques. We believe that these techniques can be successfully applied to a broader class of graph algorithms." ] }
1408.1416
1587084186
We demonstrate how the multitude of sensors on a smartphone can be used to construct a reliable hardware fingerprint of the phone. Such a fingerprint can be used to de-anonymize mobile devices as they connect to web sites, and as a second factor in identifying legitimate users to a remote server. We present two implementations: one based on analyzing the frequency response of the speakerphone-microphone system, and another based on analyzing device-specific accelerometer calibration errors. Our accelerometer-based fingerprint is especially interesting because the accelerometer is accessible via JavaScript running in a mobile web browser without requesting any permissions or notifying the user. We present the results of the most extensive sensor fingerprinting experiment done to date, which measured sensor properties from over 10,000 mobile devices. We show that the entropy from sensor fingerprinting is sufficient to uniquely identify a device among thousands of devices, with low probability of collision.
Sensor fingerprinting has received significant attention in recent years, primarily in the context of de-anonymizing photos by correlating them with images from a known source. In @cite_6 , images taken by different cameras are processed to derive a reference noise pattern that is specific to each sensor. Based on this pattern, additional images are associated with their most likely source. Noise extraction algorithms are a critical part of this approach, and @cite_29 proposes some further enhancements. The image capture pipeline is investigated in @cite_3 , where different stages of the process are revealed to introduce distinct artifacts. These artifacts can be used to design more robust identification algorithms.
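To make the sensor-pattern-noise idea concrete, here is a hedged toy sketch on synthetic data: a camera's reference pattern is estimated by averaging the noise residuals (image minus its denoised version) of several of its images, and a test image is attributed by correlating its residual against that reference. The Gaussian filter is only a stand-in for the wavelet denoiser used in the cited work, and all names and constants are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # stand-in denoiser, not the one from the cited papers

def noise_residual(img):
    return img - gaussian_filter(img, sigma=1.5)

def reference_pattern(images):
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
cam_noise = 0.2 * rng.standard_normal((64, 64))            # fixed per-camera pattern
shots = [rng.random((64, 64)) + cam_noise for _ in range(10)]
ref = reference_pattern(shots)

same_cam = rng.random((64, 64)) + cam_noise                 # new image, same camera
other_cam = rng.random((64, 64)) + 0.2 * rng.standard_normal((64, 64))
print(correlation(noise_residual(same_cam), ref),           # clearly positive
      correlation(noise_residual(other_cam), ref))          # close to zero
```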
{ "cite_N": [ "@cite_29", "@cite_3", "@cite_6" ], "mid": [ "2129240197", "2154889936", "2124695272" ], "abstract": [ "Sensor pattern noises (SPNs), extracted from digital images to serve as the fingerprints of imaging devices, have been proved as an effective way for digital device identification. However, as we demonstrate in this work, the limitation of the current method of extracting SPNs is that the SPNs extracted from images can be severely contaminated by details from scenes, and as a result, the identification rate is unsatisfactory unless images of a large size are used. In this work, we propose a novel approach for attenuating the influence of details from scenes on SPNs so as to improve the device identification rate of the identifier. The hypothesis underlying our SPN enhancement method is that the stronger a signal component in an SPN is, the less trustworthy the component should be, and thus should be attenuated. This hypothesis suggests that an enhanced SPN can be obtained by assigning weighting factors inversely proportional to the magnitude of the SPN components.", "The various image-processing stages in a digital camera pipeline leave telltale footprints, which can be exploited as forensic signatures. These footprints consist of pixel defects, of unevenness of the responses in the charge-coupled device sensor, black current noise, and may originate from proprietary interpolation algorithms involved in color filter array. Various imaging device (camera, scanner, etc.) identification methods are based on the analysis of these artifacts. In this paper, we set to explore three sets of forensic features, namely binary similarity measures, image-quality measures, and higher order wavelet statistics in conjunction with SVM classifiers to identify the originating camera. We demonstrate that our camera model identification algorithm achieves more accurate identification, and that it can be made robust to a host of image manipulations. The algorithm has the potential to discriminate camera units within the same model.", "In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction." ] }
1408.1416
1587084186
We demonstrate how the multitude of sensors on a smartphone can be used to construct a reliable hardware fingerprint of the phone. Such a fingerprint can be used to de-anonymize mobile devices as they connect to web sites, and as a second factor in identifying legitimate users to a remote server. We present two implementations: one based on analyzing the frequency response of the speakerphone-microphone system, and another based on analyzing device-specific accelerometer calibration errors. Our accelerometer-based fingerprint is especially interesting because the accelerometer is accessible via JavaScript running in a mobile web browser without requesting any permissions or notifying the user. We present the results of the most extensive sensor fingerprinting experiment done to date, which measured sensor properties from over 10,000 mobile devices. We show that the entropy from sensor fingerprinting is sufficient to uniquely identify a device among thousands of devices, with low probability of collision.
Several works aim to fingerprint a device via the web in ways that go beyond standard HTTP cookies; these works rely on software-related features rather than hardware-related ones. Ref. @cite_32 showed that system configuration parameters such as screen resolution, browser plugins and system fonts, as well as the contents of HTTP headers -- User-Agent and Accept -- make it possible to fingerprint a device. Ref. @cite_10 showed that good device identification can also be achieved using the values of the User-Agent, IP address, cookies and login IDs, all of which can be obtained from standard web traffic logs.
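A small illustrative sketch of this style of software fingerprinting follows; the attribute names and the example population are made up, not taken from the cited studies. It hashes configuration attributes into a fingerprint and shows how the surprisal (in bits) of a single attribute value can be estimated from an observed population, the kind of entropy accounting the cited work performs.

```python
import hashlib, math
from collections import Counter

def fingerprint(attrs: dict) -> str:
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def surprisal_bits(value, observed_values):
    counts = Counter(observed_values)
    p = counts[value] / len(observed_values)
    return -math.log2(p)

client = {"user_agent": "Mozilla/5.0 ...", "screen": "1920x1080",
          "fonts": "Arial;Helvetica;Times", "plugins": "pdf;flash"}
print(fingerprint(client))

population = ["1920x1080"] * 60 + ["1366x768"] * 30 + ["2560x1440"] * 10
print(surprisal_bits("2560x1440", population), "bits from screen resolution alone")
```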
{ "cite_N": [ "@cite_10", "@cite_32" ], "mid": [ "57737232", "1525967479" ], "abstract": [ "Many web services aim to track clients as a basis for analyzing their behavior and providing personalized services. Despite much debate regarding the collection of client information, there have been few quantitative studies that analyze the effectiveness of host-tracking and the associated privacy risks. In this paper, we perform a large-scale study to quantify the amount of information revealed by common host identifiers. We analyze month-long anonymized datasets collected by the Hotmail web-mail service and the Bing search engine, which include millions of hosts across the global IP address space. In this setting, we compare the use of multiple identifiers, including browser information, IP addresses, cookies, and user login IDs. We further demonstrate the privacy and security implications of host-tracking in two contexts. In the first, we study the causes of cookie churn in web services, and show that many returning users can still be tracked even if they clear cookies or utilize private browsing. In the second, we show that host-tracking can be leveraged to improve security. Specifically, by aggregating information across hosts, we uncover a stealthy malicious attack associated with over 75,000 bot accounts that forward cookies to distributed locations. This work was done while Ting-Fang was an intern at Microsoft Research. Martin Abadi is also affiliated with the University of California, Santa Cruz.", "We investigate the degree to which modern web browsers are subject to \"device fingerprinting\" via the version and configuration information that they will transmit to websites upon request. We implemented one possible fingerprinting algorithm, and collected these fingerprints from a large sample of browsers that visited our test side, panopticlick.eff.org. We observe that the distribution of our fingerprint contains at least 18.1 bits of entropy, meaning that if we pick a browser at random, at best we expect that only one in 286,777 other browsers will share its fingerprint. Among browsers that support Flash or Java, the situation is worse, with the average browser carrying at least 18.8 bits of identifying information. 94.2 of browsers with Flash or Java were unique in our sample. By observing returning visitors, we estimate how rapidly browser fingerprints might change over time. In our sample, fingerprints changed quite rapidly, but even a simple heuristic was usually able to guess when a fingerprint was an \"upgraded\" version of a previously observed browser's fingerprint, with 99.1 of guesses correct and a false positive rate of only 0.86 . We discuss what privacy threat browser fingerprinting poses in practice, and what countermeasures may be appropriate to prevent it. There is a tradeoff between protection against fingerprintability and certain kinds of debuggability, which in current browsers is weighted heavily against privacy. Paradoxically, anti-fingerprinting privacy technologies can be self-defeating if they are not used by a sufficient number of people; we show that some privacy measures currently fall victim to this paradox, but others do not." ] }
1408.1416
1587084186
We demonstrate how the multitude of sensors on a smartphone can be used to construct a reliable hardware fingerprint of the phone. Such a fingerprint can be used to de-anonymize mobile devices as they connect to web sites, and as a second factor in identifying legitimate users to a remote server. We present two implementations: one based on analyzing the frequency response of the speakerphone-microphone system, and another based on analyzing device-specific accelerometer calibration errors. Our accelerometer-based fingerprint is especially interesting because the accelerometer is accessible via JavaScript running in a mobile web browser without requesting any permissions or notifying the user. We present the results of the most extensive sensor fingerprinting experiment done to date, which measured sensor properties from over 10,000 mobile devices. We show that the entropy from sensor fingerprinting is sufficient to uniquely identify a device among thousands of devices, with low probability of collision.
In the past several years it has been shown @cite_30 that many web sites identify a web client based on "super-cookies". These are identifiers stored on the local host in various persistent ways outside the control of the browser, hence the browser cannot impose on them the standard restrictions that apply to HTTP cookies.
{ "cite_N": [ "@cite_30" ], "mid": [ "1770989051" ], "abstract": [ "In August 2009, we demonstrated that popular websites were using “Flash cookies” to track users. Some advertisers had adopted this technology because it allowed persistent tracking even where users had taken steps to avoid web profiling. We also demonstrated “respawning” on top sites with Flash technology. This allowed sites to reinstantiate HTTP cookies deleted by a user, making tracking more resistant to users’ privacy-seeking behaviors.In this followup study, we reassess the Flash cookies landscape and examine a new tracking vector, HTML5 local storage and Cache-Cookies via ETags. We found over 5,600 standard HTTP cookies on popular sites, over 4,900 were from third parties. Google-controlled cookies were present on 97 of the top 100 sites, including popular government websites. Seventeen sites were using HTML5, and seven of those sites had HTML5 local storage and HTTP cookies with matching values. Flash cookies were present on 37 of the top 100 sites. We found two sites that were respawning cookies, including one site – hulu.com – where both Flash and cache cookies were employed to make identifiers more persistent. The cache cookie method used ETags, and is capable of unique tracking even where all cookies are blocked by the user and “Private Browsing Mode” is enabled.Our 2009 study is also available at SSRN: http: ssrn.com abstract=1446862." ] }
1408.1416
1587084186
We demonstrate how the multitude of sensors on a smartphone can be used to construct a reliable hardware fingerprint of the phone. Such a fingerprint can be used to de-anonymize mobile devices as they connect to web sites, and as a second factor in identifying legitimate users to a remote server. We present two implementations: one based on analyzing the frequency response of the speakerphone-microphone system, and another based on analyzing device-specific accelerometer calibration errors. Our accelerometer-based fingerprint is especially interesting because the accelerometer is accessible via JavaScript running in a mobile web browser without requesting any permissions or notifying the user. We present the results of the most extensive sensor fingerprinting experiment done to date, which measured sensor properties from over 10,000 mobile devices. We show that the entropy from sensor fingerprinting is sufficient to uniquely identify a device among thousands of devices, with low probability of collision.
Some works deal with remote hardware-based fingerprinting. The best-known example is @cite_33 , which showed how to measure a device's clock skew using ICMP and TCP traffic; the clock skew was shown to be a good device identifier. There is also a body of work that proposes remote fingerprinting methods based on wireless traffic, for example radiometric analysis of IEEE 802.11 transmitters @cite_16 , signal phase identification of Bluetooth transmitters @cite_20 , or timing analysis of 802.11 probe request frames @cite_18 .
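A rough sketch of the clock-skew idea is given below. It is not the cited paper's estimator (which fits TCP timestamp options with a linear-programming bound rather than least squares); it only illustrates that the slope of the remote-minus-local timestamp offset over time estimates the skew, here on synthetic data with an assumed true skew of 37 ppm.

```python
import numpy as np

def estimate_skew_ppm(local_times, remote_times):
    offsets = np.asarray(remote_times) - np.asarray(local_times)
    slope, _ = np.polyfit(np.asarray(local_times), offsets, 1)  # least-squares line
    return slope * 1e6                                          # parts per million

local = np.arange(0, 3600, 10.0)                 # one probe every 10 s for an hour
true_skew = 37e-6                                # remote clock runs 37 ppm fast
remote = local * (1 + true_skew) + 0.002 * np.random.randn(local.size)
print(round(estimate_skew_ppm(local, remote), 1), "ppm")   # roughly 37
```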
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_33", "@cite_20" ], "mid": [ "1981870545", "2155886813", "2104599106", "1510900102" ], "abstract": [ "We propose a new fingerprinting technique that differentiates between unique devices over a Wireless Local Area Network (WLAN) simply through the timing analysis of 802.11 probe request frames. Our technique can be applied to spoof detection, network reconnaissance, and implementation of access control against masquerading attacks. Experimental results indicate that our technique is consistent and accurate in differentiating between unique devices. In contrast with existing wireless fingerprinting techniques, our technique is passive, non-invasive and does not require the co-operation of fingerprintee hosts.", "We design, implement, and evaluate a technique to identify the source network interface card (NIC) of an IEEE 802.11 frame through passive radio-frequency analysis. This technique, called PARADIS, leverages minute imperfections of transmitter hardware that are acquired at manufacture and are present even in otherwise identical NICs. These imperfections are transmitter-specific and manifest themselves as artifacts of the emitted signals. In PARADIS, we measure differentiating artifacts of individual wireless frames in the modulation domain, apply suitable machine-learning classification tools to achieve significantly higher degrees of NIC identification accuracy than prior best known schemes. We experimentally demonstrate effectiveness of PARADIS in differentiating between more than 130 identical 802.11 NICs with accuracy in excess of 99 . Our results also show that the accuracy of PARADIS is resilient against ambient noise and fluctuations of the wireless channel. Although our implementation deals exclusively with IEEE 802.11, the approach itself is general and will work with any digital modulation scheme.", "We introduce the area of remote physical device fingerprinting, or fingerprinting a physical device, as opposed to an operating system or class of devices, remotely, and without the fingerprinted device's known cooperation. We accomplish this goal by exploiting small, microscopic deviations in device hardware: clock skews. Our techniques do not require any modification to the fingerprinted devices. Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semipassive techniques when the fingerprinted device is behind a NAT or firewall, and. also when the device's system time is maintained via NTP or SNTP. One can use our techniques to obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device. Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces.", "Radio Frequency Fingerprinting (RFF) is a technique, which has been used to identify wireless devices. 
It essentially involves the detection of the transient signal and the extraction of the fingerprint. The detection phase, in our opinion, is the most challenging yet crucial part of the RFF process. Current approaches, namely Threshold and Bayesian Step Change Detector, which use amplitude characteristics of signals for transient detection, perform poorly with certain types of signals. This paper presents a new algorithm that exploits the phase characteristics for detection purposes. Validation using Bluetooth signals has resulted in a success rate of approximately 85-90 percent. We anticipate that the higher detection rate will result in a higher classification rate and thus support various device authetication schemes in the wireless domain." ] }
1408.1228
2951137876
Humans are social animals, they interact with different communities of friends to conduct different activities. The literature shows that human mobility is constrained by their social relations. In this paper, we investigate the social impact of a person's communities on his mobility, instead of all friends from his online social networks. This study can be particularly useful, as certain social behaviors are influenced by specific communities but not all friends. To achieve our goal, we first develop a measure to characterize a person's social diversity, which we term community entropy'. Through analysis of two real-life datasets, we demonstrate that a person's mobility is influenced only by a small fraction of his communities and the influence depends on the social contexts of the communities. We then exploit machine learning techniques to predict users' future movement based on their communities' information. Extensive experiments demonstrate the prediction's effectiveness.
Thanks to the emergence of LBSNs, mobility and its connection with social relations have been intensively studied @cite_19 @cite_15 @cite_28 . Research in the area follows mainly two directions. One direction uses the location information from LBSNs to predict friendships (see, e.g., @cite_3 @cite_11 @cite_13 @cite_26 @cite_37 @cite_34 @cite_10 ); the other studies the impact of friendships on locations @cite_2 @cite_26 @cite_8 @cite_37 @cite_39 , which is what we focus on in the current work.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_8", "@cite_28", "@cite_10", "@cite_3", "@cite_39", "@cite_19", "@cite_2", "@cite_15", "@cite_34", "@cite_13", "@cite_11" ], "mid": [ "2069090820", "109765122", "2110953678", "2125761757", "2295520632", "2167686542", "2140535046", "2294749418", "2168346693", "1599541430", "2013315566", "1995629273", "2126895033" ], "abstract": [ "Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.", "In August 2010 Facebook launched Places, a location-based service that allows users to check into points of interest and share their physical whereabouts with friends. The friends who see these events in their News Feed can then respond to these check-ins by liking or commenting on them. These data consisting of the places people go and how their friends react to them are a rich, novel dataset. In this paper we first analyze this dataset to understand the factors that influence where users check in, including previous check-ins, similarity to other places, where their friends check in, time of day, and demographics. We show how these factors can be used to build a predictive model of where users will check in next. Then we analyze how users respond to their friends’ check-ins and which factors contribute to users liking or commenting on them. We show how this can be used to improve the ranking of check-in stories, ensuring that users see only the most relevant updates from their friends and ensuring that businesses derive maximum value from check-ins at their establishments. Finally, we construct a model to predict friendship based on check-in count and show that cocheck-ins has a statistically significant effect on friendship.", "Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. 
Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10 to 30 of all human movement, while periodic behavior explains 50 to 70 . Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility.", "Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction.", "With the emerging of location-based social networks, study on the relationship between human mobility and social relationships becomes quantitatively achievable. Understanding it correctly could result in appealing applications, such as targeted advertising and friends recommendation. In this paper, we focus on mining users’ relationship based on their mobility information. More specifically, we propose to use distance between two users to predict whether they are friends. We first demonstrate that distance is a useful metric to separate friends and strangers. By considering location popularity together with distance, the difference between friends and strangers gets even larger. Next, we show that distance can be used to perform an effective link prediction. In addition, we discover that certain periods of the day are more social than others. In the end, we use a machine learning classifier to further improve the prediction performance. Extensive experiments on a Twitter dataset collected by ourselves show that our model outperforms the state-of-the-art solution by 30 .", "The pervasiveness of location-acquisition technologies (GPS, GSM networks, etc.) enable people to conveniently log the location histories they visited with spatio-temporal data. The increasing availability of large amounts of spatio-temporal data pertaining to an individual's trajectories has given rise to a variety of geographic information systems, and also brings us opportunities and challenges to automatically discover valuable knowledge from these trajectories. In this paper, we move towards this direction and aim to geographically mine the similarity between users based on their location histories. 
Such user similarity is significant to individuals, communities and businesses by helping them effectively retrieve the information with high relevance. A framework, referred to as hierarchical-graph-based similarity measurement (HGSM), is proposed for geographic information systems to consistently model each individual's location history and effectively measure the similarity among users. In this framework, we take into account both the sequence property of people's movement behaviors and the hierarchy property of geographic spaces. We evaluate this framework using the GPS data collected by 65 volunteers over a period of 6 months in the real world. As a result, HGSM outperforms related similarity measures, such as the cosine similarity and Pearson similarity measures.", "We propose a novel network-based approach for location estimation in social media that integrates evidence of the social tie strength between users for improved location estimation. Concretely, we propose a location estimator -- FriendlyLocation -- that leverages the relationship between the strength of the tie between a pair of users, and the distance between the pair. Based on an examination of over 100 million geo-encoded tweets and 73 million Twitter user profiles, we identify several factors such as the number of followers and how the users interact that can strongly reveal the distance between a pair of users. We use these factors to train a decision tree to distinguish between pairs of users who are likely to live nearby and pairs of users who are likely to live in different areas. We use the results of this decision tree as the input to a maximum likelihood estimator to predict a user's location. We find that this proposed method significantly improves the results of location estimation relative to a state-of-the-art technique. Our system reduces the average error distance for 80 of Twitter users from 40 miles to 21 miles using only information from the user's friends and friends-of-friends, which has great significance for augmenting traditional social media and enriching location-based services with more refined and accurate location estimates.", "Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of user-driven footprints (i.e., \"checkins\"). Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Levy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services.", "Geography and social relationships are inextricably intertwined; the people we interact with on a daily basis almost always live near us. 
As people spend more time online, data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise, allowing us to build reliable models to describe their interaction. These models have important implications in the design of location-based services, security intrusion detection, and social media supporting local communities. Using user-supplied address data and the network of associations between members of the Facebook social network, we can directly observe and measure the relationship between geography and friendship. Using these measurements, we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds IP-based geolocation. This algorithm is efficient and scalable, and could be run on a network containing hundreds of millions of users.", "The spatial structure of large-scale online social networks has been largely unaccessible due to the lack of available and accurate data about people’s location. However, with the recent surging popularity of location-based social services, data about the geographic position of users have been available for the first time, together with their online social connections. In this work we present a comprehensive study of the spatial properties of the social networks arising among users of three main popular online location-based services. We observe robust universal features across them: while all networks exhibit about 40 of links below 100 km, we further discover strong heterogeneity across users, with different characteristic spatial lengths of interaction across both their social ties and social triads. We provide evidence that mechanisms akin to gravity models may influence how these social connections are created over space. Our results constitute the first large-scale study to unravel the socio-spatial properties of online location-based social networks.", "The ubiquity of mobile devices and the popularity of location-based-services have generated, for the first time, rich datasets of people's location information at a very high fidelity. These location datasets can be used to study people's behavior - for example, social studies have shown that people, who are seen together frequently at the same place and at the same time, are most probably socially related. In this paper, we are interested in inferring these social connections by analyzing people's location information, which is useful in a variety of application domains from sales and marketing to intelligence analysis. In particular, we propose an entropy-based model (EBM) that not only infers social connections but also estimates the strength of social connections by analyzing people's co-occurrences in space and time. We examine two independent ways: diversity and weighted frequency, through which co-occurrences contribute to social strength. In addition, we take the characteristics of each location into consideration in order to compensate for cases where only limited location information is available. We conducted extensive sets of experiments with real-world datasets including both people's location data and their social connections, where we used the latter as the ground-truth to verify the results of applying our approach to the former. 
We show that our approach outperforms the competitors.", "We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals’ physical locations over time.", "This paper examines the location traces of 489 users of a location sharing social network for relationships between the users' mobility patterns and structural properties of their underlying social network. We introduce a novel set of location-based features for analyzing the social context of a geographic region, including location entropy, which measures the diversity of unique visitors of a location. Using these features, we provide a model for predicting friendship between two users by analyzing their location trails. Our model achieves significant gains over simpler models based only on direct properties of the co-location histories, such as the number of co-locations. We also show a positive relationship between the entropy of the locations the user visits and the number of social ties that user has in the network. We discuss how the offline mobility of users can have implications for both researchers and designers of online social networks." ] }
1408.1228
2951137876
Humans are social animals, they interact with different communities of friends to conduct different activities. The literature shows that human mobility is constrained by their social relations. In this paper, we investigate the social impact of a person's communities on his mobility, instead of all friends from his online social networks. This study can be particularly useful, as certain social behaviors are influenced by specific communities but not all friends. To achieve our goal, we first develop a measure to characterize a person's social diversity, which we term community entropy'. Through analysis of two real-life datasets, we demonstrate that a person's mobility is influenced only by a small fraction of his communities and the influence depends on the social contexts of the communities. We then exploit machine learning techniques to predict users' future movement based on their communities' information. Extensive experiments demonstrate the prediction's effectiveness.
The main difference between previous works and ours is the way friends are treated. We consider a user's friends at the community level, while most previous works treat all friends the same (except for @cite_39 , which introduces a 'social strength' based on common features but not on communities). Moreover, our location predictor needs no information about the user himself, only about his friends, to achieve promising results, especially for users with high community entropy. Another, minor, difference is the prediction target: we predict a user's specific future locations rather than his home location @cite_2 @cite_39 @cite_33 or a dynamic sequence of locations @cite_37 .
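For intuition, here is a small illustrative computation of a community-entropy-style measure: the Shannon entropy of how a user's friends are spread over his communities. The exact definition used in the paper may differ; the data and names below are assumptions.

```python
import math
from collections import Counter

def community_entropy(friend_to_community: dict) -> float:
    """Shannon entropy (bits) of the distribution of a user's friends over communities."""
    counts = Counter(friend_to_community.values())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A user whose friends span three communities is more socially diverse than one
# whose friends all belong to a single community.
diverse = {"a": 1, "b": 1, "c": 2, "d": 2, "e": 3, "f": 3}
focused = {"a": 1, "b": 1, "c": 1, "d": 1}
print(community_entropy(diverse), community_entropy(focused))  # ~1.58 vs 0.0
```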
{ "cite_N": [ "@cite_37", "@cite_2", "@cite_33", "@cite_39" ], "mid": [ "2069090820", "2168346693", "50479354", "2140535046" ], "abstract": [ "Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.", "Geography and social relationships are inextricably intertwined; the people we interact with on a daily basis almost always live near us. As people spend more time online, data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise, allowing us to build reliable models to describe their interaction. These models have important implications in the design of location-based services, security intrusion detection, and social media supporting local communities. Using user-supplied address data and the network of associations between members of the Facebook social network, we can directly observe and measure the relationship between geography and friendship. Using these measurements, we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds IP-based geolocation. This algorithm is efficient and scalable, and could be run on a network containing hundreds of millions of users.", "Social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby. However, the location of individuals in online social networking platforms is often unknown. Prior approaches have tried to infer individuals' locations from the content they produce online or their online relations, but often are limited by the available location-related data. We propose a new method for social networks that accurately infers locations for nearly all of individuals by spatially propagating location assignments through the social network, using only a small number of initial locations. In five experiments, we demonstrate the effectiveness in multiple social networking platforms, using both precise and noisy data to start the inference, and present heuristics for improving performance. 
In one experiment, we demonstrate the ability to infer the locations of a group of users who generate over 74 of the daily Twitter message volume with an estimated median location error of 10km. Our results open the possibility of gathering large quantities of location-annotated data from social media platforms.", "We propose a novel network-based approach for location estimation in social media that integrates evidence of the social tie strength between users for improved location estimation. Concretely, we propose a location estimator -- FriendlyLocation -- that leverages the relationship between the strength of the tie between a pair of users, and the distance between the pair. Based on an examination of over 100 million geo-encoded tweets and 73 million Twitter user profiles, we identify several factors such as the number of followers and how the users interact that can strongly reveal the distance between a pair of users. We use these factors to train a decision tree to distinguish between pairs of users who are likely to live nearby and pairs of users who are likely to live in different areas. We use the results of this decision tree as the input to a maximum likelihood estimator to predict a user's location. We find that this proposed method significantly improves the results of location estimation relative to a state-of-the-art technique. Our system reduces the average error distance for 80 of Twitter users from 40 miles to 21 miles using only information from the user's friends and friends-of-friends, which has great significance for augmenting traditional social media and enriching location-based services with more refined and accurate location estimates." ] }
1408.1228
2951137876
Humans are social animals, they interact with different communities of friends to conduct different activities. The literature shows that human mobility is constrained by their social relations. In this paper, we investigate the social impact of a person's communities on his mobility, instead of all friends from his online social networks. This study can be particularly useful, as certain social behaviors are influenced by specific communities but not all friends. To achieve our goal, we first develop a measure to characterize a person's social diversity, which we term community entropy'. Through analysis of two real-life datasets, we demonstrate that a person's mobility is influenced only by a small fraction of his communities and the influence depends on the social contexts of the communities. We then exploit machine learning techniques to predict users' future movement based on their communities' information. Extensive experiments demonstrate the prediction's effectiveness.
We focus on understanding users' mobility behavior through social network communities. The authors of @cite_14 tackle the inverse problem, i.e., they exploit users' mobility information to detect communities. They first weight the edges of a social network based on check-in information, then modify the network by removing all edges with small weights. Finally, a community detection algorithm (the Louvain method @cite_29 ) is run on the modified social graph to discover communities. Their experimental results show that this method discovers more meaningful communities, such as place-focused communities, than standard community detection.
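The sketch below follows the pipeline just described, but the details (co-visited venues as edge weights, the pruning threshold, the toy data) are my own assumptions, not taken from the cited paper. It requires networkx >= 2.8 for the Louvain implementation.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities  # networkx >= 2.8

def place_focused_communities(friendships, checkins, min_weight=1):
    """friendships: iterable of (u, v) pairs; checkins: dict user -> set of venues."""
    g = nx.Graph()
    for u, v in friendships:
        w = len(checkins.get(u, set()) & checkins.get(v, set()))  # co-visited venues
        if w >= min_weight:                                       # drop weak edges
            g.add_edge(u, v, weight=w)
    return louvain_communities(g, weight="weight", seed=0)

friends = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("d", "e")]
visits = {"a": {"cafe", "gym"}, "b": {"cafe"}, "c": {"cafe", "park"},
          "d": {"park"}, "e": {"office"}}
print(place_focused_communities(friends, visits))
```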
{ "cite_N": [ "@cite_29", "@cite_14" ], "mid": [ "2131681506", "2013581884" ], "abstract": [ "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.", "Discovering groups of online friends who go to the same physical places has numerous potential applications including privacy management, friend recommendation, and contact grouping as in Google+ circles. Until recently, little information was available about places visited by users of online social networking services, so community detection on the social graph could not take this into account. With the rise of services such as Foursquare, Gowalla, and Facebook Places, where users check in to named venues and share their location with their friends, we now have the right data to make this possible. In this work, we propose a way to extract place-focused communities from the social graph by annotating its edges with check-in information. Using traces from two online social networks with location sharing, we show that we can extract groups of friends who meet face-to-face, with many possible benefits for online social services." ] }
1408.1228
2951137876
Humans are social animals, they interact with different communities of friends to conduct different activities. The literature shows that human mobility is constrained by their social relations. In this paper, we investigate the social impact of a person's communities on his mobility, instead of all friends from his online social networks. This study can be particularly useful, as certain social behaviors are influenced by specific communities but not all friends. To achieve our goal, we first develop a measure to characterize a person's social diversity, which we term community entropy'. Through analysis of two real-life datasets, we demonstrate that a person's mobility is influenced only by a small fraction of his communities and the influence depends on the social contexts of the communities. We then exploit machine learning techniques to predict users' future movement based on their communities' information. Extensive experiments demonstrate the prediction's effectiveness.
More recently, the authors of @cite_38 analyze the mobility behaviors of pairs of friends and of groups of friends (communities), focusing on the difference between individual and group mobility. For example, they find that a user is more likely to meet a single friend at a place they have not visited before, while choosing a familiar place when meeting a group of friends.
{ "cite_N": [ "@cite_38" ], "mid": [ "2078865294" ], "abstract": [ "We analyze two large datasets from technological networks with location and social data: user location records from an online location-based social networking service, and anonymized telecommunications data from a European cellphone operator, in order to investigate the differences between individual and group behavior with respect to physical location. We discover agreements between the two datasets: firstly, that individuals are more likely to meet with one friend at a place they have not visited before, but tend to meet at familiar locations when with a larger group. We also find that groups of individuals are more likely to meet at places that their other friends have visited, and that the type of a place strongly affects the propensity for groups to meet there. These differences between group and solo mobility has potential technological applications, for example, in venue recommendation in location-based social networks." ] }
1408.1440
153642241
A large number of streaming applications use reliable transport protocols such as TCP to deliver content over the Internet. However, head-of-line blocking due to packet loss recovery can often result in unwanted behavior and poor application layer performance. Transport layer coding can help mitigate this issue by helping to recover from lost packets without waiting for retransmissions. We consider the use of an on-line network code that inserts coded packets at strategic locations within the underlying packet stream. If retransmissions are necessary, additional coding packets are transmitted to ensure the receiver's ability to decode. An analysis of this scheme is provided that helps determine both the expected in-order packet delivery delay and its variance. Numerical results are then used to determine when and how many coded packets should be inserted into the packet stream, in addition to determining the trade-offs between reducing the in-order delay and the achievable rate. The analytical results are finally compared with experimental results to provide insight into how to minimize the delay of existing transport layer protocols.
A resurgence of interest in coding at the transport layer has taken place to help overcome TCP's poor performance in wireless networks. Sundararajan et al. @cite_24 first proposed TCP with Network Coding (TCP NC). They insert a coding shim between the TCP and IP layers that introduces redundancy into the network in order to spoof TCP into believing the network is error-free. Loss-Tolerant TCP (LT-TCP) @cite_21 @cite_15 @cite_8 is another approach, using Reed-Solomon (RS) codes and explicit congestion notification (ECN) to overcome random packet erasures and improve performance. In addition, Coded TCP (CTCP) @cite_11 uses RLNC @cite_6 and a modified additive-increase, multiplicative-decrease (AIMD) algorithm to maintain high throughput in networks with high packet erasure rates. While these proposals have shown that coding can help increase throughput, especially in challenged networks, only anecdotal evidence has been provided of the benefits for time-sensitive applications.
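To give a feel for the random-linear-coding idea behind these schemes, here is a toy encoder over GF(2); real implementations such as CTCP typically work over a larger field, e.g. GF(2^8), and the packet values and names below are purely illustrative. Each coded packet is an XOR of a random subset of the packets currently in the window, and any set of received packets whose coefficient vectors have full rank lets the receiver decode by elimination.

```python
import random

def encode(window, rng):
    """Return (coefficient vector, payload) of one random linear combination over GF(2)."""
    coeffs = [rng.randint(0, 1) for _ in window]
    if not any(coeffs):
        coeffs[rng.randrange(len(window))] = 1      # avoid the useless all-zero combination
    payload = 0
    for c, pkt in zip(coeffs, window):
        if c:
            payload ^= pkt
    return coeffs, payload

rng = random.Random(1)
window = [0x10, 0x22, 0x37]                          # three original packets in the window
coded = [encode(window, rng) for _ in range(4)]      # send some redundancy
for coeffs, payload in coded:
    print(coeffs, hex(payload))
```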
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_6", "@cite_24", "@cite_15", "@cite_11" ], "mid": [ "1993520373", "2042955355", "2048235391", "2134795104", "1590243065", "1978227705" ], "abstract": [ "TCP is the ubiquitous transport protocol in the Internet. However, in a wireless ad-hoc environment where links are unreliable, TCP causes a number of performance issues. The key reason behind this is that TCP considers all packet losses to be due to congestion and reduces its send rate, which is not necessarily appropriate in a lossy ad-hoc environment. In prior work, we have designed Loss Tolerant TCP (LT-TCP) [1], [2], which extends TCP towards making it more efficient and robust in a wireless ad-hoc environment. LT-TCP uses Explicit Congestion Notification (ECN) and forward error correction (FEC) to mitigate the effects of random packet losses. The protocol uses these mechanisms to both distinguish between congestion and other losses, and recover from losses due to lossy wireless links. In this paper, we describe the implementation of LT-TCP in the Linux operating system kernel, and present and analyze initial performance results for the protocol on lossy links. Results show that LT-TCP provides much improved performance over TCP over lossy connections that model ad-hoc networks. In addition, it shows that performance of LT-TCP is nearly linear with loss rate, whereas TCP suffers disproportionately as loss rate increases. These promising implementation results point to further experimentation for LT-TCP, including a push towards Internet standards bodies.", "With increasing dependence on wireless networks as an integral part of the communication infrastructure, it is critical that data link and transport layer protocols perform reasonably under potentially severe lossy conditions. A key strategy is to use hybrid ARQ (HARQ) with erasure codes (a.k.a. forward error correction or FEC) sent both proactively and reactively in response to feedback about dynamic loss statistics. A challenge is to design HARQ to satisfy multiple objectives such as high goodput, low latency and negligible residual loss rate. In this paper, we analyze the performance benefits and trade-offs of these reliability strategies (hybrid ARQ+FEC). We derive expressions for the expected good-put (and overhead in terms of FEC wastage), latency, and residual loss for a given raw erasure loss process (e.g. uniform and bursty loss models). We show how the analysis can be used to explain and provide specialized design guidance for link-layer HARQ that is subject to tight delay constraints and a recently designed transport layer HARQ scheme (called loss-tolerant TCP). We validate our analysis by comparing the predictions with values obtained from simulations performed on the link and transport layer HARQ strategies with ns-2. We believe that such an analysis could also have value for other adaptive protocols using network coding and incremental redundancy techniques.", "We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. 
Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks", "The theory of network coding promises significant benefits in network performance, especially in lossy networks and in multicast and multipath scenarios. To realize these benefits in practice, we need to understand how coding across packets interacts with the acknowledgment (ACK)-based flow control mechanism that forms a central part of today's Internet protocols such as transmission control protocol (TCP). Current approaches such as rateless codes and batch-based coding are not compatible with TCP's retransmission and sliding-window mechanisms. In this paper, we propose a new mechanism called TCP NC that incorporates network coding into TCP with only minor changes to the protocol stack, thereby allowing incremental deployment. In our scheme, the source transmits random linear combinations of packets currently in the congestion window. At the heart of our scheme is a new interpretation of ACKs-the sink acknowledges every degree of freedom (i.e., a linear combination that reveals one unit of new information) even if it does not reveal an original packet immediately. Thus, our new TCP ACK rule takes into account the network coding operations in the lower layer and enables a TCP-compatible sliding-window approach to network coding. Coding essentially masks losses from the congestion control algorithm and allows TCP NC to react smoothly to losses, resulting in a novel and effective approach for congestion control over lossy networks such as wireless networks. An important feature of our solution is that it allows intermediate nodes to perform re-encoding of packets, which is known to provide significant throughput gains in lossy networks and multicast scenarios. Simulations show that our scheme, with or without re-encoding inside the network, achieves much higher throughput compared to TCP over lossy wireless links. We present a real-world implementation of this protocol that addresses the practical aspects of incorporating network coding and decoding with TCP's window management mechanism. We work with TCP-Reno, which is a widespread and practical variant of TCP. Our implementation significantly advances the goal of designing a deployable, general, TCP-compatible protocol that provides the benefits of network coding.", "TCP performance over wireless links suffers substantially when packet error rates increase beyond about 1 - 5 . This paper proposes end-end mechanisms to improve TCP performance over lossy networks with potentially much higher packet loss rates. Our proposed scheme separates congestion indications from wireless packet erasures by exploiting ECN. Timeout effects due to packet erasures are combated using a dynamic and adaptive Forward Error Correction (FEC) scheme that includes adaptation of TCP’s Maximum Segment Size. Proactive and reactive FEC overhead enhance TCP SACK to protect original segments and retransmissions respectively. 
Dynamically changing the MSS tailors the number of segments in the window for optimal performance. SACK and timeout mechanisms are used as a last resort. ns-2 simulations show that our scheme substantially improves TCP performance even for packet loss rates up to 30%, thus extending the dynamic range and performance of TCP over networks with lossy (e.g., wireless) links.", "United States. Dept. of Defense. Assistant Secretary of Defense for Research & Engineering (United States. Air Force Contract FA8721-05-C-0002)" ] }
1408.1440
153642241
A large number of streaming applications use reliable transport protocols such as TCP to deliver content over the Internet. However, head-of-line blocking due to packet loss recovery can often result in unwanted behavior and poor application layer performance. Transport layer coding can help mitigate this issue by helping to recover from lost packets without waiting for retransmissions. We consider the use of an on-line network code that inserts coded packets at strategic locations within the underlying packet stream. If retransmissions are necessary, additional coding packets are transmitted to ensure the receiver's ability to decode. An analysis of this scheme is provided that helps determine both the expected in-order packet delivery delay and its variance. Numerical results are then used to determine when and how many coded packets should be inserted into the packet stream, in addition to determining the trade-offs between reducing the in-order delay and the achievable rate. The analytical results are finally compared with experimental results to provide insight into how to minimize the delay of existing transport layer protocols.
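To make the delay mechanism concrete, the following is a minimal Monte Carlo sketch (Python) of the kind of systematic stream described above: one repair packet is inserted after every block of k information packets, a single erasure in a block is recovered when that repair packet arrives, and blocks with more erasures fall back to a retransmission one RTT later. The block size k, erasure rate eps, and rtt are illustrative assumptions, and blocks are treated independently, so this is a toy model rather than the analysis carried out in the paper.

```python
import random

def simulate_in_order_delay(n_blocks=5_000, k=10, eps=0.05, rtt=20, seed=1):
    """Toy model of a systematic stream with one repair packet per block of k.
    A single erasure in a block is recovered when the repair packet arrives;
    two or more erasures wait an extra RTT for retransmission.  Blocks are
    treated independently, which ignores cross-block head-of-line blocking."""
    rng = random.Random(seed)
    delays = []
    for b in range(n_blocks):
        start = b * (k + 1)                    # k info slots followed by 1 repair slot
        slots = [start + i for i in range(k)]  # transmission slot of each info packet
        lost = [s for s in slots if rng.random() < eps]
        repair_slot = start + k
        if not lost:
            delays.extend(0 for _ in slots)    # everything delivered in order at once
            continue
        recovery = repair_slot if len(lost) == 1 else repair_slot + rtt
        first_loss = min(lost)
        for s in slots:
            # packets behind the first loss are held until the loss is recovered
            delays.append(0 if s < first_loss else recovery - s)
    return sum(delays) / len(delays)

for k in (5, 10, 20):
    print(k, round(simulate_in_order_delay(k=k), 2))
```

Sweeping k in this toy model already exhibits the qualitative trade-off discussed above: smaller blocks reduce the in-order delay but spend a larger fraction of slots on repair packets.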
On the other hand, a large body of research investigating the delay of coding in different settings has taken place. In general, most of these works can be summarized by Figure . The coding delay of chunked and overlapping chunked codes ( @cite_5 ) (shown in Figure (a)), network coding in time-division duplexing (TDD) channels ( @cite_17 @cite_10 @cite_0 ), and network coding in line networks where coding also occurs at intermediate nodes ( @cite_7 ) is well understood. In addition, a non-asymptotic analysis of the delay distributions of random linear network coding (RLNC) ( @cite_20 ) and of various multicast scenarios ( @cite_4 @cite_13 @cite_9 ) using a variant of the scheme in Figure (b) has also been carried out. Furthermore, research on the in-order packet delay is provided in @cite_19 and @cite_23 for uncoded systems, while @cite_14 , @cite_3 , and @cite_1 consider the in-order packet delay for non-systematic coding schemes similar to the one shown in Figure (b). However, these non-systematic schemes may not be the optimum strategy in networks or communication channels with a long @math .
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_23", "@cite_5", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "2116661213", "1986127338", "2141789914", "2127832277", "2032843418", "2112250850", "2089522821", "2159384192", "1569052096", "2140560546", "", "2131646038", "2101605991", "1514396891" ], "abstract": [ "This paper analyzes the gains in delay performance resulting from network coding. We consider a model of file transmission to multiple receivers from a single base station. Using this model, we show that gains in delay performance from network coding with or without channel side information can be substantial compared to conventional scheduling methods for downlink transmission.", "Understanding the delay behavior of network coding with a fixed number of receivers, small field sizes and a limited number of encoded symbols is a key step towards its applicability in real-time communication systems with stringent delay constraints. Previous results are typically asymptotic in nature and focus mainly on the average delay performance. Seeking to characterize the complete delay distribution of random linear network coding, we present a brute-force methodology that is feasible for up to four receivers, limited field and generation sizes. The key idea is to fix the pattern of packet erasures and to try out all possible encodings for various system and channel parameters. Our findings, which are valid for both decoding delay and ordered-delivery delay, can be used to optimize network coding protocols with respect not only to their average but also to their worst-case performance.", "We consider the problem of minimizing delay when broadcasting over erasure channels with feedback. A sender wishes to communicate the same set of µ messages to several receivers over separate erasure channels. The sender can broadcast a single message or a combination (encoding) of messages at each timestep. Receivers provide feedback as to whether the transmission was received. If at some time step a receiver cannot identify a new message, delay is incurred. Our notion of delay is motivated by real-time applications that request progressively refined input, such as the successive refinement of an image encoded using multiple description coding. Our setup is novel because it combines coding techniques with feedback information to the end of minimizing delay. It allows Θ(µ) benefits as compared to previous approaches for offline algorithms, while feedback allows online algorithms to achieve smaller delay than online algorithms without feedback. Our main complexity results are that the offline minimization problem is NP-hard when the sender only schedules single messages and that the general problem remains NP-hard even when coding is allowed. However we show that coding does offer delay and complexity gains over scheduling. We also discuss online heuristics and evaluate their performance through simulations.", "We analyze a simple network where a source and a receiver are connected by a line of erasure channels of different reliabilities. Recent prior work has shown that random linear network coding can achieve the min-cut capacity and therefore the asymptotic rate is determined by the worst link of the line network. In this paper we investigate the delay for transmitting a batch of packets, which is a function of all the erasure probabilities and the number of packets in the batch. 
We show a monotonicity result on the delay function and derive simple expressions which characterize the expected delay behavior of line networks. Further, we use a martingale bounded differences argument to show that the actual delay is tightly concentrated around its expectation.", "In an unreliable single-hop broadcast network setting, we investigate the throughput and decoding-delay performance of random linear network coding as a function of the coding window size and the network size. Our model consists of a source transmitting packets of a single flow to a set of n users over independent time-correlated erasure channels. The source performs random linear network coding (RLNC) over k (coding window size) packets and broadcasts them to the users. We note that the broadcast throughput of RLNC must vanish with increasing n, for any fixed k. Hence, in contrast to other works in the literature, we investigate how the coding window size k must scale for increasing n. Our analysis reveals that the coding window size of Θ(ln(n)) represents a phase transition rate, below which the throughput converges to zero, and above which, it converges to the broadcast capacity. Further, we characterize the asymptotic distribution of decoding delay and provide approximate expressions for the mean and variance of decoding delay for the scaling regime of k=ω(ln(n)). These asymptotic expressions reveal the impact of channel correlations on the throughput and delay performance of RLNC. We also show that how our analysis can be extended to other rateless block coding schemes such as the LT codes. Finally, we comment on the extension of our results to the cases of dependent channels across users and asymmetric channel model.", "Throughput and per-packet delay can present strong trade-offs that are important in the cases of delay sensitive applications. We investigate such trade-offs using a random linear network coding scheme for one or more receivers in single hop wireless packet erasure broadcast channels. We capture the delay sensitivities across different types of network applications using a class of delay metrics based on the norms of packet arrival times. With these delay metrics, we establish a unified framework to characterize the rate and delay requirements of applications and to optimize system parameters. In the single receiver case, we demonstrate the trade-off between average packet delay, which we view as the inverse of throughput, and maximum inorder inter-arrival delay for various system parameters. For a single broadcast channel with multiple receivers having different delay constraints and feedback delays, we jointly optimize the coding parameters and time-division scheduling parameters at the transmitter. We formulate the optimization problem as a Generalized Geometric Program (GGP). This approach allows the transmitter to adjust adaptively the coding and scheduling parameters for efficient allocation of network resources under varying delay constraints. In the case where the receivers are served by multiple non-interfering wireless broadcast channels, the same optimization problem is formulated as a Signomial Program, which is NP-hard in general. We provide approximation methods using successive formulation of geometric programs and show the convergence of approximations.", "We propose a new feedback-based adaptive coding scheme for a packet erasure broadcast channel. The main performance metric of interest is the delay. We consider two types of delay - decoding delay and delivery delay. 
Decoding delay is the time difference between the instant when the packet is decoded at an arbitrary receiver and the instant when it arrived at the sender. Delivery delay also includes the period when a decoded packet waits in a resequencing buffer at the receiver until all previous packets have also been decoded. This notion of delay is motivated by applications that accept packets only in order. Our coding scheme has the innovation guarantee property and is hence throughput optimal. It also allows efficient queue management. It uses the simple strategy of mixing only the oldest undecoded packet of each receiver, and therefore extends to any number of receivers. We conjecture that this scheme achieves the asymptotically optimal delivery (and hence decoding) delay. The asymptotic behavior is studied in the limit as the load factor of the system approaches capacity. This conjecture is verified through simulations.", "A new random linear network coding scheme for reliable communications for time division duplexing channels is proposed. The setup assumes a packet erasure channel and that nodes cannot transmit and receive information simultaneously. The sender transmits coded data packets back-to-back before stopping to wait for the receiver to acknowledge (ACK) the number of degrees of freedom, if any, that are required to decode correctly the information. We provide an analysis of this problem to show that there is an optimal number of coded data packets, in terms of mean completion time, to be sent before stopping to listen. This number depends on the latency, probabilities of packet erasure and ACK erasure, and the number of degrees of freedom that the receiver requires to decode the data. This scheme is optimal in terms of the mean time to complete the transmission of a fixed number of data packets. We show that its performance is very close to that of a full duplex system, while transmitting a different number of coded packets can cause large degradation in performance, especially if latency is high. Also, we study the throughput performance of our scheme and compare it to existing half-duplex Go-back-N and Selective Repeat ARQ schemes. Numerical results, obtained for different latencies, show that our scheme has similar performance to the Selective Repeat in most cases and considerable performance gain when latency and packet error probability is high.", "Protocols such as TCP require packets to be accepted (i.e., delivered to the receiving application) in the order they are transmitted at the sender. Packets are sometimes mis-ordered in the network. In order to deliver the arrived packets to the application in sequence, the receiver's transport layer needs to temporarily buffer out-of-order packets and resequence them as more packets arrive. Even when the application can consume the packets infinitely fast, the packets may still be delayed for resequencing. In this paper, we model packet mis-ordering by adding an IID random propagation delay to each packet and analyze the required buffer size for packet resequencing and the resequencing delay for an average packet. We demonstrate that these two quantities can be significant and show how they scale with the network bandwidth.", "We consider streaming over a blockage channel with long feedback delay, as arises in, e.g., real-time satellite communication from a comm-on-the-move (COTM) terminal. 
For this problem, we introduce a definition of delay that captures the real-time nature of the problem, which we show grows at least as fast as O(log(k)) for memoryless channels, where k corresponds to the number of packets in the transmission. Moreover, a tradeoff exists between this delay and a natural notion of throughput we introduce to capture the bandwidth requirements of the communication. We develop and analyze an efficient \"multi-burst\" transmission (MBT) protocol for achieving good delay-throughput tradeoffs within this framework, which we show to be robust and near-optimal within the class of retransmission protocols with fixed schedules. The MBT protocol can be augmented with coding for additional performance gains. Simulations validate the new protocols, including when peak bandwidth and delay constraints are imposed.", "", "We study an online random linear network coding approach for time division duplexing (TDD) channels under Poisson arrivals. We model the system as a bulk-service queue with variable bulk size and with feedback, i.e., when a set of packets are serviced at a given time, they might be reintroduced to the queue to form part of the next service batch. We show that there is an optimal number of coded data packets that the sender should transmit back-to-back before stopping to wait for an acknowledgement from the receiver. This number depends on the latency, probability of packet erasure, degrees of freedom at the receiver, the size of the coding window, and the arrival rate of the Poisson process. Random network coding is performed across a moving window of packets that depends on the packets in the queue, design constraints on the window size, and the feedback sent from the receiver. We study the mean time between generating a packet at the source and it being seen\", but not necessarily decoded, at the receiver. We also analyze the mean time between a decoding event and the next, defined as the decoding of all the packets that have been previously seen\" and those packets involved in the current window of packets. Inherently, a decoding event implies an in-order decoding of a batch of data packets. We present numerical results illustrating the trade-off between mean delay and mean time between decoding events.", "We present an expression for the delay distribution of Random Linear Network Coding over an erasure channel with a given loss probability. In contrast with previous contributions, our analysis is non- asymptotic in the sense that it is valid for any field size and any number of symbols. The results confirm that GF(16) already offers near-optimal decoding delay, whereas smaller field sizes (e.g. requiring only XOR operations) induce heavy tails in the delay distribution. A comparison with Automatic Repeat reQuest (ARQ) techniques (with perfect feedback) is also included.", "We study random linear network coding for broadcasting in time division duplexing channels. We assume a packet erasure channel with nodes that cannot transmit and receive information simultaneously. The sender transmits coded data packets back-to-back before stopping to wait for the receivers to acknowledge the number of degrees of freedom, if any, that are required to decode correctly the information. We study the mean time to complete the transmission of a block of packets to all receivers. We also present a bound on the number of stops to wait for acknowledgement in order to complete transmission with probability at least 1 − e, for any e ≫ 0. 
We present analysis and numerical results showing that our scheme outperforms optimal scheduling policies for broadcast, in terms of the mean completion time. We provide a simple heuristic to compute the number of coded packets to be sent before stopping that achieves close to optimal performance with the advantage of a considerable reduction in the search time." ] }
1408.1440
153642241
A large number of streaming applications use reliable transport protocols such as TCP to deliver content over the Internet. However, head-of-line blocking due to packet loss recovery can often result in unwanted behavior and poor application layer performance. Transport layer coding can help mitigate this issue by helping to recover from lost packets without waiting for retransmissions. We consider the use of an on-line network code that inserts coded packets at strategic locations within the underlying packet stream. If retransmissions are necessary, additional coding packets are transmitted to ensure the receiver's ability to decode. An analysis of this scheme is provided that helps determine both the expected in-order packet delivery delay and its variance. Numerical results are then used to determine when and how many coded packets should be inserted into the packet stream, in addition to determining the trade-offs between reducing the in-order delay and the achievable rate. The analytical results are finally compared with experimental results to provide insight into how to minimize the delay of existing transport layer protocols.
Possibly the closest work to ours is that done by Joshi et al. @cite_18 @cite_16 and Tömösközi et al. @cite_2 . Bounds on the expected in-order delay and a study of the rate-delay trade-offs using a time-invariant coding scheme are provided in @cite_18 and @cite_16 , where feedback is assumed to be instantaneous, provided in a block-wise manner, or not available at all. A generalized example of their coding scheme is shown in Figure (c). While their analysis provides insight into the benefits of coding for streaming applications, their model is similar to a half-duplex communication channel where the sender transmits a finite block of information and then waits for the receiver to send feedback. Unfortunately, it is unclear if their analysis can be extended to full-duplex channels or to models where feedback does not provide complete information about the receiver's state-space. Finally, the work in @cite_2 considers the in-order delay of online network coding where feedback determines the source packets used to generate coded packets. However, it only provides experimental results and does not attempt an analysis.
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_2" ], "mid": [ "2015912927", "1834697222", "2213095347" ], "abstract": [ "We consider the problem of minimizing playback delay in streaming over a packet erasure channel with fixed bandwidth. When packets have to be played in order, the expected delay inherently grows with time. We analyze two cases, namely no feedback and instantaneous feedback. We find that in both cases the delay grows logarithmically with the time elapsed since the start of transmission, and we evaluate the growth constant, i.e. the pre-log term, as a function of the transmission bandwidth (relative to the source bandwidth). The growth constant with feedback is strictly better that the one without, but they have the same asymptotic value in the limit of infinite bandwidth.", "Unlike traditional file transfer where only total delay matters, streaming applications impose delay constraints on each packet and require them to be in order. To achieve fast in-order packet decoding, we have to compromise on the throughput. We study this trade-off between throughput and in-order decoding delay, and in particular how it is affected by the frequency of block-wise feedback, whereby the source receives full channel state feedback at periodic intervals. Our analysis shows that for the same throughput, having more frequent feedback significantly reduces the in-order decoding delay. For any given block-wise feedback delay, we present a spectrum of coding schemes that span different throughput-delay tradeoffs. One can choose an appropriate coding scheme from these, depending upon the delaysensitivity and bandwidth limitations of the application.", "Video surveillance and similar real-time applications on wireless networks require increased reliability and high performance of the underlying transmission layer. Classical solutions, such as Reed-Solomon codes, increase the reliability, but typically have the negative side-effect of additional overall delays due to processing overheads. This paper describes the delay reduction achieved through online network coding approaches with a limit on the number of packets to be mixed before decoding and a systematic encoding structure. We use the inorder per packet delay as our key performance metric. This metric captures the elapsed time between (network) encoding RTP packets and completely decoding the packets in-order on the receiver side. Our solutions are implemented and evaluated on a point-to-point link between a Raspberry Pi device and a network (de)coding enabled software running on a regular PC. We find that our sliding window approach (more feedback) outperforms all other tested mechanisms in terms of per-+packet delay including the Reed-Solomon encoding employing the systematic approach, a random linear network coding approach, and our proposed on-the-fly network coding approach (which relies on less feedback).We show gains in order of magnitudes between our sliding window and the other approaches when we manage the redundancy transmission adaptively. This low per-packet delay and the inherent reliability of our schemes make these solutions particularly suitable for real-time multimedia delivery in contrast to other classical and network coding strategies." ] }
1408.0677
104031969
Analysis of high dimensional data is a common task. Often, small multiples are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot matrix. Associating data points between different views can be difficult though, as the points are not fixed. Other times, dimensional reduction techniques are employed to summarize the whole dataset in one image, but individual dimensions are lost in this view. In this paper, we present a means of augmenting a dimensional reduction plot with isocontours to reintroduce the original dimensions. By applying this to each dimension in the original data, we create multiple views where the points are consistent, which facilitates their comparison. Our approach employs a combination of a novel, graph-based projection technique with a GPU accelerated implementation of moving least squares to interpolate space between the points. We also present evaluations of this approach both with a case study and with a user study.
Multidimensional data is one of the fundamental classes of data @cite_11 . As such, there is a large corpus of existing work on the visualization of multidimensional data @cite_37 @cite_3 @cite_5 @cite_17 . This includes glyph-based approaches @cite_29 @cite_15 @cite_4 , pixel-based techniques @cite_1 , and stacked plots @cite_42 , along with more geometric techniques such as scatterplots @cite_20 @cite_0 and parallel coordinates @cite_26 . Scatterplot matrices @cite_22 @cite_7 are particularly common, as they show the relationships between all dimensions simultaneously. However, they scale poorly with the number of dimensions, as it takes @math views to represent all the dimensions, so each individual view will be quite small. Scagnostics can be used to single out plots of particular interest, but this would hide most of the dimensions; moreover, associating points across plots would become even more difficult, necessitating brushing and linking techniques.
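As a quick illustration of why scatterplot matrices scale poorly, the following matplotlib sketch lays out all pairwise panels for a small synthetic dataset; the number of distinct off-diagonal pairs grows quadratically with the dimension count d, so each panel shrinks accordingly. The data, dimension count, and figure size are arbitrary assumptions for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
d, n = 6, 200
data = rng.normal(size=(n, d))            # synthetic multidimensional data

fig, axes = plt.subplots(d, d, figsize=(8, 8))
for i in range(d):
    for j in range(d):
        ax = axes[i, j]
        if i == j:
            ax.hist(data[:, i], bins=15)  # diagonal: per-dimension distribution
        else:
            ax.scatter(data[:, j], data[:, i], s=2)
        ax.set_xticks([]); ax.set_yticks([])

print("off-diagonal panels:", d * (d - 1) // 2, "unique dimension pairs")
plt.tight_layout()
plt.show()
```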
{ "cite_N": [ "@cite_37", "@cite_11", "@cite_4", "@cite_26", "@cite_22", "@cite_7", "@cite_29", "@cite_42", "@cite_1", "@cite_3", "@cite_0", "@cite_5", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "", "2138199375", "1532469653", "", "2025320861", "", "2158307450", "", "2121155062", "", "", "2151530263", "1977554562", "2061917106", "2319794630" ], "abstract": [ "", "A useful starting point for designing advanced graphical user interfaces is the visual information seeking Mantra: overview first, zoom and filter, then details on demand. But this is only a starting point in trying to understand the rich and varied set of information visualizations that have been proposed in recent years. The paper offers a task by data type taxonomy with seven data types (one, two, three dimensional data, temporal and multi dimensional data, and tree and network data) and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extracts).", "", "", "Introduction. Portraying the distribution of a set of data. Comparing data distributions. Studying two-dimensional data. Studying multi-dimensional data. Plotting multivariate data. Assessing distributional assumptions data. Developing and assessing regression models. General principles and techniques. References. Appendix: tables of data sets. Index.", "", "Abstract A novel method of representing multivariate data is presented. Each point in k-dimensional space, k≤18, is represented by a cartoon of a face whose features, such as length of nose and curvature of mouth, correspond to components of the point. Thus every multivariate observation is visualized as a computer-drawn face. This presentation makes it easy for the human mind to grasp many of the essential regularities and irregularities present in the data. Other graphical representations are described briefly.", "", "Discusses how the VisDB system supports the query specification process by representing the result visually. The main idea behind the system stems from the view of relational database tables as sets of multidimensional data where the number of attributes corresponds to the number of dimensions. In such a view, it is often unclear. In this system, each display pixel represents one database item. Pixels are arranged and colored to indicate the item's relevance to a user query and to give a visual impression of the resulting data set. >", "", "", "Never before in history has data been generated at such high volumes as it is today. Exploring and analyzing the vast volumes of data is becoming increasingly difficult. Information visualization and visual data mining can help to deal with the flood of information. The advantage of visual data exploration is that the user is directly involved in the data mining process. There are a large number of information visualization techniques which have been developed over the last decade to support the exploration of large data sets. In this paper, we propose a classification of information visualization and visual data mining techniques which is based on the data type to be visualized, the visualization technique, and the interaction and distortion technique. We exemplify the classification using a few examples, most of them referring to techniques and systems presented in this special section.", "Abstract A number of points in k dimensions are displayed by associating with each point a symbol: a drawing of a tree or a castle. 
All symbols have the same structure derived from a hierarchical clustering algorithm applied to the k variables (dimensions) over all points, but their parts are coded according to the coordinates of each individual point. Trees and castles show general size effects, the change of whole clusters of variables from point to point, trends, and outliers. They are especially appropriate for evaluating the clustering of variables and for observing clusters of points. Their major advantage over earlier attempts to represent multivariate observations (such as profiles, stars, faces, boxes, and Andrews's curves) lies in their matching of relationships between variables to relationships between features of the representing symbol. Several examples are given, including one with 48 variables.", "Abstract The scatterplot is one of our most powerful tools for data analysis. Still, we can add graphical information to scatterplots to make them considerably more powerful. These graphical additions, faces of sorts, can enhance capabilities that scatterplots already have or can add whole new capabilities that faceless scatterplots do not have at all. The additions we discuss here—some new and some old—are (a) sunflowers, (b) category codes, (c) point cloud sizings, (d) smoothings for the dependence of y on x (middle smoothings, spread smoothings, and upper and lower smoothings), and (e) smoothings for the bivariate distribution of x and y (pairs of middle smoothings, sum-difference smoothings, scale-ratio smoothings, and polar smoothings). The development of these additions is based in part on a number of graphical principles that can be applied to the development of statistical graphics in general.", "" ] }
1408.0677
104031969
Analysis of high dimensional data is a common task. Often, small multiples are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot matrix. Associating data points between different views can be difficult though, as the points are not fixed. Other times, dimensional reduction techniques are employed to summarize the whole dataset in one image, but individual dimensions are lost in this view. In this paper, we present a means of augmenting a dimensional reduction plot with isocontours to reintroduce the original dimensions. By applying this to each dimension in the original data, we create multiple views where the points are consistent, which facilitates their comparison. Our approach employs a combination of a novel, graph-based projection technique with a GPU accelerated implementation of moving least squares to interpolate space between the points. We also present evaluations of this approach both with a case study and with a user study.
Self-Organizing Maps (SOMs) have also been used to great effect for multidimensional data, where the points are organized in a space-filling manner and can then be colored according to the component planes @cite_31 . However, as with simply coloring the points in a projection-based method, the human eye is not precise enough to discern point values from color alone.
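For readers unfamiliar with component planes, here is a minimal NumPy sketch of a rectangular SOM (not the specific visualization methods surveyed in @cite_31): after training, slicing the weight grid along one input dimension yields that dimension's component plane, which can then be used to color the map. The grid size, learning-rate and neighborhood schedules, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal rectangular SOM; weights[i, j] is the prototype of map cell (i, j)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit: the cell whose prototype is closest to the sample
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        lr = lr0 * (1 - t / iters)
        sigma = max(sigma0 * (1 - t / iters), 0.5)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        influence = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * influence * (x - weights)   # pull the neighborhood toward x
    return weights

data = np.random.default_rng(1).random((500, 4))    # synthetic 4-D data
som = train_som(data)
component_plane_0 = som[:, :, 0]   # values of dimension 0 across the map, ready to color
```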
{ "cite_N": [ "@cite_31" ], "mid": [ "2088709281" ], "abstract": [ "The self-organizing map SOM is an efficient tool for visualization of multidimensional numerical data. In this paper, an overview and categorization of both old and new methods for the visualization of SOM is presented. The purpose is to give an idea of what kind of information can be acquired from different presentations and how the SOM can best be utilized in exploratory data visualization. Most of the presented methods can also be applied in the more general case of first making a vector quantization e.g. k-means and then a vector projection e.g. Sammon's mapping." ] }
1408.0677
104031969
Analysis of high dimensional data is a common task. Often, small multiples are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot matrix. Associating data points between different views can be difficult though, as the points are not fixed. Other times, dimensional reduction techniques are employed to summarize the whole dataset in one image, but individual dimensions are lost in this view. In this paper, we present a means of augmenting a dimensional reduction plot with isocontours to reintroduce the original dimensions. By applying this to each dimension in the original data, we create multiple views where the points are consistent, which facilitates their comparison. Our approach employs a combination of a novel, graph-based projection technique with a GPU accelerated implementation of moving least squares to interpolate space between the points. We also present evaluations of this approach both with a case study and with a user study.
Many data projection and representation techniques can result in a high degree of overplotting, where many data points map to the same area of the screen. Data reduction techniques, such as sampling @cite_44 @cite_38 or clustering @cite_33 @cite_8 , can be used to address this issue by creating an abstracted overview, but in doing so they do not show the entire dataset. In order to explore the entire dataset from such an overview, semantic zooming such as fisheye lensing @cite_21 with user interaction is necessary. Several works @cite_27 @cite_10 @cite_13 @cite_16 employ such fisheye lensing. Many of them do so purely geometrically, without the need for semantic abstraction; however, they do not account for points that are precisely collocated. @cite_6 address this issue by displacing overplotted points to the nearest free pixel. While this method does allow all points to be drawn with minimal overplotting, there is no visual indication that this displacement has occurred. Also, all of these methods use rectilinear distortion, which substantially limits their flexibility.
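The displacement idea can be sketched as follows (a simplified stand-in, not the exact algorithm of @cite_6): points are snapped to an integer pixel grid, and when a pixel is already occupied, a breadth-first search over the 4-neighborhood finds the nearest free pixel. The grid size and the Manhattan-distance notion of "nearest" are simplifying assumptions.

```python
from collections import deque

def deoverlap(points, width, height):
    """Assign each point an integer pixel; occupied pixels push later points
    to the nearest free pixel found by BFS over the 4-neighbourhood.
    (If the grid fills up completely, remaining points are simply dropped.)"""
    occupied = set()
    placed = []
    for x, y in points:
        start = (min(max(int(round(x)), 0), width - 1),
                 min(max(int(round(y)), 0), height - 1))
        queue, seen = deque([start]), {start}
        while queue:
            px, py = queue.popleft()
            if (px, py) not in occupied:
                occupied.add((px, py))
                placed.append((px, py))
                break
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (px + dx, py + dy)
                if nxt in seen or not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                    continue
                seen.add(nxt)
                queue.append(nxt)
    return placed

# three nearly collocated points end up on three distinct pixels
print(deoverlap([(2.2, 3.1), (2.4, 2.9), (2.0, 3.0)], width=10, height=10))
```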
{ "cite_N": [ "@cite_13", "@cite_38", "@cite_33", "@cite_8", "@cite_21", "@cite_6", "@cite_44", "@cite_27", "@cite_16", "@cite_10" ], "mid": [ "2003161362", "2150085383", "2145646037", "2095897464", "", "2052238947", "2025661610", "2119193164", "2061737513", "2161868490" ], "abstract": [ "Previous work has demonstrated the use of random sampling in visualising large data sets and the practicality of a sampling lens in enabling focus+context viewing. Autosampling was proposed as a mechanism to maintain constant density within the lens without user intervention. However, this requires rapid calculation of density or clutter. This paper defines clutter in terms of the occlusion of plotted points and evaluates three possible occlusion metrics that can be used with parallel coordinate plots. An empirical study showed the relationship between these metrics was independent of location and could be explained with a surprisingly simple probabilistic model.", "Information visualisation is about gaining insight into data through a visual representation. This data is often multivariate and increasingly, the datasets are very large. To help us explore all this data, numerous visualisation applications, both commercial and research prototypes, have been designed using a variety of techniques and algorithms. Whether they are dedicated to geo-spatial data or skewed hierarchical data, most of the visualisations need to adopt strategies for dealing with overcrowded displays, brought about by too much data to fit in too small a display space. This paper analyses a large number of these clutter reduction methods, classifying them both in terms of how they deal with clutter reduction and more importantly, in terms of the benefits and losses. The aim of the resulting taxonomy is to act as a guide to match techniques to problems where different criteria may have different importance, and more importantly as a means to critique and hence develop existing and new techniques.", "Our ability to accumulate large, complex (multivariate) data sets has far exceeded our ability to effectively process them in searching for patterns, anomalies and other interesting features. Conventional multivariate visualization techniques generally do not scale well with respect to the size of the data set. The focus of this paper is on the interactive visualization of large multivariate data sets based on a number of novel extensions to the parallel coordinates display technique. We develop a multi-resolution view of the data via hierarchical clustering, and use a variation of parallel coordinates to convey aggregation information for the resulting clusters. Users can then navigate the resulting structure until the desired focus region and level of detail is reached, using our suite of navigational and filtering tools. We describe the design and implementation of our hierarchical parallel coordinates system which is based on extending the XmdvTool system. Lastly, we show examples of the tools and techniques applied to large (hundreds of thousands of records) multivariate data sets.", "Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. 
Prior work does not adequately address the problem of large datasets and minimization of I O costs.This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle \"noise\" (data points that are not part of the underlying pattern) effectively.We evaluate BIRCH's time space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a performance comparisons of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior.", "", "Scatter Plots are one of the most powerful and most widely used techniques for visual data exploration. A well-known problem is that scatter plots often have a high degree of overlap, which may occlude a significant portion of the data values shown. In this paper, we propose the generalized scatter plot technique, which allows an overlap-free representation of large data sets to fit entirely into the display. The basic idea is to allow the analyst to optimize the degree of overlap and distortion to generate the bestpossible view. To allow an effective usage, we provide the capability to zoom smoothly between the traditional and our generalized scatter plots. We identify an optimization function that takes overlap and distortion of the visualization into acccount. We evaluate the generalized scatter plots according to this optimization function, and show that there usually exists an optimal compromise between overlap and distortion. Our generalized scatter plots have been applied successfully to a number of real-world IT services applications, such as server performance monitoring, telephone service usage analysis and financial data, demonstrating the benefits of the generalized scatter plots over traditional ones.", "The problem of visualizing huge amounts of data is well known in information visualization. Dealing with a large number of items forces almost any kind of Infovis technique to reveal its limits in terms of expressivity and scalability. In this paper we focus on 2D scatter plots, proposing a 'feature preservation' approach, based on the idea of modeling the visualization in a virtual space in order to analyze its features (e.g., absolute density, relative density, etc.). In this way we provide a formal framework to measure the visual overlapping, obtaining precise quality metrics about the visualization degradation and devising automatic sampling strategies able to improve the overall image quality. Metrics and algorithms have been improved through suitable user studies.", "Larger, higher resolution displays can be used to increase the scalability of information visualizations. But just how much can scalability increase using larger displays before hitting human perceptual or cognitive limits? Are the same visualization techniques that are good on a single monitor also the techniques that are best when they are scaled up using large, high-resolution displays? 
To answer these questions we performed a controlled experiment on user performance time, accuracy, and subjective workload when scaling up data quantity with different space-time-attribute visualizations using a large, tiled display. Twelve college students used small multiples, embedded bar matrices, and embedded time-series graphs either on a 2 megapixel (Mp) display or with data scaled up using a 32 Mp tiled display. Participants performed various overview and detail tasks on geospatially-referenced multidimensional time-series data. Results showed that current designs are perceptually scalable because they result in a decrease in task completion time when normalized per number of data attributes along with no decrease in accuracy. It appears that, for the visualizations selected for this study, the relative comparison between designs is generally consistent between display sizes. However, results also suggest that encoding is more important on a smaller display while spatial grouping is more important on a larger display. Some suggestions for designers are provided based on our experience designing visualizations for large displays", "This paper introduces a touch-sensitive two-dimensional scatter plot visualization, to explore and analyze movie data. The design focuses on the ability to work co-located with several users. These are able to create several focus regions through distortion techniques triggered by multi touch gestures. Furthermore, the introduced visualization is an example how promising concepts from InfoVis research can be transferred onto multi touch tables in order to offer more natural interaction.", "Information visualisation systems frequently have to deal with large amounts of data and this often leads to saturated areas in the display with considerable overplotting. This paper introduces the Sampling Lens, a novel tool that utilises random sampling to reduce the clutter within a moveable region, thus allowing the user to uncover any potentially interesting patterns and trends in the data while still being able to view the sample in context. We demonstrate the versatility of the tool by adding sampling lenses to scatter and parallel co-ordinate visualisations. We also consider some implementation issues and present initial user evaluation results." ] }
1408.0677
104031969
Analysis of high dimensional data is a common task. Often, small multiples are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot matrix. Associating data points between different views can be difficult though, as the points are not fixed. Other times, dimensional reduction techniques are employed to summarize the whole dataset in one image, but individual dimensions are lost in this view. In this paper, we present a means of augmenting a dimensional reduction plot with isocontours to reintroduce the original dimensions. By applying this to each dimension in the original data, we create multiple views where the points are consistent, which facilitates their comparison. Our approach employs a combination of a novel, graph-based projection technique with a GPU accelerated implementation of moving least squares to interpolate space between the points. We also present evaluations of this approach both with a case study and with a user study.
Nonrectilinear distortion has been used to great effect in other applications. The work of @cite_36 demonstrates a method of arbitrarily distorting maps so that geographic points are more evenly distributed. Our approach can produce a similar density-equalizing distortion, but also aims to maintain the capability to quantitatively evaluate point locations by smoothly interpolating the surrounding space.
{ "cite_N": [ "@cite_36" ], "mid": [ "2119353054" ], "abstract": [ "Visualizing large geo-demographical datasets using pixel-based techniques involves mapping the geospatial dimensions of a data point to screen coordinates and appropriately encod- ing its statistical value by color. The analysis of such data presents a great challenge. General tasks involve clustering, categorization, and searching for patterns of interest for sociological or economic research. Available visual encodings and screen space limitations lead to over-plotting and hiding of patterns and clusters in densely populated areas, while sparsely populated areas waste space and draw the attention away from the areas of interest. In this paper, two new approaches (RadialScale and AngularScale) are introduced to create density-equalized maps, while preserving recognizable features and neighborhoods in the visualization. These approaches build the core of a multi-scal- ing technique based on local features of the data described as local minima and maxima of point density. Scaling is conducted several times around these features, which leads to more homogeneous distortions. Results are illustrated using several real-world datasets. Our evaluation shows that the proposed techniques outperform traditional techniques as regard the homogeneity of the resulting data distributions and therefore build a more appropriate basis for analytic purposes." ] }
1408.0677
104031969
Analysis of high dimensional data is a common task. Often, small multiples are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot matrix. Associating data points between different views can be difficult though, as the points are not fixed. Other times, dimensional reduction techniques are employed to summarize the whole dataset in one image, but individual dimensions are lost in this view. In this paper, we present a means of augmenting a dimensional reduction plot with isocontours to reintroduce the original dimensions. By applying this to each dimension in the original data, we create multiple views where the points are consistent, which facilitates their comparison. Our approach employs a combination of a novel, graph-based projection technique with a GPU accelerated implementation of moving least squares to interpolate space between the points. We also present evaluations of this approach both with a case study and with a user study.
One step of our approach involves computing a new layout for a triangulated graph. There are many existing algorithms for the layout of general graphs @cite_40 . Our case is slightly specialized in that we start with a planar graph and guarantee planarity at each step of the layout algorithm. The approach of @cite_34 preserves planarity, but does so by allowing for non-linear edges. Some force-directed approaches preserve edge-crossing properties, but make assumptions that only hold in a serial implementation @cite_32 @cite_18 . The force-directed algorithm we use is a modified GPU version of FM @math @cite_41 , which is similar to other GPU implementations @cite_45 @cite_30 . The primary difference between our layout and these existing implementations of FM @math is that ours imposes a planarity-preserving constraint.
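A hedged, serial sketch of the constraint idea follows (it is not the GPU FM³ implementation referenced above): after computing a standard spring-force displacement for a node, the move is rejected if it would flip the signed area (orientation) of any incident triangle of the triangulation, which keeps a triangulated planar layout from folding over. The force model, constants, and step size are illustrative assumptions.

```python
import numpy as np

def signed_area(a, b, c):
    return 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))

def constrained_step(pos, edges, triangles, step=0.05, k=1.0):
    """One serial force-directed iteration that preserves triangle orientations.
    pos: (n, 2) array, edges: list of (i, j), triangles: list of (i, j, k) indices."""
    n = len(pos)
    force = np.zeros_like(pos)
    # pairwise repulsion (Fruchterman-Reingold style)
    for i in range(n):
        d = pos[i] - pos
        dist2 = (d ** 2).sum(1) + 1e-9
        dist2[i] = np.inf                      # no self-repulsion
        force[i] += (k ** 2 * d / dist2[:, None]).sum(0)
    # spring attraction along edges
    for i, j in edges:
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d) + 1e-9
        f = (dist / k) * d
        force[i] += f
        force[j] -= f
    new_pos = pos.copy()
    for i in range(n):
        candidate = pos[i] + step * force[i]
        ok = True
        for t in triangles:
            if i in t:
                cur = [new_pos[v] for v in t]
                cand = [candidate if v == i else new_pos[v] for v in t]
                # reject the move if any incident triangle would change orientation
                if np.sign(signed_area(*cur)) != np.sign(signed_area(*cand)):
                    ok = False
                    break
        if ok:
            new_pos[i] = candidate
    return new_pos
```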
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_41", "@cite_32", "@cite_40", "@cite_45", "@cite_34" ], "mid": [ "1595997489", "1999287341", "1504716054", "1509820354", "", "2150770193", "2205695184" ], "abstract": [ "As graphics processors become powerful, ubiquitous and easier to program, they have also become more amenable to general purpose high-performance computing, including the computationally expensive task of drawing large graphs. This paper describes a new parallel analysis of the multipole method of graph drawing to support its efficient GPU implementation. We use a variation of the Fast Multipole Method to estimate the long distance repulsive forces in force directed layout. We support these multipole computations efficiently with a k-d tree constructed and traversed on the GPU. The algorithm achieves impressive speedup over previous CPU and GPU methods, drawing graphs with hundreds of thousands of vertices within a few seconds via CUDA on an NVIDIA GeForce 8800 GTX.", "PrEd [Ber00] is a force-directed algorithm that improves the existing layout of a graph while preserving its edge crossing properties. The algorithm has a number of applications including: improving the layouts of planar graph drawing algorithms, interacting with a graph layout, and drawing Euler-like diagrams. The algorithm ensures that nodes do not cross edges during its execution. However, PrEd can be computationally expensive and overlyrestrictive in terms of node movement. In this paper, we introduce ImPrEd: an improved version of PrEd that overcomes some of its limitations and widens its range of applicability. ImPrEd also adds features such as flexible or crossable edges, allowing for greater control over the output. Flexible edges, in particular, can improve the distribution of graph elements and the angular resolution of the input graph. They can also be used to generate Euler diagrams with smooth boundaries. As flexible edges increase data set size, we experience an execution drawing quality trade off. However, when flexible edges are not used, ImPrEd proves to be consistently faster than PrEd.", "Force-directed graph drawing algorithms are widely used for drawing general graphs. However, these methods do not guarantee a sub-quadratic running time in general. We present a new force-directed method that is based on a combination of an efficient multilevel scheme and a strategy for approximating the repulsive forces in the system by rapidly evaluating potential fields. Given a graph G=(V,E), the asymptotic worst case running time of this method is O(|V|log|V|+|E|) with linear memory requirements. In practice, the algorithm generates nice drawings of graphs containing 100000 nodes in less than 5 minutes. Furthermore, it clearly visualizes even the structures of those graphs that turned out to be challenging for some other methods.", "We present an iterative drawing algorithm for undirected graphs, based on a force-directed approach, that preserves edge crossing properties. This algorithm insures that two edges cross in the final drawing if and only if these edges crossed on the initial layout. So no new edge crossings are introduced. We describe applications of this technique to improve classical algorithms for drawing planar graphs and for interactive graph drawing.", "", "This paper presents a new algorithm for force directed graph layout on the GPU. The algorithm, whose goal is to compute layouts accurately and quickly, has two contributions. 
The first contribution is proposing a general multi-level scheme, which is based on spectral partitioning. The second contribution is computing the layout on the GPU. Since the GPU requires a data parallel programming model, the challenge is devising a mapping of a naturally unstructured graph into a well-partitioned structured one. This is done by computing a balanced partitioning of a general graph. This algorithm provides a general multi-level scheme, which has the potential to be used not only for computation on the GPU, but also on emerging multi-core architectures. The algorithm manages to compute high quality layouts of large graphs in a fraction of the time required by existing algorithms of similar quality. An application for visualization of the topologies of ISP (Internet Service Provider) networks is presented.", "Constrained graph layout is a recent generalisation of force-directed graph layout which allows constraints on node placement. We give a constrained graph layout algorithm that takes an initial feasible layout and improves it while preserving the topology of the initial layout. The algorithm supports poly-line connectors and clusters. During layout the connectors and cluster boundaries act like impervious rubber-bands which try to shrink in length. The intended application for our algorithm is dynamic graph layout, but it can also be used to improve layouts generated by other graph layout techniques." ] }
1408.0677
104031969
Analysis of high dimensional data is a common task. Often, small multiples are used to visualize 1 or 2 dimensions at a time, such as in a scatterplot matrix. Associating data points between different views can be difficult though, as the points are not fixed. Other times, dimensional reduction techniques are employed to summarize the whole dataset in one image, but individual dimensions are lost in this view. In this paper, we present a means of augmenting a dimensional reduction plot with isocontours to reintroduce the original dimensions. By applying this to each dimension in the original data, we create multiple views where the points are consistent, which facilitates their comparison. Our approach employs a combination of a novel, graph-based projection technique with a GPU accelerated implementation of moving least squares to interpolate space between the points. We also present evaluations of this approach both with a case study and with a user study.
There are many methods for interpolating between unstructured data points, including barycentric linear interpolation, Sibson interpolation @cite_47 , and Moving Least Squares (MLS) @cite_9 . Of these, MLS offers the smoothest continuity. While MLS is often used in 3D applications such as surface reconstruction @cite_46 , it has also been applied to 2D applications such as image deformation, as in the work of @cite_23 , where MLS is used to distort a grid of points according to a skeleton of control points. MLS has also been used to interpolate internal coordinates for higher-order polygons @cite_14 . Our approach uses the deformation calculations from @cite_23 , but uses them to interpolate texture coordinates as in @cite_14 . Also, we use GPU acceleration to perform the MLS calculation in parallel for every pixel.
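To illustrate the interpolation itself, here is a small NumPy sketch of moving least squares applied to a scalar field over scattered 2D points: at each query location a local linear model is fit with inverse-distance weights, so the fit (and hence the interpolated value) moves with the query. This is a simplified scalar analogue of the per-pixel evaluation described above, not the deformation formulation of @cite_23; the weight exponent alpha and the test data are assumptions.

```python
import numpy as np

def mls_interpolate(points, values, queries, alpha=2.0, eps=1e-9):
    """Moving least squares with a local linear model f(x) = a . x + b.
    At each query the model is fit by weighted least squares with
    inverse-distance weights, so data points near the query dominate."""
    out = np.empty(len(queries))
    A = np.hstack([points, np.ones((len(points), 1))])      # design matrix [x, y, 1]
    for qi, q in enumerate(queries):
        w = 1.0 / (np.linalg.norm(points - q, axis=1) ** (2 * alpha) + eps)
        sw = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A * sw, values * sw[:, 0], rcond=None)
        out[qi] = coef[0] * q[0] + coef[1] * q[1] + coef[2]
    return out

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
print(mls_interpolate(pts, vals, np.array([[0.5, 0.5], [0.1, 0.9]])))
```

In a per-pixel (e.g., GPU) setting the same weighted fit is simply evaluated independently at every pixel location, which is what makes the method straightforward to parallelize.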
{ "cite_N": [ "@cite_47", "@cite_14", "@cite_9", "@cite_23", "@cite_46" ], "mid": [ "2147762997", "1965851707", "", "2015475217", "2058524213" ], "abstract": [ "Natural-neighbor interpolation methods, such as Sibson's method, are well-known schemes for multivariate data fitting and reconstruction. Despite its many desirable properties, Sibson's method is computationally expensive and difficult to implement, especially when applied to higher-dimensional data. The main reason for both problems is the method's implementation based on a Voronoi diagram of all data points. We describe a discrete approach to evaluating Sibson's interpolant on a regular grid, based solely on finding nearest neighbors and rendering and blending d-dimensional spheres. Our approach does not require us to construct an explicit Voronoi diagram, is easily implemented using commodity three-dimensional graphics hardware, leads to a significant speed increase compared to traditional approaches, and generalizes easily to higher dimensions. For large scattered data sets, we achieve two-dimensional (2D) interpolation at interactive rates and 3D interpolation (3D) with computation times of a few seconds.", "We propose a new family of barycentric coordinates that have closed-forms for arbitrary 2D polygons. These coordinates are easy to compute and have linear precision even for open polygons. Not only do these coordinates have linear precision, but we can create coordinates that reproduce polynomials of a set degree m as long as degree m polynomials are specified along the boundary of the polygon. We also show how to extend these coordinates to interpolate derivatives specified on the boundary.", "", "We provide an image deformation method based on Moving Least Squares using various classes of linear functions including affine, similarity and rigid transformations. These deformations are realistic and give the user the impression of manipulating real-world objects. We also allow the user to specify the deformations using either sets of points or line segments, the later useful for controlling curves and profiles present in the image. For each of these techniques, we provide simple closed-form solutions that yield fast deformations, which can be performed in real-time.", "We introduce a robust moving least-squares technique for reconstructing a piecewise smooth surface from a potentially noisy point cloud. We use techniques from robust statistics to guide the creation of the neighborhoods used by the moving least squares (MLS) computation. This leads to a conceptually simple approach that provides a unified framework for not only dealing with noise, but also for enabling the modeling of surfaces with sharp features.Our technique is based on a new robust statistics method for outlier detection: the forward-search paradigm. Using this powerful technique, we locally classify regions of a point-set to multiple outlier-free smooth regions. This classification allows us to project points on a locally smooth region rather than a surface that is smooth everywhere, thus defining a piecewise smooth surface and increasing the numerical stability of the projection operator. Furthermore, by treating the points across the discontinuities as outliers, we are able to define sharp features. One of the nice features of our approach is that it automatically disregards outliers during the surface-fitting phase." ] }
1408.0965
14230115
We consider Lagrangian duality based approaches to design and analyze algorithms for online energy-efficient scheduling. First, we present a primal-dual framework. Our approach makes use of the Lagrangian weak duality and convexity to derive dual programs for problems which could be formulated as convex assignment problems. The duals have intuitive structures, similar to the ones in linear programming. The constraints of the duals explicitly indicate the online decisions and naturally lead to competitive algorithms. Second, we use a dual-fitting approach, which is also based on the weak duality, to study problems which are unlikely to admit convex relaxations. Through the analysis, we show an interesting feature in which primal-dual gives ideas for designing algorithms while the analysis is done by dual-fitting. We illustrate the advantages and the flexibility of the approaches through problems in different settings: from single machine to unrelated machine environments, from typical competitive analysis to the one with resource augmentation, from convex relaxations to non-convex relaxations.
In the search for principled methods to design and analyze online problems, especially in online scheduling, interesting approaches @cite_1 @cite_11 @cite_6 based on mathematical programming have been presented. These approaches give insight into the nature of many scheduling problems and hence lead to algorithms that are usually simple and competitive @cite_1 @cite_11 @cite_6 @cite_17 @cite_8 @cite_5 .
{ "cite_N": [ "@cite_11", "@cite_8", "@cite_1", "@cite_6", "@cite_5", "@cite_17" ], "mid": [ "1544574065", "2951631866", "165387623", "435588996", "1964506478", "2272870368" ], "abstract": [ "We give a principled method to design online algorithms (for potentially non-linear problems) using a mathematical programming formulation of the problem, and also to analyze the competitiveness of the resulting algorithm using the dual program. This method can be viewed as an extension of the online primal-dual method for linear programming problems, to nonlinear programs. We show the application of this method to two online speed-scaling problems: one involving scheduling jobs on a speed scalable processor so as to minimize energy plus an arbitrary sum scheduling objective, and one involving routing virtual circuit connection requests in a network of speed scalable routers so as to minimize the aggregate power or energy used by the routers. This analysis shows that competitive algorithms exist for problems that had resisted analysis using the dominant potential function approach in the speed-scaling literature, and provides alternate cleaner analysis for other known results. This gives us another tool in the design and analysis of primal-dual algorithms for online problems.", "We introduce and study a general scheduling problem that we term the Packing Scheduling problem. In this problem, jobs can have different arrival times and sizes; a scheduler can process job @math at rate @math , subject to arbitrary packing constraints over the set of rates ( @math ) of the outstanding jobs. The PSP framework captures a variety of scheduling problems, including the classical problems of unrelated machines scheduling, broadcast scheduling, and scheduling jobs of different parallelizability. It also captures scheduling constraints arising in diverse modern environments ranging from individual computer architectures to data centers. More concretely, PSP models multidimensional resource requirements and parallelizability, as well as network bandwidth requirements found in data center scheduling. In this paper, we design non-clairvoyant online algorithms for PSP and its special cases -- in this setting, the scheduler is unaware of the sizes of jobs. Our two main results are, 1) a constant competitive algorithm for minimizing total weighted completion time for PSP and 2)a scalable algorithm for minimizing the total flow-time on unrelated machines, which is a special case of PSP.", "We propose a general dual-fitting technique for analyzing online scheduling algorithms in the unrelated machines setting where the objective function involves weighted flow-time, and we allow the machines of the on-line algorithm to have (1 + e)-extra speed than the offline optimum (the so-called speed augmentation model). Typically, such algorithms are analyzed using non-trivial potential functions which yield little insight into the proof technique. We propose that one can often analyze such algorithms by looking at the dual (or Lagrangian dual) of the linear (or convex) program for the corresponding scheduling problem, and finding a feasible dual solution as the on-line algorithm proceeds. As representative cases, we get the following results: • For the problem of minimizing weighted flow-time, we give an O (1 e)-competitive greedy algorithm. This is an improvement by a factor of 1 e on the competitive ratio of the greedy algorithm of Chadha-Garg-Kumar-Muralidhara. 
• For the problem of minimizing weighted lk norm of flow-time, we show that a greedy algorithm gives an O (1 e)-competitive ratio. This marginally improves the result of Im and Moseley. • For the problem of minimizing weighted flow-time plus energy, and when the energy function f(s) is equal to sγ, γ > 1, we show that a natural greedy algorithm is O(γ2)-competitive. Prior to our work, such a result was known for the related machines setting only (Gupta-Krishnaswamy-Pruhs).", "We present an unified approach to study online scheduling problems in the resource augmentation speed scaling models. Potential function method is extensively used for analyzing algorithms in these models; however, they yields little insight on how to construct potential functions and how to design algorithms for related problems. In the paper, we generalize and strengthen the dual-fitting technique proposed by [1]. The approach consists of considering a possibly non-convex relaxation and its Lagrangian dual; then constructing dual variables such that the Lagrangian dual has objective value within a desired factor of the primal optimum. The competitive ratio follows by the standard Lagrangian weak duality. This approach is simple yet powerful and it is seemingly a right tool to study problems with resource augmentation or speed scaling. We illustrate the approach through the following results. 1 We revisit algorithms EQUI and LAPS in Non-clairvoyant Scheduling to minimize total flow-time. We give simple analyses to prove known facts on the competitiveness of such algorithms. Not only are the analyses much simpler than the previous ones, they also explain why LAPS is a natural extension of EQUI to design a scalable algorithm for the problem. 2 We consider the online scheduling problem to minimize total weighted flow-time plus energy where the energy power f(s) is a function of speed s and is given by s α for α ≥ 1. For a single machine, we showed an improved competitive ratio for a non-clairvoyant memoryless algorithm. For unrelated machines, we give an O(α logα)-competitive algorithm. The currently best algorithm for unrelated machines is O(α 2)-competitive. 3 We consider the online scheduling problem on unrelated machines with the objective of minimizing ∑ i,j w ij f(F j ) where F j is the flow-time of job j and f is an arbitrary non-decreasing cost function with some nice properties. We present an algorithm which is ( 1 1-3 )-speed, ( 2K( ) )-competitive where K(e) is a function depending on f and e. The algorithm does not need to know the speed (1 + e) a priori. A corollary is a (1 + e)-speed, ( k ^ 1+1 k )-competitive algorithm (which does not know e a priori) for the objective of minimizing the weighted l k -norm of flow-time.", "We consider the classical problem of minimizing the total weighted flow-time for unrelated machines in the online non-clairvoyant setting. In this problem, a set of jobs J arrive over time to be scheduled on a set of M machines. Each job J has processing length pj, weight wj, and is processed at a rate of lij when scheduled on machine i. The online scheduler knows the values of wj and lij upon arrival of the job, but is not aware of the quantity pj. We present the first online algorithm that is scalable ((1+e)-speed O(1 2)-competitive for any constant e > 0) for the total weighted flow-time objective. No non-trivial results were known for this setting, except for the most basic case of identical machines. Our result resolves a major open problem in online scheduling theory. 
Moreover, we also show that no job needs more than a logarithmic number of migrations. We further extend our result and give a scalable algorithm for the objective of minimizing total weighted flow-time plus energy cost for the case of unrelated machines. In this problem, each machine can be sped up by a factor of f-1i(P) when consuming power P, where fi is an arbitrary strictly convex power function. In particular, we get an O(γ2)-competitive algorithm when all power functions are of form sγ. These are the first non-trivial non-clairvoyant results in any setting with heterogeneous machines. The key algorithmic idea is to let jobs migrate selfishly until they converge to an equilibrium. Towards this end, we define a game where each job's utility which is closely tied to the instantaneous increase in the objective the job is responsible for, and each machine declares a policy that assigns priorities to jobs based on when they migrate to it, and the execution speeds. This has a spirit similar to coordination mechanisms that attempt to achieve near optimum welfare in the presence of selfish agents (jobs). To the best our knowledge, this is the first work that demonstrates the usefulness of ideas from coordination mechanisms and Nash equilibria for designing and analyzing online algorithms.", "We consider the problem of online scheduling of jobs on unrelated machines with dynamic speed scaling to minimize the sum of energy and weighted flow time. We give an algorithm with an almost optimal competitive ratio for arbitrary power functions. (No earlier results handled arbitrary power functions for minimizing flow time plus energy with unrelated machines.) For power functions of the form f(s) = sα for some constant α > 1, we get a competitive ratio of O(α log α), improving upon a previous competitive ratio of O(α2) by [3], along with a matching lower bound of Ω(α log α). Further, in the resource augmentation model, with a 1 + e speed up, we give a 2(1 e + 1) competitive algorithm, with essentially the same techniques, improving the bound of 1 + O(1 e2) by [15] and matching the bound of [3] for the special case of fixed speed unrelated machines. Unlike the previous results most of which used an amortized local competitiveness argument or dual fitting methods, we use a primal-dual method, which is useful not only to analyze the algorithms but also to design the algorithm itself." ] }
1408.0965
14230115
We consider Lagrangian duality based approaches to design and analyze algorithms for online energy-efficient scheduling. First, we present a primal-dual framework. Our approach makes use of the Lagrangian weak duality and convexity to derive dual programs for problems which could be formulated as convex assignment problems. The duals have intuitive structures, similar to the ones in linear programming. The constraints of the duals explicitly indicate the online decisions and naturally lead to competitive algorithms. Second, we use a dual-fitting approach, which is also based on the weak duality, to study problems which are unlikely to admit convex relaxations. Through the analysis, we show an interesting feature in which primal-dual gives ideas for designing algorithms while the analysis is done by dual-fitting. We illustrate the advantages and the flexibility of the approaches through problems in different settings: from single machine to unrelated machine environments, from typical competitive analysis to the one with resource augmentation, from convex relaxations to non-convex relaxations.
The authors of @cite_1 were the first to propose studying online scheduling by linear (convex) programming and dual fitting. With this approach, they gave simple algorithms and simple analyses with improved performance for problems where analyses based on potential functions are complex or where it is unclear how to design such functions. Subsequently, Nguyen @cite_6 generalized the approach in @cite_1 and proposed to study online scheduling by non-convex programming and the weak Lagrangian duality. Using that technique, @cite_6 derived competitive algorithms for problems related to weighted flow-time.
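For reference, the weak-duality argument underlying such dual-fitting analyses can be sketched as follows (a generic statement; the concrete relaxations and dual constructions differ per problem). For a possibly non-convex primal relaxation
\[
\mathrm{OPT} \;=\; \min_{x \in \mathcal{X}} f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \;\; 1 \le i \le m,
\]
the Lagrangian dual function is
\[
d(\lambda) \;=\; \min_{x \in \mathcal{X}} \Big( f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x) \Big), \qquad \lambda \ge 0,
\]
and weak duality gives \(d(\lambda) \le \mathrm{OPT}\) for every \(\lambda \ge 0\). A dual-fitting analysis therefore constructs, alongside the online algorithm, a dual candidate \(\lambda\) with \(d(\lambda) \ge \mathrm{ALG}/c\); weak duality then yields \(\mathrm{ALG} \le c \cdot d(\lambda) \le c \cdot \mathrm{OPT}\), i.e. a competitive ratio of \(c\).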
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "165387623", "435588996" ], "abstract": [ "We propose a general dual-fitting technique for analyzing online scheduling algorithms in the unrelated machines setting where the objective function involves weighted flow-time, and we allow the machines of the on-line algorithm to have (1 + e)-extra speed than the offline optimum (the so-called speed augmentation model). Typically, such algorithms are analyzed using non-trivial potential functions which yield little insight into the proof technique. We propose that one can often analyze such algorithms by looking at the dual (or Lagrangian dual) of the linear (or convex) program for the corresponding scheduling problem, and finding a feasible dual solution as the on-line algorithm proceeds. As representative cases, we get the following results: • For the problem of minimizing weighted flow-time, we give an O (1 e)-competitive greedy algorithm. This is an improvement by a factor of 1 e on the competitive ratio of the greedy algorithm of Chadha-Garg-Kumar-Muralidhara. • For the problem of minimizing weighted lk norm of flow-time, we show that a greedy algorithm gives an O (1 e)-competitive ratio. This marginally improves the result of Im and Moseley. • For the problem of minimizing weighted flow-time plus energy, and when the energy function f(s) is equal to sγ, γ > 1, we show that a natural greedy algorithm is O(γ2)-competitive. Prior to our work, such a result was known for the related machines setting only (Gupta-Krishnaswamy-Pruhs).", "We present an unified approach to study online scheduling problems in the resource augmentation speed scaling models. Potential function method is extensively used for analyzing algorithms in these models; however, they yields little insight on how to construct potential functions and how to design algorithms for related problems. In the paper, we generalize and strengthen the dual-fitting technique proposed by [1]. The approach consists of considering a possibly non-convex relaxation and its Lagrangian dual; then constructing dual variables such that the Lagrangian dual has objective value within a desired factor of the primal optimum. The competitive ratio follows by the standard Lagrangian weak duality. This approach is simple yet powerful and it is seemingly a right tool to study problems with resource augmentation or speed scaling. We illustrate the approach through the following results. 1 We revisit algorithms EQUI and LAPS in Non-clairvoyant Scheduling to minimize total flow-time. We give simple analyses to prove known facts on the competitiveness of such algorithms. Not only are the analyses much simpler than the previous ones, they also explain why LAPS is a natural extension of EQUI to design a scalable algorithm for the problem. 2 We consider the online scheduling problem to minimize total weighted flow-time plus energy where the energy power f(s) is a function of speed s and is given by s α for α ≥ 1. For a single machine, we showed an improved competitive ratio for a non-clairvoyant memoryless algorithm. For unrelated machines, we give an O(α logα)-competitive algorithm. The currently best algorithm for unrelated machines is O(α 2)-competitive. 3 We consider the online scheduling problem on unrelated machines with the objective of minimizing ∑ i,j w ij f(F j ) where F j is the flow-time of job j and f is an arbitrary non-decreasing cost function with some nice properties. 
We present an algorithm which is ( 1 1-3 )-speed, ( 2K( ) )-competitive where K(e) is a function depending on f and e. The algorithm does not need to know the speed (1 + e) a priori. A corollary is a (1 + e)-speed, ( k ^ 1+1 k )-competitive algorithm (which does not know e a priori) for the objective of minimizing the weighted l k -norm of flow-time." ] }
1408.0965
14230115
We consider Lagrangian duality based approaches to design and analyze algorithms for online energy-efficient scheduling. First, we present a primal-dual framework. Our approach makes use of the Lagrangian weak duality and convexity to derive dual programs for problems which could be formulated as convex assignment problems. The duals have intuitive structures, similar to the ones in linear programming. The constraints of the duals explicitly indicate the online decisions and naturally lead to competitive algorithms. Second, we use a dual-fitting approach, which is also based on the weak duality, to study problems which are unlikely to admit convex relaxations. Through the analysis, we show an interesting feature in which primal-dual gives ideas for designing algorithms while the analysis is done by dual-fitting. We illustrate the advantages and the flexibility of the approaches through problems in different settings: from single machine to unrelated machine environments, from typical competitive analysis to the one with resource augmentation, from convex relaxations to non-convex relaxations.
The primal-dual method for online packing and covering problems unifies several previous potential-function-based analyses and is a powerful tool to design and analyze algorithms for problems with linear relaxations. A primal-dual algorithm was also given for a general class of scheduling problems with cost function @math , and the primal-dual approach was likewise used to derive optimal competitive ratios for online matching with concave return. The construction of the dual programs in @cite_17 @cite_9 is based on convex conjugates and Fenchel duality for primal convex programs in which the objective is convex and the constraints are .
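As a reminder of the machinery referred to here (stated generically, not as the exact duals of the cited works): the convex conjugate of \(f\) is
\[
f^{*}(y) \;=\; \sup_{x} \big( \langle x, y \rangle - f(x) \big),
\]
and for a primal of the form \(\min_x f(x)\) subject to \(A x \ge b\), one Fenchel-type dual is
\[
\max_{y \ge 0} \; \langle b, y \rangle - f^{*}(A^{\top} y),
\]
whose value never exceeds the primal optimum; this is the weak duality exploited in the analyses above.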
{ "cite_N": [ "@cite_9", "@cite_17" ], "mid": [ "1988278843", "2272870368" ], "abstract": [ "We consider a significant generalization of the Adwords problem by allowing arbitrary concave returns, and we characterize the optimal competitive ratio achievable. The problem considers a sequence of items arriving online that have to be allocated to agents, with different agents bidding different amounts. The objective function is the sum, over each agent i, of a monotonically non-decreasing concave function Mi : R+ -> R+ of the total amount allocated to i. All variants of online matching problems (including the Adwords problem) studied in the literature consider the special case of budgeted linear functions, that is, functions of the form Mi(ui) = min ui,Bi for some constant Bi. The distinguishing feature of this paper is in allowing arbitrary concave returns. The main result of this paper is that for each concave function M, there exists a constant F(M) ≤ 1 such that: there exists an algorithm with competitive ratio of miniF(Mi), independent of the sequence of items. No algorithm has a competitive ratio larger than F(M) over all instances with Mi= M for all i. Our algorithm is based on the primal-dual paradigm and makes use of convex programming duality. The upper bounds are obtained by formulating the task of finding the right counterexample as an optimization problem. This path takes us through the calculus of variations which deals with optimizing over continuous functions. The algorithm and the upper bound are related to each other via a set of differential equations, which points to a certain kind of duality between them.", "We consider the problem of online scheduling of jobs on unrelated machines with dynamic speed scaling to minimize the sum of energy and weighted flow time. We give an algorithm with an almost optimal competitive ratio for arbitrary power functions. (No earlier results handled arbitrary power functions for minimizing flow time plus energy with unrelated machines.) For power functions of the form f(s) = sα for some constant α > 1, we get a competitive ratio of O(α log α), improving upon a previous competitive ratio of O(α2) by [3], along with a matching lower bound of Ω(α log α). Further, in the resource augmentation model, with a 1 + e speed up, we give a 2(1 e + 1) competitive algorithm, with essentially the same techniques, improving the bound of 1 + O(1 e2) by [15] and matching the bound of [3] for the special case of fixed speed unrelated machines. Unlike the previous results most of which used an amortized local competitiveness argument or dual fitting methods, we use a primal-dual method, which is useful not only to analyze the algorithms but also to design the algorithm itself." ] }
1408.0965
14230115
We consider Lagrangian duality based approaches to design and analyze algorithms for online energy-efficient scheduling. First, we present a primal-dual framework. Our approach makes use of the Lagrangian weak duality and convexity to derive dual programs for problems which could be formulated as convex assignment problems. The duals have intuitive structures, similar to the ones in linear programming. The constraints of the duals explicitly indicate the online decisions and naturally lead to competitive algorithms. Second, we use a dual-fitting approach, which is also based on the weak duality, to study problems which are unlikely to admit convex relaxations. Through the analysis, we show an interesting feature in which primal-dual gives ideas for designing algorithms while the analysis is done by dual-fitting. We illustrate the advantages and the flexibility of the approaches through problems in different settings: from single machine to unrelated machine environments, from typical competitive analysis to the one with resource augmentation, from convex relaxations to non-convex relaxations.
In the speed scaling with power-down energy model, all previous papers considered the problem of minimizing the energy consumption on a single machine. The problem was first studied in the online setting, where an algorithm with competitive ratio @math was derived; subsequently, a @math -competitive algorithm was presented. In the offline setting, the problem was recently shown to be NP-hard @cite_2 . Moreover, the same work also gave a 1.171-approximation algorithm, which improved on the 2-approximation algorithm in @cite_4 . If the instances are agreeable, then the problem is polynomial @cite_3 .
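For context, in this model the energy of a schedule that runs at speed \(s(t)\) during the active periods \(A\) and performs \(k\) wake-ups from the sleep state is typically written as
\[
E \;=\; \int_{A} P\big(s(t)\big)\, dt \;+\; k\, C, \qquad P(s) = s^{\alpha} + \gamma,
\]
where \(\gamma > 0\) is the static power paid whenever the machine is awake and \(C\) is the fixed wake-up cost; the exact constants and the form of \(P\) vary across the cited works.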
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2145398804", "1973204371", "2173032108" ], "abstract": [ "This article examines two different mechanisms for saving power in battery-operated embedded systems. The first strategy is that the system can be placed in a sleep state if it is idle. However, a fixed amount of energy is required to bring the system back into an active state in which it can resume work. The second way in which power savings can be achieved is by varying the speed at which jobs are run. We utilize a power consumption curve P(s) which indicates the power consumption level given a particular speed. We assume that P(s) is convex, nondecreasing, and nonnegative for s ≥ 0. The problem is to schedule arriving jobs in a way that minimizes total energy use and so that each job is completed after its release time and before its deadline. We assume that all jobs can be preempted and resumed at no cost. Although each problem has been considered separately, this is the first theoretical analysis of systems that can use both mechanisms. We give an offline algorithm that is within a factor of 2 of the optimal algorithm. We also give an online algorithm with a constant competitive ratio.", "We consider the problem of scheduling on a single processor a given set of n jobs. Each job j has a workload wj and a release time r j. The processor can vary its speed and hibernate to reduce energy consumption. In a schedule minimizing overall consumed energy, it might be that some jobs complete arbitrarily far from their release time. So in order to guarantee some quality of service, we would like to impose a deadline d j = rj + F for every job j, where F is a guarantee on the flow time. We provide an O(n3) algorithm for the more general case of agreeable deadlines, where jobs have release times and deadlines and can be ordered such that for every i < j, both ri rj and di dj.", "We study an energy conservation problem where a variable-speed processor is equipped with a sleep state. Executing jobs at high speeds and then setting the processor asleep is an approach that can lead to further energy savings compared to standard dynamic speed scaling. We consider classical deadline-based scheduling, i.e. each job is specified by a release time, a deadline and a processing volume. For general convex power functions, [12] devised an offline 2-approximation algorithm. Roughly speaking, the algorithm schedules jobs at a critical speed Scrit that yields the smallest energy consumption while jobs are processed. For power functions P(s) = sα + γ, where s is the processor speed, [11] gave an (αα + 2)-competitive online algorithm. We investigate the offline setting of speed scaling with a sleep state. First we prove NP-hardness of the optimization problem. Additionally, we develop lower bounds, for general convex power functions: No algorithm that constructs Scrit-schedules, which execute jobs at speeds of at least scrit, can achieve an approximation factor smaller than 2. Furthermore, no algorithm that minimizes the energy expended for processing jobs can attain an approximation ratio smaller than 2. We then present an algorithmic framework for designing good approximation algorithms. For general convex power functions, we derive an approximation factor of 4 3. For power functions P(s) = βsα + γ, we obtain an approximation of 137 117 < 1.171. We finally show that our framework yields the best approximation guarantees for the class of Scrit-schedules. 
For general convex power functions, we give another 2-approximation algorithm. For functions P(s) = βsα + γ, we present tight upper and lower bounds on the best possible approximation factor. The ratio is exactly eW−1(−e−1−1 e) (eW−1(−e−1−1 e) + 1) < 1.211, where W--1 is the lower branch of the Lambert W function." ] }
1408.0994
1655492871
Abstract We propose a novel factorization of a non-singular matrix P , viewed as a 2 × 2 -blocked matrix. The factorization decomposes P into a product of three matrices that are lower block-unitriangular, upper block-triangular, and lower block-unitriangular, respectively. Our goal is to make this factorization “as block-diagonal as possible” by minimizing the ranks of the off-diagonal blocks. We give lower bounds on these ranks and show that they are sharp by providing an algorithm that computes an optimal solution. The proposed decomposition can be viewed as a generalization of the well-known Block LU factorization using the Schur complement. Finally, we briefly explain one application of this factorization: the design of optimal circuits for a certain class of streaming permutations.
Schur complement. Several efforts have been made to adapt the definition of the Schur complement to the case of general @math and @math . For instance, it is possible to define an indexed Schur complement with respect to another non-singular principal submatrix @cite_9 , or to use pseudo-inverses @cite_5 in matrix inversion algorithms.
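For reference, these generalizations extend the classical notion: for a blocked matrix with non-singular upper-left block \(A\),
\[
M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \qquad M/A \;=\; D - C A^{-1} B,
\]
and the Schur complement yields the standard block factorization
\[
M = \begin{pmatrix} I & 0 \\ C A^{-1} & I \end{pmatrix}
\begin{pmatrix} A & 0 \\ 0 & M/A \end{pmatrix}
\begin{pmatrix} I & A^{-1} B \\ 0 & I \end{pmatrix},
\]
which is exactly what breaks down when \(A\) is singular and what the indexed and pseudo-inverse variants above are designed to repair.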
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "1993446440", "1542938076" ], "abstract": [ "Suppose the complex matrix M is partitioned into a @math array of blocks; let @math . The generalized Schur complement of A in M is defined to be @math , where @math is the Moore–Penrose inverse of A. The relationship of the ranks of M, A, and @math is determined. A new proof, under certain conditions, of Sylvester’s determinantal formula is given. A quotient formula like one previously proved for the Schur complement is obtained. Finally, several known inequalities for positive semidefinite Hermitian matrices are generalized.", "Historical Introduction: Issai Schur and the Early Development of the Schur Complement.- Basic Properties of the Schur Complement.- Eigenvalue and Singular Value Inequalities of Schur Complements.- Block Matrix Techniques.- Closure Properties.- Schur Complements and Matrix Inequalities: Operator-Theoretic Approach.- Schur complements in statistics and probability.- Schur Complements and Applications in Numerical Analysis." ] }
1408.0994
1655492871
Abstract We propose a novel factorization of a non-singular matrix P , viewed as a 2 × 2 -blocked matrix. The factorization decomposes P into a product of three matrices that are lower block-unitriangular, upper block-triangular, and lower block-unitriangular, respectively. Our goal is to make this factorization “as block-diagonal as possible” by minimizing the ranks of the off-diagonal blocks. We give lower bounds on these ranks and show that they are sharp by providing an algorithm that computes an optimal solution. The proposed decomposition can be viewed as a generalization of the well-known Block LU factorization using the Schur complement. Finally, we briefly explain one application of this factorization: the design of optimal circuits for a certain class of streaming permutations.
Alternative block decompositions. A common way to handle the case where @math is singular is to use a permutation matrix @math that reorders the columns of @math such that the new principal upper submatrix is non-singular @cite_9 . The decomposition then becomes: @math However, @math needs to swap columns with index @math ; thus @math does not have the required form considered in our work.
{ "cite_N": [ "@cite_9" ], "mid": [ "1542938076" ], "abstract": [ "Historical Introduction: Issai Schur and the Early Development of the Schur Complement.- Basic Properties of the Schur Complement.- Eigenvalue and Singular Value Inequalities of Schur Complements.- Block Matrix Techniques.- Closure Properties.- Schur Complements and Matrix Inequalities: Operator-Theoretic Approach.- Schur complements in statistics and probability.- Schur Complements and Applications in Numerical Analysis." ] }
1408.0994
1655492871
Abstract We propose a novel factorization of a non-singular matrix P , viewed as a 2 × 2 -blocked matrix. The factorization decomposes P into a product of three matrices that are lower block-unitriangular, upper block-triangular, and lower block-unitriangular, respectively. Our goal is to make this factorization “as block-diagonal as possible” by minimizing the ranks of the off-diagonal blocks. We give lower bounds on these ranks and show that they are sharp by providing an algorithm that computes an optimal solution. The proposed decomposition can be viewed as a generalization of the well-known Block LU factorization using the Schur complement. Finally, we briefly explain one application of this factorization: the design of optimal circuits for a certain class of streaming permutations.
One can modify the above idea to choose @math such that @math has the shape required by the decomposition: @math Then the problem is to design @math such that @math is non-singular and @math is minimal. This basic idea is used in @cite_4 , where, however, only @math is minimized, which, in general, does not produce optimal solutions for the problem considered here.
{ "cite_N": [ "@cite_4" ], "mid": [ "1994493549" ], "abstract": [ "This article presents a method for constructing hardware structures that perform a fixed permutation on streaming data. The method applies to permutations that can be represented as linear mappings on the bit-level representation of the data locations. This subclass includes many important permutations such as stride permutations (corner turn, perfect shuffle, etc.), the bit reversal, the Hadamard reordering, and the Gray code reordering. The datapath for performing the streaming permutation consists of several independent banks of memory and two interconnection networks. These structures are built for a given streaming width (i.e., number of inputs and outputs per cycle) and operate at full throughput for this streaming width. We provide an algorithm that completely specifies the datapath and control logic given the desired permutation and streaming width. Further, we provide lower bounds on the achievable cost of a solution and show that for an important subclass of permutations our solution is optimal. We apply our algorithm to derive datapaths for several important permutations, including a detailed example that carefully illustrates each aspect of the design process. Lastly, we compare our permutation structures to those of [2004], which are specialized for stride permutations." ] }
1408.0395
2017370456
In this paper we present and analyze HSkip+, a self-stabilizing overlay network for nodes with arbitrary heterogeneous bandwidths. HSkip+ has the same topology as the Skip+ graph proposed by [1] but its self-stabilization mechanism significantly outperforms the self-stabilization mechanism proposed for Skip+. Also, the nodes are now ordered according to their bandwidths and not according to their identifiers. Various other solutions have already been proposed for overlay networks with heterogeneous bandwidths, but they are not self-stabilizing. In addition to HSkip+ being self-stabilizing, its performance is on par with the best previous bounds on the time and work for joining or leaving a network of peers of logarithmic diameter and degree and arbitrary bandwidths. Also, the dilation and congestion for routing messages is on par with the best previous bounds for such networks, so that HSkip+ combines the advantages of both worlds. Our theoretical investigations are backed by simulations demonstrating that HSkip+ is indeed performing much better than Skip+ and working correctly under high churn rates.
Topological self-stabilization has recently attracted a lot of attention. Various topologies have been considered, such as simple line and ring networks (e.g., @cite_14 @cite_7 ), skip lists and skip graphs (e.g., @cite_19 @cite_0 ), expanders @cite_13 , the Delaunay graph @cite_9 , the hypertree @cite_5 , and Chord @cite_8 . Also, a universal protocol for topological self-stabilization has been proposed @cite_22 . However, none of these works consider nodes with heterogeneous bandwidths.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_0", "@cite_19", "@cite_5", "@cite_13" ], "mid": [ "2164639586", "153242498", "2118184224", "1992206081", "2163818815", "", "2009821304", "1580692781", "2021142034" ], "abstract": [ "We propose a self-stabilizing and modeless peer-to-peer (P2P) network construction and maintenance protocol, called the Ring Network (RN) protocol. The RN protocol, when started on a network of peers that are in an arbitrary state, will cause the network to converge to a structured P2P system with a directed ring topology, where peers are ordered according to their identifiers. Furthermore, the RN protocol maintains this structure in the face of peer joins and departures. The RN protocol is a distributed and asynchronous message-passing protocol, which fits well the autonomous behavior of peers in a P2P system. The RN protocol requires only the existence of a bootstrapping system which is weakly connected. Peers do not need to be informed of any global network state, nor do they need to assist in repairing the network topology when they leave. We provide a proof of the self-stabilizing nature of the protocol, and experimentally measure the average cost (in time and number of messages) to achieve convergence.", "Overlay networks are expected to operate in hostile environments, where node and link failures are commonplace. One way to make overlay networks robust is to design self-stabilizing overlay networks, i.e., overlay networks that can handle node and link failures without any external supervision. In this paper, we first describe a simple framework, which we call the Transitive Closure Framework (TCF), for the selfstabilizing construction of an extensive class of overlay networks. Like previous self-stabilizing overlay networks, TCF permits node degrees to grow to Ω(n), independent of the maximum degree of the target overlay network. However, TCF has several advantages over previous work in this area: (i) it is a \"framework\" and can be used for the construction of a variety of overlay networks, not just a particular network, (ii) it runs in an optimal number of rounds for a variety of overlay networks, and (iii) it can easily be composed with other non-self-stabilizing protocols that can recover from specific bad initial states in a memory-efficient fashion. We demonstrate the power of our framework by deriving from TCF a simple self-stabilizing protocol for constructing Skip+ graphs (, PODC 2009) which presents optimal convergence time from any configuration, and requires only a O(1) factor of extra memory for handling node Joins.", "Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms that converge quickly to such a desirable topology, independently of the initial network state. This paper proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization—i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. 
We propose two variants of a simple algorithm, and provide an extensive formal analysis of their worst-case and best-case parallel time complexities, as well as their performance under a greedy selection of the actions to be executed.", "The Chord peer-to-peer system is considered, together with CAN, Tapestry and Pastry, as one of the pioneering works on peer-to-peer distributed hash tables (DHT) that inspired a large volume of papers and projects on DHTs as well as peer-to-peer systems in general. Chord, in particular, has been studied thoroughly, and many variants of Chord have been presented that optimize various criteria. Also, several implementations of Chord are available on various platforms. Though Chord is known to be very efficient and scalable and it can handle churn quite well, no protocol is known yet that guarantees that Chord is self-stabilizing, i.e., the Chord network can be recovered from any initial state in which the network is still weakly connected. This is not too surprising since it is known that in the Chord network it is not locally checkable whether its current topology matches the correct topology. We present a slight extension of the Chord network, called Re-Chord (reactive Chord), that turns out to be locally checkable, and we present a self-stabilizing distributed protocol for it that can recover the Re-Chord network from any initial state, in which the n peers are weakly connected, in O(n log n) communication rounds. We also show that our protocol allows a new peer to join or an old peer to leave an already stable Re-Chord network so that within O((log n)2) communication rounds the Re-Chord network is stable again.", "This article studies the construction of self-stabilizing topologies for distributed systems. While recent research has focused on chain topologies where nodes need to be linearized with respect to their identifiers, we explore a natural and relevant 2-dimensional generalization. In particular, we present a local self-stabilizing algorithm DStab which is based on the concept of ''local Delaunay graphs'' and which forwards temporary edges in greedy fashion reminiscent of compass routing. DStab constructs a Delaunay graph from any initial connected topology and in a distributed manner in time O(n^3) in the worst-case; if the initial network contains the Delaunay graph, the convergence time is only O(n) rounds. DStab also ensures that individual node joins and leaves affect a small part of the network only. Such self-stabilizing Delaunay networks have interesting applications and our construction gives insights into the necessary geometric reasoning that is required for higher-dimensional linearization problems.", "", "We present Corona, a deterministic self-stabilizing algorithm for skip list construction in structured overlay networks. Corona operates in the low-atomicity message-passing asynchronous system model. Corona requires constant process memory space for its operation and, therefore, scales well. We prove the general necessary conditions limiting the initial states from which a self-stabilizing structured overlay network in a message-passing system can be constructed. The conditions require that initial state information has to form a weakly connected graph and it should only contain identifiers that are present in the system. We formally describe Corona and rigorously prove that it stabilizes from an arbitrary initial state subject to the necessary conditions. 
We extend Corona to construct a skip graph.", "Peer-to-peer systems are prone to faults, thus it is vitally important to design peer-to-peer systems to automatically regain consistency, namely to be self-stabilizing. Toward this goal, we present a deterministic structure that defines for every n the entire (IP) pointers structure among the n machines. Namely, the next hop for the insert, delete and search procedures of the peer-to-peer system. Thus, the consistency of the system is easily defined, monitored, verified and repaired. We present the HyperTree (distributed) structure which support the peer-to-peer procedures while ensuring that the out-degree and in-degree (the number of outgoing incoming pointers) are b log sub b N where N in the maximal number of machines and b is an integer parameter greater than 1. In addition the HyperTree ensures that the maximal number of hops involved in each procedure is bounded by log sub b N. A self-stabilizing peer-to-peer system based on the HyperTree is presented.", "Self-stabilizing distributed construction of expanders by the use of short random walks. We consider self-stabilizing and self-organizing distributed construction of a spanner that forms an expander. We advocate the importance of designing systems to be self-stabilizing and self-organizing, as designers cannot predict and address all fault scenarios and should address unexpected faults in the fastest possible way. We use folklore results to randomly define an expander graph. Given the randomized nature of our algorithms, a monitoring technique is presented for ensuring the desired results. The monitoring is based on the fact that expanders have a rapid mixing time and the possibility of examining the rapid mixing time by O(nlogn) short (O(log^4n) length) random walks even for non-regular expanders. We then use our results to construct a hierarchical sequence of spanders, each being an expander spanning the previous spander. Such a sequence of spanders may be used to achieve different quality of service (QoS) assurances in different applications. Several snap-stabilizing algorithms that are used for monitoring are presented, including: (i) Snap-stabilizing data-link, (ii) Snap-stabilizing message passing reset, and (iii) Snap-stabilizing token tracing." ] }
1408.0395
2017370456
In this paper we present and analyze HSkip+, a self-stabilizing overlay network for nodes with arbitrary heterogeneous bandwidths. HSkip+ has the same topology as the Skip+ graph proposed by [1] but its self-stabilization mechanism significantly outperforms the self-stabilization mechanism proposed for Skip+. Also, the nodes are now ordered according to their bandwidths and not according to their identifiers. Various other solutions have already been proposed for overlay networks with heterogeneous bandwidths, but they are not self-stabilizing. In addition to HSkip+ being self-stabilizing, its performance is on par with the best previous bounds on the time and work for joining or leaving a network of peers of logarithmic diameter and degree and arbitrary bandwidths. Also, the dilation and congestion for routing messages is on par with the best previous bounds for such networks, so that HSkip+ combines the advantages of both worlds. Our theoretical investigations are backed by simulations demonstrating that HSkip+ is indeed performing much better than Skip+ and working correctly under high churn rates.
Probabilistic approaches like ours have the advantage of better graph properties (e.g., a logarithmic expansion) compared to deterministic variants (e.g., @cite_17 ).
{ "cite_N": [ "@cite_17" ], "mid": [ "2061086123" ], "abstract": [ "In this paper we study the problem of designing searchable concurrent data structures with performance guarantees that can be used in a distributed environment where data elements are stored in a dynamically changing set of nodes. Searchable data structures are data structures that provide three basic operations: I NSERT , D ELETE , and S EARCH . In addition to searching for an exact match, we demand that for a data structure to be called \"searchable\", Search also has to be able to search for the closest successor or predecessor of a data item. Such a property has a tremendous advantage over just exact match, because it would allow to implement many data base applications.We are interested in finding a searchable concurrent data structure that has (1) a low degree, (2) requires a small amount of work for I NSERT and D ELETE operations, and (3) is able to handle concurrent search requests with low congestion and dilation.We present the first deterministic concurrent data structure, called Hyperring, that can fulfill all of these objectives in a polylogarithmic way. In fact, the Hyperring has a degree of O(log n), requires O(log3 n) work for I NSERT and D ELETE operations, and can handle concurrent search requests to random destinations, one request per node, with congestion and dilation O(log n) w.h.p.Most of the previous solutions for distributed environments are not searchable (in our sense) but only provide exact lookup, and those that are searchable do not have proofs about the congestion caused by concurrent search requests." ] }
1408.0320
2949592496
We apply crystal theory to affine Schubert calculus, Gromov-Witten invariants for the complete flag manifold, and the positroid stratification of the positive Grassmannian. We introduce operators on decompositions of elements in the type- @math affine Weyl group and produce a crystal reflecting the internal structure of the generalized Young modules whose Frobenius image is represented by stable Schubert polynomials. We apply the crystal framework to products of a Schur function with a @math -Schur function, consequently proving that a subclass of 3-point Gromov-Witten invariants of complete flag varieties for @math enumerate the highest weight elements under these operators. Included in this class are the Schubert structure constants in the (quantum) product of a Schubert polynomial with a Schur function @math for all @math . Another by-product gives a highest weight formulation for various fusion coefficients of the Verlinde algebra and for the Schubert decomposition of certain positroid classes.
Knutson formulated a conjecture for the quantum Grassmannian Littlewood--Richardson coefficients in terms of puzzles @cite_55 , as presented in @cite_29 . Coskun @cite_62 gave a positive geometric rule to compute the structure constants of the cohomology ring of two-step flag varieties in terms of Mondrian tableaux. A proof of the puzzle conjecture was recently given in @cite_40 . In the flag case, Fomin, Gelfand and Postnikov @cite_26 computed the quantum Monk rule, which was extended in @cite_15 to the quantum Pieri rule. Berg, Saliola and Serrano @cite_35 computed the Littlewood--Richardson coefficients for @math -Schur functions in the case which is equivalent to the quantum Monk rule. Denton @cite_34 proved a special @math -Littlewood--Richardson rule when there is a single term without multiplicity.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_62", "@cite_55", "@cite_29", "@cite_40", "@cite_15", "@cite_34" ], "mid": [ "2123992412", "1718951286", "", "2135893164", "1661441591", "2949133410", "1554718120", "" ], "abstract": [ "We prove that the Lam-Shimozono ''down operator'' on the affine Weyl group induces a derivation of the affine Fomin-Stanley subalgebra. We use this to verify a conjecture of Berg, Bergeron, Pon and Zabrocki describing the expansion of non-commutative k-Schur functions of ''near rectangles'' in the affine nilCoxeter algebra. Consequently, we obtain a combinatorial interpretation of the corresponding k-Littlewood-Richardson coefficients.", "where In is the ideal generated by symmetric polynomials in x1,... ,xn without constant term. Another, geometric, description of the cohomology ring of the flag manifold is based on the decomposition of Fln into Schubert cells. These are even-dimensional cells indexed by the elements w of the symmetric group Sn. The corresponding cohomology classes oa, called Schubert classes, form an additive basis in H* (Fln 2) . To relate the two descriptions, one would like to determine which elements of 2[xl, ... , Xn] In correspond to the Schubert classes under the isomorphism (1.1). This was first done in [2] (see also [8]) for a general case of an arbitrary complex semisimple Lie group. Later, Lascoux and Schiitzenberger [22] came up with a combinatorial version of this theory (for the type A) by introducing remarkable polynomial representatives of the Schubert classes oa called Schubert polynomials and denoted Gw. Recently, motivated by ideas that came from the string theory [31, 30], mathematicians defined, for any Kahler algebraic manifold X, the (small) quantum cohomology ring QH* (X, 2), which is a certain deformation of the classical cohomology ring (see, e.g., [28, 19, 14] and references therein). The additive structure of QH* (X , 2) is essentially the same as that of ordinary cohomology. In particular, QH* (Fln , Z) is canonically isomorphic, as an abelian group, to the tensor product H* (Fln , 2) (0 Z[ql,..., qn-1], where the qi are formal variables (deformation parameters). The multiplicative structure of the quantum cohomology is however", "", "", "We prove that any three-point genus zero Gromov-Witten invariant on a type A Grassmannian is equal to a classical intersection number on a two-step flag variety. We also give symplectic and orthogonal analogues of this result; in these cases the two-step flag variety is replaced by a sub-maximal isotropic Grassmannian. Our theorems are applied, in type A, to formulate a conjectural quantum Littlewood-Richardson rule, and in the other classical Lie types, to obtain new proofs of the main structure theorems for the quantum cohomology of Lagrangian and orthogonal Grassmannians.", "We prove a conjecture of Knutson asserting that the Schubert structure constants of the cohomology ring of a two-step flag variety are equal to the number of puzzles with specified border labels that can be created using a list of eight puzzle pieces. As a consequence, we obtain a puzzle formula for the Gromov-Witten invariants defining the small quantum cohomology ring of a Grassmann variety of type A. The proof of the conjecture proceeds by showing that the puzzle formula defines an associative product on the cohomology ring of the two-step flag variety. 
It is based on an explicit bijection of gashed puzzles that is analogous to the jeu de taquin algorithm but more complicated.", "We give an algebro-combinatorial proof of a general ver­ sion of Pieri's formula following the approach developed by Fomin and Kirillov in the paper \"Quadratic algebras, Dunkl elements, and Schu­ bert calculus.\" We prove several conjectures posed in their paper. As a consequence, a new proof of classical Pieri's formula for cohomol­ ogy of complex flag manifolds, and that of its analogue for quantum cohomology is obtained in this paper.", "" ] }
1408.0325
1892473335
With the advent of online social networks, recommender systems have become crucial for the success of many online applications and services due to their significant role in tailoring these applications to user-specific needs or preferences. Despite their increasing popularity, in general recommender systems suffer from the data sparsity and the cold-start problems. To alleviate these issues, in recent years there has been an upsurge of interest in exploiting social information such as trust relations among users along with the rating data to improve the performance of recommender systems. The main motivation for exploiting trust information in the recommendation process stems from the observation that the ideas we are exposed to and the choices we make are significantly influenced by our social context. However, in large user communities, in addition to trust relations, distrust relations also exist between users. For instance, in Epinions the concepts of personal "web of trust" and personal "block list" allow users to categorize their friends based on the quality of reviews into trusted and distrusted friends, respectively. In this paper, we propose a matrix factorization based model for recommendation in social rating networks that properly incorporates both trust and distrust relationships, aiming to improve the quality of recommendations and mitigate the data sparsity and the cold-start user issues. Through experiments on the Epinions data set, we show that our new algorithm outperforms its standard trust-enhanced or distrust-enhanced counterparts with respect to accuracy, thereby demonstrating the positive effect that incorporation of explicit distrust information can have on recommender systems.
Social network data has been widely investigated in memory-based approaches. These methods typically explore the social network to find a neighborhood of users trusted (directly or indirectly) by a given user and perform the recommendation by aggregating their ratings. They exploit the transitivity of trust and propagate trust to indirect neighbors in the social network @cite_2 @cite_29 @cite_35 @cite_25 @cite_27 @cite_42 @cite_31 .
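As an illustration of this memory-based idea, the sketch below propagates trust by a breadth-first search of the social network and aggregates the ratings of the reached users with a simple distance-based decay; the decay rule, the depth limit, and all names are illustrative assumptions, and the cited methods (e.g., MoleTrust- or random-walk-style approaches) differ in how exactly they weight propagated trust.

```python
from collections import deque

def predict_rating(user, item, ratings, trust, max_depth=3):
    """Trust-propagated, memory-based rating prediction (illustrative sketch).

    ratings : dict mapping user -> {item: rating}
    trust   : dict mapping user -> set of directly trusted users
    """
    # Breadth-first exploration of the trust network up to max_depth hops.
    dist = {user: 0}
    queue = deque([user])
    while queue:
        u = queue.popleft()
        if dist[u] >= max_depth:
            continue
        for v in trust.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)

    # Aggregate the ratings of directly or indirectly trusted users.
    num = den = 0.0
    for v, d in dist.items():
        if v == user or item not in ratings.get(v, {}):
            continue
        w = 1.0 / d                      # distance-based trust decay (an assumption)
        num += w * ratings[v][item]
        den += w
    return num / den if den > 0 else None

# Example: alice trusts bob, bob trusts carol.
# trust   = {"alice": {"bob"}, "bob": {"carol"}}
# ratings = {"bob": {"movie": 4.0}, "carol": {"movie": 2.0}}
# predict_rating("alice", "movie", ratings, trust)  ->  (1*4 + 0.5*2) / 1.5 = 10/3
```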
{ "cite_N": [ "@cite_35", "@cite_29", "@cite_42", "@cite_27", "@cite_2", "@cite_31", "@cite_25" ], "mid": [ "1976320242", "1632591701", "2135598826", "2285076186", "", "2054141820", "2084527756" ], "abstract": [ "Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data. We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks. In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.", "Recommender Systems based on Collaborative Filtering suggest to users items they might like, such as movies, songs, scientific papers, or jokes. Based on the ratings Based on the ratings provided by users about items, they first find users similar to the users receiving the recommendations and then suggest to her items appreciated in past by those like-minded users. However, given the ratable items are many and the ratings provided by each users only a tiny fraction, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network in order to find users that can be trusted by the active user. Items appreciated by these trustworthy users can then be recommended to the active user. An empirical evaluation on a large dataset crawled from Epinions.com shows that Recommender Systems that make use of trust information are the most effective in term of accuracy while preserving a good coverage. This is especially evident on users who provided few ratings, so that trust is able to alleviate the cold start problem and other weaknesses that beset Collaborative Filtering Recommender Systems.", "Recommender systems are becoming tools of choice to select the online information relevant to a given user. Collaborative filtering is the most popular approach to building recommender systems and has been successfully employed in many applications. With the advent of online social networks, the social network based approach to recommendation has emerged. This approach assumes a social network among users and makes recommendations for a user based on the ratings of the users that have direct or indirect social relations with the given user. 
As one of their major benefits, social network based approaches have been shown to reduce the problems with cold start users. In this paper, we explore a model-based approach for recommendation in social networks, employing matrix factorization techniques. Advancing previous work, we incorporate the mechanism of trust propagation into the model. Trust propagation has been shown to be a crucial phenomenon in the social sciences, in social network analysis and in trust-based recommendation. We have conducted experiments on two real life data sets, the public domain Epinions.com dataset and a much larger dataset that we have recently crawled from Flixster.com. Our experiments demonstrate that modeling trust propagation leads to a substantial increase in recommendation accuracy, in particular for cold start users.", "Recommender systems are becoming tools of choice to select the online information relevant to a given user. Collaborative filtering is the most popular approach to building recommender systems and has been successfully employed in many applications. With the advent of online social networks, the social network based approach to recommendation has emerged. This approach assumes a social network among users and makes recommendations for a user based on the ratings of the users who have direct or indirect social relations with the given user. As one of their major benefits, social network based approaches have been shown to reduce the problems with cold start users. In this paper, we explore a model-based approach for recommendation in social networks, employing matrix factorization techniques. Advancing previous work, we incorporate the mechanism of trust propagation into the model in a principled way. Trust propagation has been shown to be a crucial phenomenon in the social sciences, in social network analysis and in trust-based recommendation. We have conducted experiments on two real life data sets. Our experiments demonstrate that modeling trust propagation leads to a substantial increase in recommendation accuracy, in particular for cold start users.", "", "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.", "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods." ] }
1408.0325
1892473335
With the advent of online social networks, recommender systems have become crucial for the success of many online applications and services due to their significant role in tailoring these applications to user-specific needs or preferences. Despite their increasing popularity, recommender systems in general suffer from the data sparsity and the cold-start problems. To alleviate these issues, in recent years there has been an upsurge of interest in exploiting social information such as trust relations among users along with the rating data to improve the performance of recommender systems. The main motivation for exploiting trust information in the recommendation process stems from the observation that the ideas we are exposed to and the choices we make are significantly influenced by our social context. However, in large user communities, in addition to trust relations, distrust relations also exist between users. For instance, in Epinions the concepts of a personal "web of trust" and a personal "block list" allow users to categorize their friends, based on the quality of reviews, into trusted and distrusted friends, respectively. In this paper, we propose a matrix factorization based model for recommendation in social rating networks that properly incorporates both trust and distrust relationships, aiming to improve the quality of recommendations and to mitigate the data sparsity and cold-start user issues. Through experiments on the Epinions data set, we show that our new algorithm outperforms its standard trust-enhanced or distrust-enhanced counterparts with respect to accuracy, thereby demonstrating the positive effect that the incorporation of explicit distrust information can have on recommender systems.
In @cite_2 , a trust-aware collaborative filtering method for recommender systems is proposed. In this work, the collaborative filtering process is informed by the reputation of users, which is computed by propagating trust. @cite_35 proposed a method based on the random walk algorithm that utilizes social connections and other social annotations to improve recommendation accuracy. However, this method does not utilize the rating information and is not applicable to constructing a random walk graph on real data sets. TidalTrust @cite_1 performs a modified breadth-first search in the trust network to compute a prediction. To compute the trust value between users @math and @math who are not directly connected, TidalTrust aggregates the trust values between @math 's direct neighbors and @math , weighted by the direct trust values between @math and its direct neighbors.
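The aggregation step attributed to TidalTrust above can be written compactly as a recursive weighted average. The sketch below is only schematic: the toy trust graph and the unbounded recursion are assumptions, and the actual algorithm additionally restricts the search to the shortest, strongest paths.

```python
def tidal_trust(u, v, trust, visited=None):
    """Recursive sketch of a TidalTrust-style aggregation: the trust from u to v
    is the average of the neighbors' trust in v, weighted by u's direct trust
    in those neighbors."""
    if visited is None:
        visited = {u}
    direct = trust.get(u, {})
    if v in direct:                      # base case: a direct trust statement exists
        return direct[v]
    num, den = 0.0, 0.0
    for n, t_un in direct.items():
        if n in visited:
            continue
        t_nv = tidal_trust(n, v, trust, visited | {n})
        if t_nv is not None:
            num += t_un * t_nv
            den += t_un
    return num / den if den else None

trust = {"u": {"a": 0.8, "b": 0.4}, "a": {"v": 0.9}, "b": {"v": 0.5}}
print(tidal_trust("u", "v", trust))  # (0.8*0.9 + 0.4*0.5) / (0.8 + 0.4) = 0.766...
```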
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_2" ], "mid": [ "1976320242", "1601015633", "" ], "abstract": [ "Social network systems, like last.fm, play a significant role in Web 2.0, containing large amounts of multimedia-enriched data that are enhanced both by explicit user-provided annotations and implicit aggregated feedback describing the personal preferences of each user. It is also a common tendency for these systems to encourage the creation of virtual networks among their users by allowing them to establish bonds of friendship and thus provide a novel and direct medium for the exchange of data. We investigate the role of these additional relationships in developing a track recommendation system. Taking into account both the social annotation and friendships inherent in the social graph established among users, items and tags, we created a collaborative recommendation system that effectively adapts to the personal information needs of each user. We adopt the generic framework of Random Walk with Restarts in order to provide with a more natural and efficient way to represent social networks. In this work we collected a representative enough portion of the music social network last.fm, capturing explicitly expressed bonds of friendship of the user as well as social tags. We performed a series of comparison experiments between the Random Walk with Restarts model and a user-based collaborative filtering method using the Pearson Correlation similarity. The results show that the graph model system benefits from the additional information embedded in social knowledge. In addition, the graph model outperforms the standard collaborative filtering method.", "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user's opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.", "" ] }
1408.0325
1892473335
With the advent of online social networks, recommender systems have become crucial for the success of many online applications and services due to their significant role in tailoring these applications to user-specific needs or preferences. Despite their increasing popularity, recommender systems in general suffer from the data sparsity and the cold-start problems. To alleviate these issues, in recent years there has been an upsurge of interest in exploiting social information such as trust relations among users along with the rating data to improve the performance of recommender systems. The main motivation for exploiting trust information in the recommendation process stems from the observation that the ideas we are exposed to and the choices we make are significantly influenced by our social context. However, in large user communities, in addition to trust relations, distrust relations also exist between users. For instance, in Epinions the concepts of a personal "web of trust" and a personal "block list" allow users to categorize their friends, based on the quality of reviews, into trusted and distrusted friends, respectively. In this paper, we propose a matrix factorization based model for recommendation in social rating networks that properly incorporates both trust and distrust relationships, aiming to improve the quality of recommendations and to mitigate the data sparsity and cold-start user issues. Through experiments on the Epinions data set, we show that our new algorithm outperforms its standard trust-enhanced or distrust-enhanced counterparts with respect to accuracy, thereby demonstrating the positive effect that the incorporation of explicit distrust information can have on recommender systems.
TrustWalker @cite_25 combines trust-based and item-based recommendation to consider enough ratings without suffering from noisy data. Their experiments show that TrustWalker outperforms other existing memory-based approaches. Each random walk on the user trust graph returns a predicted rating for user @math on the target item @math . The probability of stopping is directly proportional to the similarity between the target item and the most similar rated item @math , weighted by a sigmoid function of the step size @math . The higher the similarity, the greater the probability of stopping and using the rating on item @math as the predicted rating for item @math . As the step size increases, the probability of continuing the walk decreases, so deeper walks become more likely to stop. Thus ratings by closer friends on similar items are considered more reliable than ratings on the target item by friends further away.
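A hedged sketch of one such walk is given below; the constants and the exact functional form used by TrustWalker may differ, so the stopping rule (best item similarity weighted by a sigmoid of the step count) and all names here should be read as assumed simplifications. In the full method, the final prediction aggregates the returns of many such walks.

```python
import math
import random

def stop_probability(max_item_sim, k):
    """Stopping rule sketch: proportional to the best item similarity,
    weighted by a sigmoid of the step count k (constants assumed)."""
    return max_item_sim * (1.0 / (1.0 + math.exp(-k / 2.0)))

def random_walk(user, item, trust_graph, ratings, sim, max_steps=6):
    """One walk over the trust graph (trust_graph: user -> iterable of trusted
    friends). Returns a rating on the target item, or on the most similar rated
    item of the node where the walk stops, or None."""
    current, k = user, 0
    while k < max_steps:
        rated = ratings.get(current, {})
        if item in rated:                               # direct rating found
            return rated[item]
        if rated:
            j = max(rated, key=lambda x: sim(item, x))  # most similar rated item
            if random.random() < stop_probability(sim(item, j), k):
                return rated[j]
        neighbours = list(trust_graph.get(current, []))
        if not neighbours:
            return None
        current = random.choice(neighbours)             # continue to a trusted friend
        k += 1
    return None
```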
{ "cite_N": [ "@cite_25" ], "mid": [ "2084527756" ], "abstract": [ "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods." ] }
1407.8289
2950272073
With advances in data collection technologies, tensor data is assuming increasing prominence in many applications and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices, however structural information within the tensors will be lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We proposed a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in the vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes.
Tensor factorizations are higher-order extensions of matrix factorization that elicit intrinsic multi-way structures and capture the underlying patterns in tensor data. These techniques have been widely used in diverse disciplines to analyze and process tensor data. A thorough survey of these techniques and their applications can be found in @cite_20 . The two most commonly used factorizations are CP and Tucker. CP is a special case of the Tucker decomposition in which the core array is forced to be (super)diagonal; it is thus more condensed than Tucker. In the supervised tensor learning setting, CP is more frequently applied to explore tensor data because of its uniqueness and simplicity @cite_4 @cite_7 @cite_19 @cite_14 . However, in these applications, CP factorization is used either for exploratory analysis or within linear tensor-based models. In this study, we employ the CP factorization to foster the use of kernel methods for supervised tensor learning.
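To make the stated relation between the two factorizations concrete, the following numpy sketch (a generic illustration, not code from any cited work) builds a rank-R CP tensor as a sum of outer products and verifies that it coincides with a Tucker reconstruction whose core is superdiagonal:

```python
import numpy as np

I, J, K, R = 4, 5, 6, 3
rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))

# CP: sum of R rank-one tensors a_r (outer) b_r (outer) c_r
cp = sum(np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r]) for r in range(R))

# Tucker with an R x R x R superdiagonal core reduces to the same tensor
core = np.zeros((R, R, R))
for r in range(R):
    core[r, r, r] = 1.0
tucker = np.einsum('pqr,ip,jq,kr->ijk', core, A, B, C)

print(np.allclose(cp, tucker))   # True: CP is Tucker with a superdiagonal core
```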
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_19", "@cite_20" ], "mid": [ "2136002544", "1522576632", "", "2031049553", "2024165284" ], "abstract": [ "Classical regression methods treat covariates as a vector and estimate a corresponding vector of regression coefficients. Modern applications in medical imaging generate covariates of more complex form such as multidimensional arrays (tensors). Traditional statistical and computational methods are proving insufficient for analysis of these high-throughput data due to their ultrahigh dimensionality as well as complex structure. In this article, we propose a new family of tensor regression models that efficiently exploit the special structure of tensor covariates. Under this framework, ultrahigh dimensionality is reduced to a manageable level, resulting in efficient estimation and prediction. A fast and highly scalable estimation algorithm is proposed for maximum likelihood estimation and its associated asymptotic properties are studied. Effectiveness of the new methods is demonstrated on both synthetic and real MRI imaging data. Supplementary materials for this article are available online.", "We propose a method for unsupervised linear feature extraction through tensor decomposition. The linear feature extraction can be formulated as a canonical polyadic decomposition (CPD) of a third-order tensor when transformation matrix is constrained to be equal to the Khatri-Rao product of two matrices. Therefore, standard algorithms for computing CPD decomposition can be used for feature extraction. The proposed method is validated on publicly available low-resolutionmass spectra of cancerous and non-cancerous samples. Obtained results indicate that this approach could be of practical importance in analysis of protein expression profiles.", "", "This paper aims to take general tensors as inputs for supervised learning. A supervised tensor learning (STL) framework is established for convex optimization based learning techniques such as support vector machines (SVM) and minimax probability machines (MPM). Within the STL framework, many conventional learning machines can be generalized to take n sup th -order tensors as inputs. We also study the applications of tensors to learning machine design and feature extraction by linear discriminant analysis (LDA). Our method for tensor based feature extraction is named the tenor rank-one discriminant analysis (TR1DA). These generalized algorithms have several advantages: 1) reduce the curse of dimension problem in machine learning and data mining; 2) avoid the failure to converge; and 3) achieve better separation between the different categories of samples. As an example, we generalize MPM to its STL version, which is named the tensor MPM (TMPM). TMPM learns a series of tensor projections iteratively. It is then evaluated against the original MPM. Our experiments on a binary classification problem show that TMPM significantly outperforms the original MPM.", "This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or @math -way array. Decompositions of higher-order tensors (i.e., @math -way arrays with @math ) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. 
Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors." ] }
1407.8289
2950272073
With advances in data collection technologies, tensor data is assuming increasing prominence in many applications and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices, however structural information within the tensors will be lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We proposed a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in the vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes.
Supervised tensor learning has been extensively studied in recent years @cite_21 @cite_17 @cite_5 @cite_19 @cite_14 . Most previous work has concentrated on learning linear tensor-based models, whereas the problem of how to build nonlinear models directly on tensor data has not been well studied. A first attempt in this direction focused on second-order tensors and led to a non-convex optimization problem @cite_13 . Subsequently, the authors claimed that it can be extended to deal with higher-order tensors at the cost of a higher computational complexity, and proposed a factor kernel, based upon matrix unfoldings, for tensors of arbitrary order except for square matrices @cite_8 . Along the same lines, @cite_12 introduced a cumulant-based kernel approach for the classification of multichannel signals, and @cite_10 presented a kernel tensor partial least squares method for the regression of limb movements. A drawback of the approaches in @cite_8 @cite_12 @cite_10 is that they can only capture the one-way relationships within the tensor data, because the tensors are unfolded into matrices; the multi-way structures within the tensor data are thus already lost before the kernel construction process. Different from these methods, we aim to directly exploit the algebraic structure of the tensor to study structure-preserving kernels.
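The contrast drawn above can be illustrated with two toy kernels: one that vectorizes (unfolds) the tensors before comparison, discarding all multi-way structure, and one that compares mode-wise CP factor matrices so that each mode keeps its own geometry. Both are illustrative stand-ins rather than the kernels of the cited papers, and the factor-based variant assumes the CP factors have already been computed with some off-the-shelf routine and are of matching shapes.

```python
import numpy as np

def flatten_kernel(X, Y, gamma=0.1):
    """Naive approach: vectorize the tensors, then apply an RBF kernel.
    All multi-way structure is discarded before the comparison."""
    x, y = X.ravel(), Y.ravel()
    return np.exp(-gamma * np.sum((x - y) ** 2))

def cp_factor_kernel(factors_X, factors_Y, gamma=0.1):
    """Structure-aware sketch: each tensor is represented by its CP factor
    matrices (one per mode, assumed to have identical shapes); the modes are
    compared separately and the per-mode similarities are multiplied, so each
    mode contributes its own geometry. Only the idea, not a cited kernel."""
    k = 1.0
    for Fx, Fy in zip(factors_X, factors_Y):
        k *= np.exp(-gamma * np.linalg.norm(Fx - Fy) ** 2)   # Frobenius distance
    return k
```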
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_10", "@cite_21", "@cite_19", "@cite_5", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2136002544", "1988001416", "2136818001", "1547296758", "2031049553", "2039588376", "", "", "1970176196" ], "abstract": [ "Classical regression methods treat covariates as a vector and estimate a corresponding vector of regression coefficients. Modern applications in medical imaging generate covariates of more complex form such as multidimensional arrays (tensors). Traditional statistical and computational methods are proving insufficient for analysis of these high-throughput data due to their ultrahigh dimensionality as well as complex structure. In this article, we propose a new family of tensor regression models that efficiently exploit the special structure of tensor covariates. Under this framework, ultrahigh dimensionality is reduced to a manageable level, resulting in efficient estimation and prediction. A fast and highly scalable estimation algorithm is proposed for maximum likelihood estimation and its associated asymptotic properties are studied. Effectiveness of the new methods is demonstrated on both synthetic and real MRI imaging data. Supplementary materials for this article are available online.", "Tensor-based techniques for learning allow one to exploit the structure of carefully chosen representations of data. This is a desirable feature in particular when the number of training patterns is small which is often the case in areas such as biosignal processing and chemometrics. However, the class of tensor-based models is somewhat restricted and might suffer from limited discriminative power. On a different track, kernel methods lead to flexible nonlinear models that have been proven successful in many different contexts. Nonetheless, a naive application of kernel methods does not exploit structural properties possessed by the given tensorial representations. The goal of this work is to go beyond this limitation by introducing non-parametric tensor-based models. The proposed framework aims at improving the discriminative power of supervised tensor-based models while still exploiting the structural information embodied in the data. We begin by introducing a feature space formed by multilinear functionals. The latter can be considered as the infinite dimensional analogue of tensors. Successively we show how to implicitly map input patterns in such a feature space by means of kernels that exploit the algebraic structure of data tensors. The proposed tensorial kernel links to the MLSVD and features an interesting invariance property; the approach leads to convex optimization and fits into the same primal-dual framework underlying SVM-like algorithms.", "We present a new supervised tensor regression method based on multi-way array decompositions and kernel machines. The main issue in the development of a kernel-based framework for tensorial data is that the kernel functions have to be defined on tensor-valued input, which here is defined based on multi-mode product kernels and probabilistic generative models. This strategy enables taking into account the underlying multilinear structure during the learning process. Based on the defined kernels for tensorial data, we develop a kernel-based tensor partial least squares approach for regression. 
The effectiveness of our method is demonstrated by a real-world application, i.e., the reconstruction of 3D movement trajectories from electrocorticography signals recorded from a monkey brain.", "", "This paper aims to take general tensors as inputs for supervised learning. A supervised tensor learning (STL) framework is established for convex optimization based learning techniques such as support vector machines (SVM) and minimax probability machines (MPM). Within the STL framework, many conventional learning machines can be generalized to take n sup th -order tensors as inputs. We also study the applications of tensors to learning machine design and feature extraction by linear discriminant analysis (LDA). Our method for tensor based feature extraction is named the tenor rank-one discriminant analysis (TR1DA). These generalized algorithms have several advantages: 1) reduce the curse of dimension problem in machine learning and data mining; 2) avoid the failure to converge; and 3) achieve better separation between the different categories of samples. As an example, we generalize MPM to its STL version, which is named the tensor MPM (TMPM). TMPM learns a series of tensor projections iteratively. It is then evaluated against the original MPM. Our experiments on a binary classification problem show that TMPM significantly outperforms the original MPM.", "In this paper we address the two-class classification problem within the tensor-based framework, by formulating the Support Tucker Machines (STuMs). More precisely, in the proposed STuMs the weights parameters are regarded to be a tensor, calculated according to the Tucker tensor decomposition as the multiplication of a core tensor with a set of matrices, one along each mode. We further extend the proposed STuMs to the Σ Σ w STuMs, in order to fully exploit the information offered by the total or the within-class covariance matrix and whiten the data, thus providing in-variance to affine transformations in the feature space. We formulate the two above mentioned problems in such a way that they can be solved in an iterative manner, where at each iteration the parameters corresponding to the projections along a single tensor mode are estimated by solving a typical Support Vector Machine-type problem. The superiority of the proposed methods in terms of classification accuracy is illustrated on the problems of gait and action recognition.", "", "", "In this paper, we exploit the advantages of tensorial representations and propose several tensor learning models for regression. The model is based on the canonical parallel-factor decomposition of tensors of multiple modes and allows the simultaneous projections of an input tensor to more than one direction along each mode. Two empirical risk functions are studied, namely, the square loss and e-insensitive loss functions. The former leads to higher rank tensor ridge regression (TRR), and the latter leads to higher rank support tensor regression (STR), both formulated using the Frobenius norm for regularization. We also use the group-sparsity norm for regularization, favoring in that way the low rank decomposition of the tensorial weight. In that way, we achieve the automatic selection of the rank during the learning process and obtain the optimal-rank TRR and STR. 
Experiments conducted for the problems of head-pose, human-age, and 3-D body-pose estimations using real data from publicly available databases, verified not only the superiority of tensors over their vector counterparts but also the efficiency of the proposed algorithms." ] }
1407.8289
2950272073
With advances in data collection technologies, tensor data is assuming increasing prominence in many applications and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices, however structural information within the tensors will be lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We proposed a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in the vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes.
Another recent work by @cite_2 , although it does not directly perform supervised tensor learning, is worth mentioning in this context. They introduced so-called tensor kernels to analyze neuroimaging data from multiple sources, and demonstrated that the tensor product feature space is useful for modeling interactions between feature sets from different domains. In this study, we make use of the tensor product feature space to derive our kernels via the incorporation of the CP model. The tensor kernels can be cast as a special case of our framework.
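For completeness, the tensor (product) kernel idea referred to here can be sketched as the product of kernels defined on the individual sources, which is equivalent to an inner product in the tensor product of the two per-source feature spaces. The kernels and toy data below are generic placeholders, not the cited implementation:

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Generic RBF kernel on vectors (placeholder for any per-source kernel)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def tensor_product_kernel(x1, z1, x2, z2):
    """k((x1, z1), (x2, z2)) = k_X(x1, x2) * k_Z(z1, z2): a product of per-source
    kernels, i.e. an inner product in the tensor product of the two feature
    spaces, so interactions between the two feature sets are modeled implicitly."""
    return rbf(x1, x2) * rbf(z1, z2)

# toy usage with two "sources" per sample (e.g. two imaging modalities)
print(tensor_product_kernel([0.1, 0.2], [1.0, 0.0], [0.0, 0.2], [0.9, 0.1]))
```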
{ "cite_N": [ "@cite_2" ], "mid": [ "2072248825" ], "abstract": [ "The tensor kernel has been used across the machine learning literature for a number of purposes and applications, due to its ability to incorporate samples from multiple sources into a joint kernel defined feature space. Despite these uses, there have been no attempts made towards investigating the resulting tensor weight in respect to the contribution of the individual tensor sources. Motivated by the increase in the current availability of Neuroscience data, specifically for two-source analyses, we propose a novel approach for decomposing the resulting tensor weight into its two components without accessing the feature space. We demonstrate our method and give experimental results on paired fMRI image-stimuli data." ] }
1407.8363
1456179565
Nowadays, routing proposals must deal with a panoply of heterogeneous devices, intermittent connectivity, and the users’ constant need for communication, even in rather challenging networking scenarios. Thus, we propose a Social-aware Content-based Opportunistic Routing Protocol, SCORP, that considers the users’ social interaction and their interests to improve data delivery in urban, dense scenarios. Through simulations, using synthetic mobility and human traces scenarios, we compare the performance of our solution against other two social-aware solutions, dLife and Bubble Rap, and the social-oblivious Spray and Wait, in order to show that the combination of social awareness and content knowledge can be beneficial when disseminating data in challenged networks.
Routing in opportunistic networks must be capable of dealing with occasional contacts, intermittent connectivity, highly mobile nodes, power- and storage-constrained devices, and the possible nonexistence of end-to-end paths. In the last couple of years, different social-aware opportunistic routing solutions have emerged @cite_8 , trying to exploit the less volatile graph created by social proximity metrics, as opposed to metrics that merely reflect the mobility behavior of nodes.
{ "cite_N": [ "@cite_8" ], "mid": [ "189668218" ], "abstract": [ "Since users move around based on social relationships and interests, their movement patterns represent how nodes are socially connected (i.e., nodes with strong social ties, nodes that meet occasionally by sharing the same working environment). This means that social interactions reflect personal relationships (e.g., family, friends, co-workers, and passers-by) that may be translated into statistical contact opportunities within and between social groups over time. Such contact opportunities may be exploited to ensure good data dissemination and retrieval, even in the presence of intermittent connectivity. Thus, in the last years, a new routing trend based on social similarity emerged where social relationships, interests, popularity, and among other social characteristics are used to improve opportunistic routing (i.e., routing able to take advantage on intermittent contacts). In this chapter, the reader will learn about the different approaches related to opportunistic routing, focusing on social-aware approaches, and how such approaches make use of social information derived from opportunistic contacts to improve data forwarding. Additionally, a brief overview on the existing taxonomies for opportunistic routing as well as a new one, based on the new social trend, are provided along with a set of experiments in scenarios based on synthetic mobility models and human traces to show the potential of social-aware solutions." ] }
1407.8363
1456179565
Nowadays, routing proposals must deal with a panoply of heterogeneous devices, intermittent connectivity, and the users’ constant need for communication, even in rather challenging networking scenarios. Thus, we propose a Social-aware Content-based Opportunistic Routing Protocol, SCORP, that considers the users’ social interaction and their interests to improve data delivery in urban, dense scenarios. Through simulations, using synthetic mobility and human traces scenarios, we compare the performance of our solution against other two social-aware solutions, dLife and Bubble Rap, and the social-oblivious Spray and Wait, in order to show that the combination of social awareness and content knowledge can be beneficial when disseminating data in challenged networks.
With content now being introduced into social-aware opportunistic routing, proposals can be classified as content-oblivious or content-oriented. Among the social-aware content-oblivious proposals, @cite_3 , @cite_2 , and @cite_11 are close in essence to SCORP: all exploit social proximity to devise forwarding schemes.
{ "cite_N": [ "@cite_11", "@cite_3", "@cite_2" ], "mid": [ "1966559656", "2039157284", "2012927050" ], "abstract": [ "Context information can be used to streamline routing decisions in opportunistic networks. We propose a novel social context-based routing scheme that considers both the spatial and the temporal dimensions of the activity of mobile nodes to predict the mobility patterns of nodes based on the BackPropagation Neural Networks model.", "The increasing penetration of smart devices with networking capability form novel networks. Such networks, also referred as pocket switched networks (PSNs), are intermittently connected and represent a paradigm shift of forwarding data in an ad hoc manner. The social structure and interaction of users of such devices dictate the performance of routing protocols in PSNs. To that end, social information is an essential metric for designing forwarding algorithms for such types of networks. Previous methods relied on building and updating routing tables to cope with dynamic network conditions. On the downside, it has been shown that such approaches end up being cost ineffective due to the partial capture of the transient network behavior. A more promising approach would be to capture the intrinsic characteristics of such networks and utilize them in the design of routing algorithms. In this paper, we exploit two social and structural metrics, namely centrality and community, using real human mobility traces. The contributions of this paper are two-fold. First, we design and evaluate BUBBLE, a novel social-based forwarding algorithm, that utilizes the aforementioned metrics to enhance delivery performance. Second, we empirically show that BUBBLE can substantially improve forwarding performance compared to a number of previously proposed algorithms including the benchmarking history-based PROPHET algorithm, and social-based forwarding SimBet algorithm.", "Opportunistic routing is being investigated to enable the proliferation of low-cost wireless applications. A recent trend is looking at social structures, inferred from the social nature of human mobility, to bring messages close to a destination. To have a better picture of social structures, social-based opportunistic routing solutions should consider the dynamism of users' behavior resulting from their daily routines. We address this challenge by presenting dLife, a routing algorithm able to capture the dynamics of the network represented by time-evolving social ties between pair of nodes. Experimental results based on synthetic mobility models and real human traces show that dLife has better delivery probability, latency, and cost than proposals based on social structures." ] }
1407.8363
1456179565
Nowadays, routing proposals must deal with a panoply of heterogeneous devices, intermittent connectivity, and the users’ constant need for communication, even in rather challenging networking scenarios. Thus, we propose a Social-aware Content-based Opportunistic Routing Protocol, SCORP, that considers the users’ social interaction and their interests to improve data delivery in urban, dense scenarios. Through simulations, using synthetic mobility and human traces scenarios, we compare the performance of our solution against other two social-aware solutions, dLife and Bubble Rap, and the social-oblivious Spray and Wait, in order to show that the combination of social awareness and content knowledge can be beneficial when disseminating data in challenged networks.
Regarding the social-aware content-oriented proposals, @cite_9 and @cite_5 also take into account the content and the users' interest in it.
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2112263843", "2170904274" ], "abstract": [ "In this paper we present and evaluate ContentPlace, a data dissemination system for opportunistic networks, i.e., mobile networks in which stable simultaneous multi-hop paths between communication endpoints cannot be provided. We consider a scenario in which users both produce and consume data objects. ContentPlace takes care of moving and replicating data objects in the network such that interested users receive them despite possible long disconnections, partitions, etc. Thanks to ContentPlace, data producers and consumers are completely decoupled, and might be never connected to the network at the same point in time. The key feature of ContentPlace is learning and exploiting information about the social behaviour of the users to drive the data dissemination process. This allows ContentPlace to be more efficient both in terms of data delivery and in terms of resource usage with respect to reference alternative solutions. The performance of ContentPlace is thoroughly investigated both through simulation and analytical models.", "Applications involving the dissemination of information directly relevant to humans (e.g., service advertising, news spreading, environmental alerts) often rely on publish-subscribe, in which the network delivers a published message only to the nodes whose subscribed interests match it. In principle, publish- subscribe is particularly useful in mobile environments, since it minimizes the coupling among communication parties. However, to the best of our knowledge, none of the (few) works that tackled publish-subscribe in mobile environments has yet addressed intermittently-connected human networks. Socially-related people tend to be co-located quite regularly. This characteristic can be exploited to drive forwarding decisions in the interest-based routing layer supporting the publish-subscribe network, yielding not only improved performance but also the ability to overcome high rates of mobility and long-lasting disconnections. In this paper we propose SocialCast, a routing framework for publish-subscribe that exploits predictions based on metrics of social interaction (e.g., patterns of movements among communities) to identify the best information carriers. We highlight the principles underlying our protocol, illustrate its operation, and evaluate its performance using a mobility model based on a social network validated with real human mobility traces. The evaluation shows that prediction of colocation and node mobility allow for maintaining a very high and steady event delivery with low overhead and latency, despite the variation in density, number of replicas per message or speed." ] }
1407.8363
1456179565
Nowadays, routing proposals must deal with a panoply of heterogeneous devices, intermittent connectivity, and the users’ constant need for communication, even in rather challenging networking scenarios. Thus, we propose a Social-aware Content-based Opportunistic Routing Protocol, SCORP, that considers the users’ social interaction and their interests to improve data delivery in urban, dense scenarios. Through simulations, using synthetic mobility and human traces scenarios, we compare the performance of our solution against other two social-aware solutions, dLife and Bubble Rap, and the social-oblivious Spray and Wait, in order to show that the combination of social awareness and content knowledge can be beneficial when disseminating data in challenged networks.
SocialCast considers the interest shared among nodes and devises a utility function that captures a node's future co-location (with others sharing the same interest) and the change in its connectivity degree. Thus, the utility functions used by SocialCast measure how good a message carrier a node can be regarding a given interest. Moreover, SocialCast is based on the publish-subscribe paradigm, where users broadcast their interests, and content is disseminated to interested parties and/or to new carriers with higher utility. Since the performance of SocialCast depends on the co-location assumption (i.e., that nodes with the same interests spend quite some time together), the proposal may be compromised in scenarios where such an assumption does not hold @cite_7 .
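A rough, assumed rendering of the kind of carrier-selection utility described above is sketched below; the precise predictors and weights used by SocialCast are not reproduced here, and w_coloc, w_degree, and the threshold are purely illustrative parameters:

```python
def carrier_utility(colocation_prob, degree_change, w_coloc=0.8, w_degree=0.2):
    """Utility of a node as a carrier for a given interest: an (assumed) weighted
    combination of its predicted co-location with nodes sharing that interest
    and the normalized change in its connectivity degree."""
    return w_coloc * colocation_prob + w_degree * degree_change

def select_next_carriers(current_utility, neighbour_utilities, threshold=0.0):
    """Replicate the message to neighbours whose utility for the interest
    exceeds the current carrier's by more than a threshold."""
    return [n for n, u in neighbour_utilities.items()
            if u - current_utility > threshold]

# toy usage: neighbour B is a better carrier than the current node
neighbours = {"A": carrier_utility(0.2, 0.1), "B": carrier_utility(0.9, 0.4)}
print(select_next_carriers(carrier_utility(0.3, 0.0), neighbours))  # ['B']
```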
{ "cite_N": [ "@cite_7" ], "mid": [ "2033411413" ], "abstract": [ "In opportunistic networks, end-to-end paths between two communicating nodes are rarely available. In such situations, the nodes might still copy and forward messages to nodes that are more likely to meet the destination. The question is which forwarding algorithm offers the best trade off between cost (number of message replicas) and rate of successful message delivery. We address this challenge by developing the PeopleRank approach in which nodes are ranked using a tunable weighted social information. Similar to the PageRank idea, PeopleRank gives higher weight to nodes if they are socially connected to important other nodes of the network. We develop centralized and distributed variants for the computation of PeopleRank. We present an evaluation using real mobility traces of nodes and their social interactions to show that PeopleRank manages to deliver messages with near optimal success rate (close to Epidemic Routing) while reducing the number of message retransmissions by 50 compared to Epidemic Routing." ] }
1407.8363
1456179565
Nowadays, routing proposals must deal with a panoply of heterogeneous devices, intermittent connectivity, and the users’ constant need for communication, even in rather challenging networking scenarios. Thus, we propose a Social-aware Content-based Opportunistic Routing Protocol, SCORP, that considers the users’ social interaction and their interests to improve data delivery in urban, dense scenarios. Through simulations, using synthetic mobility and human traces scenarios, we compare the performance of our solution against other two social-aware solutions, dLife and Bubble Rap, and the social-oblivious Spray and Wait, in order to show that the combination of social awareness and content knowledge can be beneficial when disseminating data in challenged networks.
Besides taking into account the interest that users have in the content, @cite_5 also considers information about the users' social relationships to improve content availability. To that end, a utility function is computed for each data object, considering the access probability of the object and the cost involved in accessing it, as well as the user's social strength towards the different communities that he/she belongs to and/or has interacted with. The idea is to have users fetch the data objects that maximize the utility function with respect to the local cache limitations, choosing objects that are of interest to the users themselves and that can be further disseminated in the communities with which they have strong social ties.
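The cache-management policy described above amounts to a knapsack-style selection: choose the set of objects that maximizes total utility subject to the cache size. A simple greedy approximation is sketched below; the object utilities and sizes are assumed inputs, whereas the actual utility in ContentPlace combines access probability, access cost, and social strength towards communities:

```python
def select_objects(objects, cache_size):
    """objects: dict mapping object id -> (utility, size).
    Greedily fill the cache by utility density (utility per unit of size)."""
    chosen, used = [], 0
    by_density = sorted(objects.items(),
                        key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for oid, (utility, size) in by_density:
        if used + size <= cache_size:
            chosen.append(oid)
            used += size
    return chosen

# toy usage
objs = {"news": (0.9, 2), "music": (0.6, 3), "video": (0.8, 4)}
print(select_objects(objs, cache_size=5))   # ['news', 'music']
```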
{ "cite_N": [ "@cite_5" ], "mid": [ "2112263843" ], "abstract": [ "In this paper we present and evaluate ContentPlace, a data dissemination system for opportunistic networks, i.e., mobile networks in which stable simultaneous multi-hop paths between communication endpoints cannot be provided. We consider a scenario in which users both produce and consume data objects. ContentPlace takes care of moving and replicating data objects in the network such that interested users receive them despite possible long disconnections, partitions, etc. Thanks to ContentPlace, data producers and consumers are completely decoupled, and might be never connected to the network at the same point in time. The key feature of ContentPlace is learning and exploiting information about the social behaviour of the users to drive the data dissemination process. This allows ContentPlace to be more efficient both in terms of data delivery and in terms of resource usage with respect to reference alternative solutions. The performance of ContentPlace is thoroughly investigated both through simulation and analytical models." ] }
1407.8368
2012927050
Opportunistic routing is being investigated to enable the proliferation of low-cost wireless applications. A recent trend is looking at social structures, inferred from the social nature of human mobility, to bring messages close to a destination. To have a better picture of social structures, social-based opportunistic routing solutions should consider the dynamism of users' behavior resulting from their daily routines. We address this challenge by presenting dLife, a routing algorithm able to capture the dynamics of the network represented by time-evolving social ties between pair of nodes. Experimental results based on synthetic mobility models and real human traces show that dLife has better delivery probability, latency, and cost than proposals based on social structures.
Most of the existing opportunistic routing solutions are based on some level of replication @cite_0 . Among these proposals, solutions emerge that rely on different representations of social similarity: i) labeling users according to their social groups (e.g., @cite_8 ); ii) looking at the importance (i.e., popularity) of nodes (e.g., @cite_4 ); iii) combining the notions of community and centrality (e.g., @cite_9 and @cite_2 ); and iv) considering interests that users have in common (e.g., @cite_10 ).
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_0", "@cite_2", "@cite_10" ], "mid": [ "", "2161750835", "2082674813", "2174433104", "2039157284", "2170904274" ], "abstract": [ "", "It is widely believed that identifying communities in an ad hoc mobile communications system, such as a pocket switched network, can reduce the amount of traffic created when forwarding messages, but there has not been any empirical evidence available to support this assumption to date. In this paper, we show through use of real experimental human mobility data, how using a small label, identifying users according to their affiliation, can bring a large improvement in forwarding performance, in term of both delivery ratio and cost", "Message delivery in sparse Mobile Ad hoc Networks (MANETs) is difficult due to the fact that the network graph is rarely (if ever) connected. A key challenge is to find a route that can provide good delivery performance and low end-to-end delay in a disconnected network graph where nodes may move freely. This paper presents a multidisciplinary solution based on the consideration of the so-called small world dynamics which have been proposed for economy and social studies and have recently revealed to be a successful approach to be exploited for characterising information propagation in wireless networks. To this purpose, some bridge nodes are identified based on their centrality characteristics, i.e., on their capability to broker information exchange among otherwise disconnected nodes. Due to the complexity of the centrality metrics in populated networks the concept of ego networks is exploited where nodes are not required to exchange information about the entire network topology, but only locally available information is considered. Then SimBet Routing is proposed which exploits the exchange of pre-estimated \"betweenness' centrality metrics and locally determined social \"similarity' to the destination node. We present simulations using real trace data to demonstrate that SimBet Routing results in delivery performance close to Epidemic Routing but with significantly reduced overhead. Additionally, we show that SimBet Routing outperforms PRoPHET Routing, particularly when the sending and receiving nodes have low connectivity.", "Due to the increased capabilities of mobile devices and through wireless opportunistic contacts, users can experience new ways to share and retrieve content anywhere and anytime, even in the presence of link intermittency. Due to the significant number of available routing solutions, it is difficult to understand which one has the best performance, since all of them follow a different evaluation method. This paper proposes an assessment model, based on a new taxonomy, which comprises an evaluation guideline with performance metrics and experimental setup to aid designers in evaluating solutions through fair comparisons. Simulation results based on the proposed model revisit the performance results published by Epidemic, PROPHET, and BubbleRap, showing how they perform under the same set of metrics and scenario.", "The increasing penetration of smart devices with networking capability form novel networks. Such networks, also referred as pocket switched networks (PSNs), are intermittently connected and represent a paradigm shift of forwarding data in an ad hoc manner. The social structure and interaction of users of such devices dictate the performance of routing protocols in PSNs. 
To that end, social information is an essential metric for designing forwarding algorithms for such types of networks. Previous methods relied on building and updating routing tables to cope with dynamic network conditions. On the downside, it has been shown that such approaches end up being cost ineffective due to the partial capture of the transient network behavior. A more promising approach would be to capture the intrinsic characteristics of such networks and utilize them in the design of routing algorithms. In this paper, we exploit two social and structural metrics, namely centrality and community, using real human mobility traces. The contributions of this paper are two-fold. First, we design and evaluate BUBBLE, a novel social-based forwarding algorithm, that utilizes the aforementioned metrics to enhance delivery performance. Second, we empirically show that BUBBLE can substantially improve forwarding performance compared to a number of previously proposed algorithms including the benchmarking history-based PROPHET algorithm, and social-based forwarding SimBet algorithm.", "Applications involving the dissemination of information directly relevant to humans (e.g., service advertising, news spreading, environmental alerts) often rely on publish-subscribe, in which the network delivers a published message only to the nodes whose subscribed interests match it. In principle, publish- subscribe is particularly useful in mobile environments, since it minimizes the coupling among communication parties. However, to the best of our knowledge, none of the (few) works that tackled publish-subscribe in mobile environments has yet addressed intermittently-connected human networks. Socially-related people tend to be co-located quite regularly. This characteristic can be exploited to drive forwarding decisions in the interest-based routing layer supporting the publish-subscribe network, yielding not only improved performance but also the ability to overcome high rates of mobility and long-lasting disconnections. In this paper we propose SocialCast, a routing framework for publish-subscribe that exploits predictions based on metrics of social interaction (e.g., patterns of movements among communities) to identify the best information carriers. We highlight the principles underlying our protocol, illustrate its operation, and evaluate its performance using a mobility model based on a social network validated with real human mobility traces. The evaluation shows that prediction of colocation and node mobility allow for maintaining a very high and steady event delivery with low overhead and latency, despite the variation in density, number of replicas per message or speed." ] }