aid: string (lengths 9–15)
mid: string (lengths 7–10)
abstract: string (lengths 78–2.56k)
related_work: string (lengths 92–1.77k)
ref_abstract: dict
1404.6230
2016552385
f-divergence estimation is an important problem in the fields of information theory, machine learning, and statistics. While several divergence estimators exist, relatively few of their convergence rates are known. We derive the MSE convergence rate for a density plug-in estimator of f-divergence. Then, by applying the theory of optimally weighted ensemble estimation, we derive a divergence estimator with a convergence rate of O(1/T) that is simple to implement and performs well in high dimensions. We validate our theoretical results with experiments.
Singh and Póczos @cite_2 provided an estimator for Rényi- @math divergences that uses a “mirror image” kernel density estimator. They prove a convergence rate of @math when @math for each of the densities. However, this method requires several computations at each boundary of the support of the densities, which become difficult to implement as @math grows large. This method also requires knowledge of the support of the densities, which may not be available for some problems.
{ "cite_N": [ "@cite_2" ], "mid": [ "2963881189" ], "abstract": [ "Estimating divergences in a consistent way is of great importance in many machine learning tasks. Although this is a fundamental problem in nonparametric statistics, to the best of our knowledge there has been no finite sample exponential inequality convergence bound derived for any divergence estimators. The main contribution of our work is to provide such a bound for an estimator of Renyi-α divergence for a smooth Holder class of densities on the d-dimensional unit cube [0, 1]d. We also illustrate our theoretical results with a numerical experiment." ] }
1404.6570
2950549990
In this work, we present EAGr, a system for supporting large numbers of continuous neighborhood-based ("ego-centric") aggregate queries over large, highly dynamic, and rapidly evolving graphs. Examples of such queries include computation of personalized, tailored trends in social networks, anomaly event detection in financial transaction networks, local search and alerts in spatio-temporal networks, to name a few. Key challenges in supporting such continuous queries include high update rates typically seen in these situations, large numbers of queries that need to be executed simultaneously, and stringent low latency requirements. We propose a flexible, general, and extensible in-memory framework for executing different types of ego-centric aggregate queries over large dynamic graphs with low latencies. Our framework is built around the notion of an aggregation overlay graph, a pre-compiled data structure that encodes the computations to be performed when an update query is received. The overlay graph enables sharing of partial aggregates across multiple ego-centric queries (corresponding to the nodes in the graph), and also allows partial pre-computation of the aggregates to minimize the query latencies. We present several highly scalable techniques for constructing an overlay graph given an aggregation function, and also design incremental algorithms for handling structural changes to the underlying graph. We also present an optimal, polynomial-time algorithm for making the pre-computation decisions given an overlay graph, and evaluate an approach to incrementally adapt those decisions as the workload changes. Although our approach is naturally parallelizable, we focus on a single-machine deployment and show that our techniques can easily handle graphs of size up to 320 million nodes and edges, and achieve update query throughputs of over 500K/s using a single, powerful machine.
Network analysis, sometimes called network science, has been a very active area of research over the last decade, with much work on network evolution and information diffusion models, community detection, centrality computation, and so on. We refer the reader to well-known surveys and textbooks on that topic (see, e.g., @cite_48 @cite_59 @cite_38 @cite_47 ). Increasing availability of temporally annotated network data has led many researchers to focus on designing analytical models that capture how a network evolves, with a primary focus on social networks and the Web (see, e.g., @cite_48 @cite_20 @cite_4 @cite_6 ). There is also much work on understanding how communities evolve, identifying key individuals, locating hidden groups, identifying changes, and visualizing the temporal evolution of dynamic networks. Graph data mining is another well-researched area, where the goal is to find relevant structural patterns present in the graph @cite_25 @cite_44 @cite_34 @cite_39 . Most of that prior work, however, focuses on off-line analysis of static datasets.
{ "cite_N": [ "@cite_38", "@cite_4", "@cite_48", "@cite_6", "@cite_39", "@cite_44", "@cite_59", "@cite_47", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "", "", "631140850", "", "1592376869", "2170726034", "", "2070722739", "1641749581", "2130426318", "" ], "abstract": [ "", "", "Managing and Mining Graph Data is a comprehensive survey book in graph data analytics. It contains extensive surveys on important graph topics such as graph languages, indexing, clustering, data generation, pattern mining, classification, keyword search, pattern matching, and privacy. It also studies a number of domain-specific scenarios such as stream mining, web graphs, social networks, chemical and biological data. The chapters are written by leading researchers, and provide a broad perspective of the area. This is the first comprehensive survey book in the emerging topic of graph data processing. Managing and Mining Graph Data is designed for a varied audience composed of professors, researchers and practitioners in industry. This volume is also suitable as a reference book for advanced-level database students in computer science. About the Editors:Charu C. Aggarwal obtained his B.Tech in Computer Science from IIT Kanpur in 1993 and Ph.D. from MIT in 1996. He has worked as a researcher at IBM since then, and has published over 130 papers in major data mining conferences and journals. He has applied for or been granted over 70 US and International patents, and has thrice been designated a Master Inventor at IBM. He has received an IBM Corporate award for his work on data stream analytics, and an IBM Outstanding Innovation Award for his work on privacy technology. He has served on the executive committees of most major data mining conferences. He has served as an associate editor of the IEEE TKDE, as an associate editor of the ACM SIGKDD Explorations, and as an action editor of the DMKD Journal. He is a fellow of the IEEE, and a life-member of the ACM. 
Haixun Wang is currently a researcher at Microsoft Research Asia. He received the B.S. and the M.S. degree, both in computer science, from Shanghai Jiao Tong University in 1994 and 1996. He received the Ph.D. degree in computer science from the University of California, Los Angeles in 2000. He subsequently worked as a researcher at IBM until 2009. His main research interest is database language and systems, data mining, and information retrieval. He has published more than 100 research papers in refereed international journals and conference proceedings. He serves as an associate editor of the IEEE TKDE, and has served as a reviewer and program committee member of leading database conferences and journals.", "", "Basket Analysis, which is a standard method for data mining, derives frequent itemsets from database. However, its mining ability is limited to transaction data consisting of items. In reality, there are many applications where data are described in a more structural way, e.g. chemical compounds and Web browsing history. There are a few approaches that can discover characteristic patterns from graph-structured data in the field of machine learning. However, almost all of them are not suitable for such applications that require a complete search for all frequent subgraph patterns in the data. In this paper, we propose a novel principle and its algorithm that derive the characteristic patterns which frequently appear in graph-structured data. Our algorithm can derive all frequent induced subgraphs from both directed and undirected graph structured data having loops (including self-loops) with labeled or unlabeled nodes and links. 
Its performance is evaluated through the applications to Web browsing pattern analysis and chemical carcinogenesis analysis.", "We investigate new approaches for frequent graph-based pattern mining in graph datasets and propose a novel algorithm called gSpan (graph-based substructure pattern mining), which discovers frequent substructures without candidate generation. gSpan builds a new lexicographic order among graphs, and maps each graph to a unique minimum DFS code as its canonical label. Based on this lexicographic order gSpan adopts the depth-first search strategy to mine frequent connected subgraphs efficiently. Our performance study shows that gSpan substantially outperforms previous algorithms, sometimes by an order of magnitude.", "", "Coupled biological and chemical systems, neural networks, social interacting species, the Internet and the World Wide Web, are only a few examples of systems composed by a large number of highly interconnected dynamical units. The first approach to capture the global properties of such systems is to model them as graphs whose nodes represent the dynamical units, and whose links stand for the interactions between them. On the one hand, scientists have to cope with structural issues, such as characterizing the topology of a complex wiring architecture, revealing the unifying principles that are at the basis of real networks, and developing models to mimic the growth of a network and reproduce its structural properties. On the other hand, many relevant questions arise when studying complex networks’ dynamics, such as learning how a large ensemble of dynamical systems that interact through a complex wiring topology can behave collectively. 
We review the major concepts and results recently achieved in the study of the structure and dynamics of complex networks, and summarize the relevant applications of these ideas in many different disciplines, ranging from nonlinear science to biology, from statistical mechanics to medicine and engineering. © 2005 Elsevier B.V. All rights reserved.", "This paper proposes a novel approach named AGM to efficiently mine the association rules among the frequently appearing substructures in a given graph data set. A graph transaction is represented by an adjacency matrix, and the frequent patterns appearing in the matrices are mined through the extended algorithm of the basket analysis. Its performance has been evaluated for the artificial simulation data and the carcinogenesis data of Oxford University and NTP. Its high efficiency has been confirmed for the size of a real-world problem.", "The need for mining structured data has increased in the past few years. One of the best studied data structures in computer science and discrete mathematics are graphs. It can therefore be no surprise that graph based data mining has become quite popular in the last few years.This article introduces the theoretical basis of graph based data mining and surveys the state of the art of graph-based data mining. Brief descriptions of some representative approaches are provided as well.", "" ] }
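The core mechanic behind EAGr's continuous ego-centric aggregates, maintaining each node's neighborhood aggregate and updating it incrementally as values change, can be sketched in a few lines. This toy version (all names are illustrative) omits the aggregation overlay graph and the partial-aggregate sharing that the paper is actually about.

```python
class EgoAggregates:
    """For every node, maintain the sum of its neighbors' values and update
    it incrementally when one value changes."""

    def __init__(self, neighbors, values):
        self.neighbors = neighbors
        self.values = dict(values)
        # Precompute each node's ego-centric aggregate (sum over its neighbors).
        self.agg = {v: sum(values[w] for w in nbrs) for v, nbrs in neighbors.items()}

    def update(self, v, new_value):
        delta = new_value - self.values[v]
        self.values[v] = new_value
        for w in self.neighbors[v]:  # push the delta into every ego network containing v
            self.agg[w] += delta

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}  # a path a-b-c
ea = EgoAggregates(graph, {"a": 1, "b": 2, "c": 3})
ea.update("a", 5)  # only b's aggregate is affected
```

Incremental delta propagation like this is what keeps per-update cost proportional to a node's degree rather than to the number of standing queries.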
1404.6312
2949822859
Linguists and psychologists have long been studying cross-linguistic transfer, the influence of native language properties on linguistic performance in a foreign language. In this work we provide empirical evidence for this process in the form of a strong correlation between language similarities derived from structural features in English as a Second Language (ESL) texts and equivalent similarities obtained from the typological features of the native languages. We leverage this finding to recover native language typological similarity structure directly from ESL text, and perform prediction of typological features in an unsupervised fashion with respect to the target languages. Our method achieves 72.2% accuracy on the typology prediction task, a result that is highly competitive with equivalent methods that rely on typological resources.
Following the work of , NLI has been gaining increasing interest in NLP, culminating in a recent shared task with 29 participating systems @cite_14 . Much of the NLI effort thus far has been focused on exploring various feature sets to optimize classification performance. While many of these features are linguistically motivated, some of the discriminative power of these approaches stems from cultural and domain artifacts. For example, our preliminary experiments with a typical NLI feature set show that the strongest features for predicting Chinese are strings such as and . Similar features dominate the weights of other languages as well. Such content features boost classification performance, but are hardly relevant for modeling linguistic phenomena, thus weakening the argument that NLI classification performance is indicative of cross-linguistic transfer.
{ "cite_N": [ "@cite_14" ], "mid": [ "2135528482" ], "abstract": [ "In this paper, we show that stylistic text features can be exploited to determine an anonymous author's native language with high accuracy. Specifically, we first use automatic tools to ascertain frequencies of various stylistic idiosyncrasies in a text. These frequencies then serve as features for support vector machines that learn to classify texts according to author native language." ] }
1404.6290
2492810161
We consider stochastic processes on complete, locally compact tree-like metric spaces (T,r) on their “natural scale” with boundedly finite speed measure ν. Given a triple (T,r,ν), such a speed-ν motion on (T,r) can be characterized as the unique strong Markov process which, if restricted to compact subtrees, satisfies for all x,y∈T and all positive, bounded measurable f, E_x[∫_0^{τ_y} ds f(X_s)] = 2 ∫_T ν(dz) r(y,c(x,y,z)) f(z) < ∞, where c(x,y,z) denotes the branch point generated by x,y,z. If (T,r) is a discrete tree, X is a continuous-time nearest-neighbor random walk which jumps from v to v′∼v at rate (1/2)⋅(ν({v})⋅r(v,v′))^{−1}. If (T,r) is path-connected, X has continuous paths and equals the ν-Brownian motion which was recently constructed in [Trans. Amer. Math. Soc. 365 (2013) 3115–3150]. In this paper, we show that speed-ν_n motions on (T_n,r_n) converge weakly in path space to the speed-ν motion on (T,r) provided that the underlying triples of metric measure spaces converge in the Gromov–Hausdorff-vague topology introduced in [Stochastic Process. Appl. 126 (2016) 2527–2553].
It was shown in Remark 3.1 in @cite_17 that the unique @math -symmetric strong Markov process associated with @math is the speed- @math motion on @math .
{ "cite_N": [ "@cite_17" ], "mid": [ "2017262458" ], "abstract": [ "The real trees form a class of metric spaces that extends the class of trees with edge lengths by allowing behavior such as infinite total edge length and vertices with infinite branching degree. We use Dirichlet form methods to construct Brownian motion on any given locally compact R-tree (T,r) equipped with a Radon measure ν on (T,B(T)). We specify a criterion under which the Brownian motion is recurrent or transient. For compact recurrent R-trees we provide bounds on the mixing time." ] }
1404.6290
2492810161
We consider stochastic processes on complete, locally compact tree-like metric spaces (T,r) on their “natural scale” with boundedly finite speed measure ν. Given a triple (T,r,ν), such a speed-ν motion on (T,r) can be characterized as the unique strong Markov process which, if restricted to compact subtrees, satisfies for all x,y∈T and all positive, bounded measurable f, E_x[∫_0^{τ_y} ds f(X_s)] = 2 ∫_T ν(dz) r(y,c(x,y,z)) f(z) < ∞, where c(x,y,z) denotes the branch point generated by x,y,z. If (T,r) is a discrete tree, X is a continuous-time nearest-neighbor random walk which jumps from v to v′∼v at rate (1/2)⋅(ν({v})⋅r(v,v′))^{−1}. If (T,r) is path-connected, X has continuous paths and equals the ν-Brownian motion which was recently constructed in [Trans. Amer. Math. Soc. 365 (2013) 3115–3150]. In this paper, we show that speed-ν_n motions on (T_n,r_n) converge weakly in path space to the speed-ν motion on (T,r) provided that the underlying triples of metric measure spaces converge in the Gromov–Hausdorff-vague topology introduced in [Stochastic Process. Appl. 126 (2016) 2527–2553].
In this subsection we relate our invariance principle to the one obtained earlier in @cite_20 . We first recall the excursion representation of a rooted compact measure @math -tree. We denote by the set of continuous excursions on @math . From each excursion @math , we can define a measure @math -tree in the following way: @math is a pseudo-distance on @math ; @math are said to be equivalent, @math , if @math ; the image of the projection @math , endowed with the push forward of @math (again denoted @math ), i.e. @math , is a rooted compact @math -tree. We endow this space with the probability measure @math which is the push forward of the Lebesgue measure on @math . We denote by @math the resulting “glue function”, which sends an excursion to a rooted probability measure @math -tree.
{ "cite_N": [ "@cite_20" ], "mid": [ "2109849732" ], "abstract": [ "Consider a family of random ordered graph trees (T-n)(n >= 1), where T-n has n vertices. It has previously been established that if the associated search-depth processes converge to the normalised Brownian excursion when rescaled appropriately as n -> infinity, then the simple random walks on the graph trees have the Brownian motion on the Brownian continuum random tree as their scaling limit. Here, this result is extended to demonstrate the existence of a diffusion scaling limit whenever the volume measure on the limiting real tree is nonatomic, supported on the leaves of the limiting tree, and satisfies a polynomial lower bound for the volume of balls. Furthermore, as an application of this generalisation, it is established that the simple random walks on a family of Galton-Watson trees with a critical infinite variance offspring distribution, conditioned on the total number of offspring, can be resealed to converge to the Brownian motion on a related alpha-stable tree." ] }
1404.6290
2492810161
We consider stochastic processes on complete, locally compact tree-like metric spaces (T,r) on their “natural scale” with boundedly finite speed measure ν. Given a triple (T,r,ν), such a speed-ν motion on (T,r) can be characterized as the unique strong Markov process which, if restricted to compact subtrees, satisfies for all x,y∈T and all positive, bounded measurable f, E_x[∫_0^{τ_y} ds f(X_s)] = 2 ∫_T ν(dz) r(y,c(x,y,z)) f(z) < ∞, where c(x,y,z) denotes the branch point generated by x,y,z. If (T,r) is a discrete tree, X is a continuous-time nearest-neighbor random walk which jumps from v to v′∼v at rate (1/2)⋅(ν({v})⋅r(v,v′))^{−1}. If (T,r) is path-connected, X has continuous paths and equals the ν-Brownian motion which was recently constructed in [Trans. Amer. Math. Soc. 365 (2013) 3115–3150]. In this paper, we show that speed-ν_n motions on (T_n,r_n) converge weakly in path space to the speed-ν motion on (T,r) provided that the underlying triples of metric measure spaces converge in the Gromov–Hausdorff-vague topology introduced in [Stochastic Process. Appl. 126 (2016) 2527–2553].
In @cite_20 the following subspace of @math is considered: let @math be a sequence of rooted graph trees with @math , whose search-depth functions @math in @math with uniform topology satisfy for a sequence @math and some @math with @math . In Theorem 1.1 of @cite_20 , it is shown that the discrete-time simple random walks on @math starting in @math , with jump sizes rescaled by @math and sped up by a factor of @math , converge to the @math Brownian motion on @math starting in @math .
{ "cite_N": [ "@cite_20" ], "mid": [ "2109849732" ], "abstract": [ "Consider a family of random ordered graph trees (T-n)(n >= 1), where T-n has n vertices. It has previously been established that if the associated search-depth processes converge to the normalised Brownian excursion when rescaled appropriately as n -> infinity, then the simple random walks on the graph trees have the Brownian motion on the Brownian continuum random tree as their scaling limit. Here, this result is extended to demonstrate the existence of a diffusion scaling limit whenever the volume measure on the limiting real tree is nonatomic, supported on the leaves of the limiting tree, and satisfies a polynomial lower bound for the volume of balls. Furthermore, as an application of this generalisation, it is established that the simple random walks on a family of Galton-Watson trees with a critical infinite variance offspring distribution, conditioned on the total number of offspring, can be resealed to converge to the Brownian motion on a related alpha-stable tree." ] }
1404.6290
2492810161
We consider stochastic processes on complete, locally compact tree-like metric spaces (T,r) on their “natural scale” with boundedly finite speed measure ν. Given a triple (T,r,ν), such a speed-ν motion on (T,r) can be characterized as the unique strong Markov process which, if restricted to compact subtrees, satisfies for all x,y∈T and all positive, bounded measurable f, E_x[∫_0^{τ_y} ds f(X_s)] = 2 ∫_T ν(dz) r(y,c(x,y,z)) f(z) < ∞, where c(x,y,z) denotes the branch point generated by x,y,z. If (T,r) is a discrete tree, X is a continuous-time nearest-neighbor random walk which jumps from v to v′∼v at rate (1/2)⋅(ν({v})⋅r(v,v′))^{−1}. If (T,r) is path-connected, X has continuous paths and equals the ν-Brownian motion which was recently constructed in [Trans. Amer. Math. Soc. 365 (2013) 3115–3150]. In this paper, we show that speed-ν_n motions on (T_n,r_n) converge weakly in path space to the speed-ν motion on (T,r) provided that the underlying triples of metric measure spaces converge in the Gromov–Hausdorff-vague topology introduced in [Stochastic Process. Appl. 126 (2016) 2527–2553].
Note that, in contrast to @cite_20 , our theorem does not require any additional assumptions on the limiting tree, which also does not have to be an @math -tree: neither the polynomial lower bound nor the requirement that @math be non-atomic and supported on the leaves is needed. Also note that Theorem 1.1 of @cite_20 only allows for homogeneous (non-state-dependent) rescaling. This means, for example, that in the particular case where the trees @math are subsets of @math , only the case @math and @math , @math , is covered.
{ "cite_N": [ "@cite_20" ], "mid": [ "2109849732" ], "abstract": [ "Consider a family of random ordered graph trees (T-n)(n >= 1), where T-n has n vertices. It has previously been established that if the associated search-depth processes converge to the normalised Brownian excursion when rescaled appropriately as n -> infinity, then the simple random walks on the graph trees have the Brownian motion on the Brownian continuum random tree as their scaling limit. Here, this result is extended to demonstrate the existence of a diffusion scaling limit whenever the volume measure on the limiting real tree is nonatomic, supported on the leaves of the limiting tree, and satisfies a polynomial lower bound for the volume of balls. Furthermore, as an application of this generalisation, it is established that the simple random walks on a family of Galton-Watson trees with a critical infinite variance offspring distribution, conditioned on the total number of offspring, can be resealed to converge to the Brownian motion on a related alpha-stable tree." ] }
1404.5665
2949858346
The vast quantity of data generated and captured every day has led to a pressing need for tools and processes to organize, analyze and interrelate this data. Automated reasoning and optimization tools with inherent support for data could enable advancements in a variety of contexts, from data-backed decision making to data-intensive scientific research. To this end, we introduce a decidable logic aimed at database analysis. Our logic extends quantifier-free Linear Integer Arithmetic with operators from Relational Algebra, like selection and cross product. We provide a scalable decision procedure that is based on the BC(T) architecture for ILP Modulo Theories. Our decision procedure makes use of database techniques. We also experimentally evaluate our approach, and discuss potential applications.
The Constraint Database framework @cite_6 provides a database perspective on constraint solving. The framework encompasses relations described by means of constraints, but not relations comprised of concrete tuples.
{ "cite_N": [ "@cite_6" ], "mid": [ "1991718942" ], "abstract": [ "We discuss the relationship between constraint programming and database query languages. We show that bottom-up, efficient, declarative database programming can be combined with efficient constraint solving. The key intuition is that the generalization of a ground fact, or tuple, is a conjunction of constraints. We describe the basic Constraint Query Language design principles, and illustrate them with four different classes of constraints: Polynomial, rational order, equality, and Boolean constraints." ] }
1404.6041
2029741572
In multiuser MIMO (MU-MIMO) LANs, the achievable throughput of a client depends on who is transmitting concurrently with it. Existing MU-MIMO MAC protocols, however, enable clients to use the traditional 802.11 contention to contend for concurrent transmission opportunities on the uplink. Such a contention-based protocol not only wastes lots of channel time on multiple rounds of contention but also fails to maximally deliver the gain of MU-MIMO because users randomly join concurrent transmissions without considering their channel characteristics. To address such inefficiency, this paper introduces MIMOMate, a leader-contention-based MU-MIMO MAC protocol that matches clients as concurrent transmitters according to their channel characteristics to maximally deliver the MU-MIMO gain while ensuring all users fairly share concurrent transmission opportunities. Furthermore, MIMOMate elects the leader of the matched users to contend for transmission opportunities using traditional 802.11 CSMA CA. It hence requires only a single contention overhead for concurrent streams and can be compatible with legacy 802.11 devices. A prototype implementation in USRP N200 shows that MIMOMate achieves an average throughput gain of 1.42× and 1.52× over the traditional contention-based protocol for two- and three-antenna AP scenarios, respectively, and also provides fairness for clients.
In the last few years, the advantage of MU-MIMO LANs has been verified theoretically @cite_6 @cite_43 @cite_26 and demonstrated empirically @cite_11 @cite_12 @cite_18 @cite_44 @cite_42 @cite_21 @cite_47 . In Beamforming @cite_11 @cite_12 @cite_18 , a multi-antenna AP uses the precoding technique to transmit multiple streams to multiple single-antenna clients. SAM @cite_44 focuses on the uplink scenario and allows multiple single-antenna clients to communicate concurrently with a multi-antenna AP. TurboRate @cite_42 proposes a rate adaptation protocol for uplink MU-MIMO LANs. IAC @cite_21 connects multiple APs through the Ethernet to form a virtual MIMO node that communicates concurrently with multiple clients. 802.11n @math @cite_47 enables concurrent transmissions across different links. All the above practical MU-MIMO systems leverage the traditional 802.11 contention mechanism to share concurrent transmission opportunities. In contrast, MIMOMate enables clients with a better channel orthogonality to transmit concurrently.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_42", "@cite_21", "@cite_6", "@cite_44", "@cite_43", "@cite_47", "@cite_12", "@cite_11" ], "mid": [ "", "2103749601", "2144561012", "2146102702", "2041863035", "2157709134", "2164664877", "2144916689", "2164102933", "2060243821" ], "abstract": [ "", "We characterize the sum capacity of the vector Gaussian broadcast channel by showing that the existing inner bound of Marton and the existing upper bound of Sato are tight for this channel. We exploit an intimate four-way connection between the vector broadcast channel, the corresponding point-to-point channel (where the receivers can cooperate), the multiple-access channel (MAC) (where the role of transmitters and receivers are reversed), and the corresponding point-to-point channel (where the transmitters can cooperate).", "In multiuser MIMO (MU-MIMO) networks, the optimal bit rate of a user is highly dynamic and changes from one packet to the next. This breaks traditional bit rate adaptation algorithms, which rely on recent history to predict the best bit rate for the next packet. To address this problem, we introduce TurboRate, a rate adaptation scheme for MU-MIMO LANs. TurboRate shows that clients in a MU-MIMO LAN can adapt their bit rate on a per-packet basis if each client learns two variables: its SNR when it transmits alone to the access point, and the direction along which its signal is received at the AP. TurboRate also shows that each client can compute these two variables passively without exchanging control frames with the access point. A TurboRate client then annotates its packets with these variables to enable other clients to pick the optimal bit rate and transmit concurrently to the AP. A prototype implementation in USRP-N200 shows that traditional rate adaptation does not deliver the gains of MU-MIMO WLANs, and can interact negatively with MU-MIMO, leading to low throughput. 
In contrast, enabling MU-MIMO with TurboRate provides a mean throughput gain of 1.7x and 2.3x, for 2-antenna and 3-antenna APs respectively.", "The throughput of existing MIMO LANs is limited by the number of antennas on the AP. This paper shows how to overcome this limit. It presents interference alignment and cancellation (IAC), a new approach for decoding concurrent sender-receiver pairs in MIMO networks. IAC synthesizes two signal processing techniques, interference alignment and interference cancellation, showing that the combination applies to scenarios where neither interference alignment nor cancellation applies alone. We show analytically that IAC almost doubles the throughput of MIMO LANs. We also implement IAC in GNU-Radio, and experimentally demonstrate that for 2x2 MIMO LANs, IAC increases the average throughput by 1.5x on the downlink and 2x on the uplink.", "A unified framework is given for multiple user information networks. These networks consist of several users communicating to one another in the presence of arbitrary interference and noise. The presence of many senders necessitates a tradeoff in the achievable information transmission rates. The goal is the characterization of the capacity region consisting of all achievable rates. The focus is on broadcast, multiple access, relay, and other channels for which the recent theory is relativdy well developed. A discussion of the Gaussian version of these channels demonstrates the concreteness of the encoding and decoding necessary to achieve optimal information flow. We also offer speculations about the form of a general theory of information flow in networks.", "Spatial multiple access holds the promise to boost the capacity of wireless networks when an access point has multiple antennas. Due to the asynchronous and uncontrolled nature of wireless LANs, conventional MIMO technology does not work efficiently when concurrent transmissions from multiple stations are uncoordinated. 
In this paper, we present the design and implementation of a crosslayer system, called SAM, that addresses the challenges of enabling spatial multiple access for multiple devices in a random access network like WLAN. SAM uses a chain-decoding technique to reliably recover the channel parameters for each device, and iteratively decode concurrent frames with misaligned symbol timings and frequency offsets. We propose a new MAC protocol, called CCMA, to enable concurrent transmissions by different mobile stations while remaining backward compatible with 802.11. Finally, we implement the PHY and MAC layer of SAM using the Sora high-performance software radio platform. Our evaluation results under real wireless conditions show that SAM can improve network uplink throughput by 70% with two antennas over 802.11.", "Multi-user MIMO (MU-MIMO) networks reveal the unique opportunities arising from a joint optimization of antenna combining techniques with resource allocation protocols. Furthermore, it brings robustness with respect to multipath richness, allowing for compact antenna spacing at the BS and, crucially, yielding the diversity and multiplexing gains without the need for multiple antenna user terminals. To realize these gains, however, the BS should be informed with the user's channel coefficients, which may limit practical application to TDD or low-mobility settings. To circumvent this problem and reduce feedback load, combining MU-MIMO with opportunistic scheduling seems a promising direction. The success for this type of scheduler is strongly traffic and QoS-dependent, however.", "This paper presents the design and implementation of 802.11n+, a fully distributed random access protocol for MIMO networks. 802.11n+ allows nodes that differ in the number of antennas to contend not just for time, but also for the degrees of freedom provided by multiple antennas. 
We show that even when the medium is already occupied by some nodes, nodes with more antennas can transmit concurrently without harming the ongoing transmissions. Furthermore, such nodes can contend for the medium in a fully distributed way. Our testbed evaluation shows that even for a small network with three competing node pairs, the resulting system about doubles the average network throughput. It also maintains the random access nature of today's 802.11n networks.", "In this work, we report the first study of an important realization of directional communication, beamforming, on mobile devices. We first demonstrate that beamforming is already feasible on mobile devices in terms of form factor, device mobility and power efficiency. Surprisingly, we show that by making an increasingly profitable tradeoff between transmit and circuit power, beamforming with state-of-the-art integrated CMOS implementations can be more power-efficient than its single antenna counterpart. We then investigate the optimal way of using beamforming in terms of device power efficiency, by allowing a dynamic number of active antennas. We propose a simple yet effective solution, BeamAdapt, which allows each mobile client in a network to individually identify the optimal number of active antennas with guaranteed convergence and close-to-optimal performance. We finally report a WARP-based prototype of BeamAdapt and experimentally demonstrate its effectiveness in realistic environments, and then complement the prototype-based experiments with Qualnet-based simulation of a large-scale network. Our results show that BeamAdapt with four antennas can reduce the power consumption of mobile clients by more than half compared to a single antenna, while maintaining a required network throughput.", "Multi-User MIMO promises to increase the spectral efficiency of next generation wireless systems and is currently being incorporated in future industry standards. 
Although a significant amount of research has focused on theoretical capacity analysis, little is known about the performance of such systems in practice. In this paper, we present the design and implementation of the first multi-user beamforming system and experimental framework for wireless LANs. Using extensive measurements in an indoor environment, we evaluate the impact of receiver separation distance, outdated channel information due to mobility and environmental variation, and the potential for increasing spatial reuse. For the measured indoor environment, our results reveal that two receivers achieve close to maximum performance with a minimum separation distance of a quarter of a wavelength. We also show that the required channel information update rate is dependent on environmental variation and user mobility as well as a per-link SNR requirement. Assuming that a link can tolerate an SNR decrease of 3 dB, the required channel update rate is equal to 100 and 10 ms for non-mobile receivers and mobile receivers with a pedestrian speed of 3 mph respectively. Our results also show that spatial reuse can be increased by efficiently eliminating interference at any desired location; however, this may come at the expense of a significant drop in the quality of the served users." ] }
1404.6041
2029741572
In multiuser MIMO (MU-MIMO) LANs, the achievable throughput of a client depends on who is transmitting concurrently with it. Existing MU-MIMO MAC protocols, however, enable clients to use the traditional 802.11 contention to contend for concurrent transmission opportunities on the uplink. Such a contention-based protocol not only wastes lots of channel time on multiple rounds of contention but also fails to maximally deliver the gain of MU-MIMO because users randomly join concurrent transmissions without considering their channel characteristics. To address such inefficiency, this paper introduces MIMOMate, a leader-contention-based MU-MIMO MAC protocol that matches clients as concurrent transmitters according to their channel characteristics to maximally deliver the MU-MIMO gain while ensuring all users fairly share concurrent transmission opportunities. Furthermore, MIMOMate elects the leader of the matched users to contend for transmission opportunities using traditional 802.11 CSMA/CA. It hence requires only a single contention overhead for concurrent streams and can be compatible with legacy 802.11 devices. A prototype implementation in USRP N200 shows that MIMOMate achieves an average throughput gain of 1.42× and 1.52× over the traditional contention-based protocol for two- and three-antenna AP scenarios, respectively, and also provides fairness for clients.
Prior theoretical work on user selection in downlink MU-MIMO LANs @cite_24 @cite_14 @cite_38 @cite_16 @cite_31 @cite_8 selects the optimal subset of clients from those who have packets queued at the AP to maximize the sum rate of concurrent transmissions. The works @cite_38 @cite_31 further address the issue of fairness. However, their solutions are designed for downlink MU-MIMO and cannot be easily applied in uplink scenarios due to the lack of a coordinator.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_8", "@cite_24", "@cite_31", "@cite_16" ], "mid": [ "2129533376", "2030470184", "2074165216", "1601993071", "2083962008", "2124133602" ], "abstract": [ "Recently, the capacity region of a multiple-input multiple-output (MIMO) Gaussian broadcast channel, with Gaussian codebooks and known-interference cancellation through dirty paper coding, was shown to equal the union of the capacity regions of a collection of MIMO multiple-access channels. We use this duality result to evaluate the system capacity achievable in a cellular wireless network with multiple antennas at the base station and multiple antennas at each terminal. Some fundamental properties of the rate region are exhibited and algorithms for determining the optimal weighted rate sum and the optimal covariance matrices for achieving a given rate vector on the boundary of the rate region are presented. These algorithms are then used in a simulation study to determine potential capacity enhancements to a cellular system through known-interference cancellation. We study both the circuit data scenario in which each user requires a constant data rate in every frame and the packet data scenario in which users can be assigned a variable rate in each frame so as to maximize the long-term average throughput. In the case of circuit data, the outage probability as a function of the number of active users served at a given rate is determined through simulations. For the packet data case, long-term average throughputs that can be achieved using the proportionally fair scheduling algorithm are determined. We generalize the zero-forcing beamforming technique to the multiple receive antennas case and use this as the baseline for the packet data throughput evaluation.", "It is difficult to implement optimal beamforming for multi-user multiple-input multiple-output (MIMO) downlink due to the high complexity. 
This paper proposes a low complexity generalized beamforming (GBF) scheme combined with an efficient user selection to maximize the weighted sum-rate. For each user, the outputs of the multiple antennas are combined with a receive GBF vector to create an equivalent multiple-input single-output (MISO) effective channel. First, user selection and receive GBF vectors are jointly optimized to construct a group of preferable effective channels. Then, transmit GBF vectors are obtained by zero-forcing over these effective channels. Simulation results show significant gain over currently known suboptimal schemes in various scenarios.", "Advances in digital communications and signal processing have enabled the adaptation of advanced communication techniques in wireless LAN (WLAN) standards. Among those advanced techniques, multiple-input-multipleoutput (MIMO) is now part of the high throughput (HT) WLAN standard, a.k.a. IEEE 802.11n. With the completion of the IEEE 802.11n specification, the IEEE 802.11 working group (WG) is now working on a new amendment to achieve very high throughput (VHT) in the gigabit range.. The move to higher data rates is enabled by introducing wider bandwidth, downlink multi-user MIMO (MU-MIMO), and higher modulation and coding schemes (MCS). This paper focuses the use of group membership and identifiers for managing downlink transmissions using MU-MIMO. Downlink MU-MIMO allows a WLAN access point to transmit to multiple stations simultaneously. Group Membership and identifiers are used to signal target stations together with their positions and the number of spatial streams intended for each of them in the downlink transmission. A description of the Group ID concept and its application to the VHT specification is discussed here. 
A method for assigning stations' position to Group IDs is introduced and its performance is examined.", "A study on the zero-forcing beamforming (ZFBF) scheme with antenna selection at user terminals in downlink multi-antenna multi-user systems is presented. Simulation results show that the proposed ZFBF scheme with receiver antenna selection (ZFBF-AS) achieves considerable throughput improvement over the ZFBF scheme with single receiver antenna. The results also show that, with multi-user diversity, the ZFBF-AS scheme approaches the throughput performance of the ZFBF scheme using all receiver antennas (ZFBF-WO-AS) when the base station adopts the semi-orthogonal user selection (SUS) algorithm, and achieves larger throughput when the base station adopts the Round-robin scheduling algorithm. Compared with ZFBF-WO-AS, the proposed ZFBF-AS scheme can reduce the cost of user equipments and the channel state information requirement at the transmitter (CSIT) as well as the multiuser scheduling complexity at the transmitter.", "In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like M log log n for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. 
For fixed M and n increasing, the throughput of our scheme scales as M log log(nN), where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not grow faster than log n. We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1/n, irrespective of its path loss. In fact, using M = α log n transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.", "Achieving the capacity region in the MIMO broadcast channel requires the use of Dirty Paper Coding (DPC). When it cannot be afforded to satisfy the requirements of all users by DPC based approaches, it is necessary to identify the users which exhibit the highest performance gain compared to simpler approaches. In this paper we present a user grouping method that aims to identify those users with highest weighted sum rate gain. First some users are excluded by a simple criterion leading to a reduced user set, from which the final user group is selected by a more sophisticated criterion." ] }
1404.5916
2198907803
Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image.
Throughout the last few years, display technology has undergone a major transformation. Whereas improvements of display characteristics, such as resolution and contrast, have traditionally relied exclusively on advances in optical and electrical fabrication, computation has now become an integral part of the image formation. Through the co-design of display optics and computational processing, computational and compressive displays have the potential to overcome fundamental limitations of purely optical designs. Characteristics that can be improved by a co-design of display optics and computation include dynamic range @cite_9 and depth of field of projectors @cite_0 . A significant amount of research has recently been conducted on compressive light field display for glasses-free 3D image presentation @cite_16 @cite_10 @cite_3 @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_16", "@cite_10" ], "mid": [ "2168790157", "2109804663", "1974451021", "2139122753", "1985305873", "2082231847" ], "abstract": [ "We introduce tensor displays: a family of compressive light field displays comprising all architectures employing a stack of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting (i.e., any low-resolution light field emitter). We show that the light field emitted by an N-layer, M-frame tensor display can be represented by an Nth-order, rank-M tensor. Using this representation we introduce a unified optimization framework, based on nonnegative tensor factorization (NTF), encompassing all tensor display architectures. This framework is the first to allow joint multilayer, multiframe light field decompositions, significantly reducing artifacts observed with prior multilayer-only and multiframe-only decompositions; it is also the first optimization method for designs combining multiple layers with directional backlighting. We verify the benefits and limitations of tensor displays by constructing a prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. Through simulations and experiments we show that tensor displays reveal practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.", "The dynamic range of many real-world environments exceeds the capabilities of current display technology by several orders of magnitude. In this paper we discuss the design of two different display systems that are capable of displaying images with a dynamic range much more similar to that encountered in the real world. The first display system is based on a combination of an LCD panel and a DLP projector, and can be built from off-the-shelf components. 
While this design is feasible in a lab setting, the second display system, which relies on a custom-built LED panel instead of the projector, is more suitable for usual office workspaces and commercial applications. We describe the design of both systems as well as the software issues that arise. We also discuss the advantages and disadvantages of the two designs and potential applications for both systems.", "We introduce polarization field displays as an optically-efficient design for dynamic light field display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a spatially-controllable polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates light. Color display is achieved using field sequential color illumination with monochromatic LCDs, mitigating severe attenuation and moire occurring with layered color filter arrays. We demonstrate such displays can be controlled, at interactive refresh rates, by adopting the SART algorithm to tomographically solve for the optimal spatially-varying polarization state rotations applied by each layer. We validate our design by constructing a prototype using modified off-the-shelf panels. We demonstrate interactive display using a GPU-based SART implementation supporting both polarization-based and attenuation-based architectures. Experiments characterize the accuracy of our image formation model, verifying polarization field displays achieve increased brightness, higher resolution, and extended depth of field, as compared to existing automultiscopic display methods for dual-layer and multi-layer LCDs.", "Coding a projector's aperture plane with adaptive patterns together with inverse filtering allow the depth-of-field of projected imagery to be increased. We present two prototypes and corresponding algorithms for static and programmable apertures. 
We also explain how these patterns can be computed at interactive rates, by taking into account the image content and limitations of the human visual system. Applications such as projector defocus compensation, high-quality projector depixelation, and increased temporal contrast of projected video sequences can be supported. Coded apertures are a step towards next-generation auto-iris projector lenses.", "We optimize automultiscopic displays built by stacking a pair of modified LCD panels. To date, such dual-stacked LCDs have used heuristic parallax barriers for view-dependent imagery: the front LCD shows a fixed array of slits or pinholes, independent of the multi-view content. While prior works adapt the spacing between slits or pinholes, depending on viewer position, we show both layers can also be adapted to the multi-view content, increasing brightness and refresh rate. Unlike conventional barriers, both masks are allowed to exhibit non-binary opacities. It is shown that any 4D light field emitted by a dual-stacked LCD is the tensor product of two 2D masks. Thus, any pair of 1D masks only achieves a rank-1 approximation of a 2D light field. Temporal multiplexing of masks is shown to achieve higher-rank approximations. Non-negative matrix factorization (NMF) minimizes the weighted Euclidean distance between a target light field and that emitted by the display. Simulations and experiments characterize the resulting content-adaptive parallax barriers for low-rank light field approximation.", "We develop tomographic techniques for image synthesis on displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or high-contrast 2D image when illuminated by a uniform backlight. Since arbitrary oblique views may be inconsistent with any single attenuator, iterative tomographic reconstruction minimizes the difference between the emitted and target light fields, subject to physical constraints on attenuation. 
As multi-layer generalizations of conventional parallax barriers, such displays are shown, both by theory and experiment, to exceed the performance of existing dual-layer architectures. For 3D display, spatial resolution, depth of field, and brightness are increased, compared to parallax barriers. For a plane at a fixed depth, our optimization also allows optimal construction of high dynamic range displays, confirming existing heuristics and providing the first extension to multiple, disjoint layers. We conclude by demonstrating the benefits and limitations of attenuation-based light field displays using an inexpensive fabrication method: separating multiple printed transparencies with acrylic sheets." ] }
1404.5916
2198907803
Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image.
In this paper, we explore a new compressive display design that can be switched between light field, high dynamic range, and superresolution modes. The display design is inspired by previously proposed light field displays @cite_16 , and consists of two high-speed liquid crystal displays (LCDs) that are mounted in front of each other with a slight offset (Fig. ). To support a super-resolution display mode, we introduce an additional diffuser covering the LCD closest to the observer. The two stacked LCDs synthesize an intermediate light field inside the device; the diffuser then integrates the different views of that light field such that an observer perceives a superresolved, two-dimensional image. Making the diffuser electronically switchable allows the display to be used in 3D or high dynamic range mode.
{ "cite_N": [ "@cite_16" ], "mid": [ "1985305873" ], "abstract": [ "We optimize automultiscopic displays built by stacking a pair of modified LCD panels. To date, such dual-stacked LCDs have used heuristic parallax barriers for view-dependent imagery: the front LCD shows a fixed array of slits or pinholes, independent of the multi-view content. While prior works adapt the spacing between slits or pinholes, depending on viewer position, we show both layers can also be adapted to the multi-view content, increasing brightness and refresh rate. Unlike conventional barriers, both masks are allowed to exhibit non-binary opacities. It is shown that any 4D light field emitted by a dual-stacked LCD is the tensor product of two 2D masks. Thus, any pair of 1D masks only achieves a rank-1 approximation of a 2D light field. Temporal multiplexing of masks is shown to achieve higher-rank approximations. Non-negative matrix factorization (NMF) minimizes the weighted Euclidean distance between a target light field and that emitted by the display. Simulations and experiments characterize the resulting content-adaptive parallax barriers for low-rank light field approximation." ] }
1404.6151
2150563378
Some mobile sensor network applications require the sensor nodes to transfer their trajectories to a data sink. This paper proposes an adaptive trajectory (lossy) compression algorithm based on compressive sensing. The algorithm has two innovative elements. First, we propose a method to compute a deterministic projection matrix from a learnt dictionary. Second, we propose a method for the mobile nodes to adaptively predict the number of projections needed based on the speed of the mobile nodes. Extensive evaluation of the proposed algorithm using 6 datasets shows that our proposed algorithm can achieve sub-metre accuracy. In addition, our method of computing projection matrices outperforms two existing methods. Finally, comparison of our algorithm against a state-of-the-art trajectory compression algorithm shows that our algorithm can reduce the error by 10-60 cm for the same compression ratio.
Most adaptive compression algorithms proposed for WSNs are motivated by energy savings. Most of them consider slowly changing natural phenomena, which intrinsically require relatively low sampling rates. For example, @cite_7 proposed an adaptive compression algorithm wherein compression is adapted at the sensing node by analyzing the correlation in a centralized data store. Since the approach requires central-server-to-node communication, it is suitable for slowly changing phenomena, e.g., soil moisture. However, we consider trajectories sampled at rates as high as 2 Hz; such a technique may therefore result in enormous node-to-base communication, quickly depleting the sensor node battery.
{ "cite_N": [ "@cite_7" ], "mid": [ "2164680510" ], "abstract": [ "We propose a novel approach to reducing energy consumption in sensor networks using a distributed adaptive signal processing framework and efficient algorithm. While the topic of energy-aware routing to alleviate energy consumption in sensor networks has received attention recently (C. Toh, 2001; R. , 2002), in this paper, we propose an orthogonal approach to previous methods. Specifically, we propose a distributed way of continuously exploiting existing correlations in sensor data based on adaptive signal processing and distributed source coding principles. Our approach enables sensor nodes to blindly compress their readings with respect to one another without the need for explicit and energy-expensive intersensor communication to effect this compression. Furthermore, the distributed algorithm used by each sensor node is extremely low in complexity and easy to implement (i.e., one modulo operation), while an adaptive filtering framework is used at the data gathering unit to continuously learn the relevant correlation structures in the sensor data. Our simulations show the power of our proposed algorithms, revealing their potential to effect significant energy savings (from 10-65%) for typical sensor data corresponding to a multitude of sensor modalities." ] }
1404.6151
2150563378
Some mobile sensor network applications require the sensor nodes to transfer their trajectories to a data sink. This paper proposes an adaptive trajectory (lossy) compression algorithm based on compressive sensing. The algorithm has two innovative elements. First, we propose a method to compute a deterministic projection matrix from a learnt dictionary. Second, we propose a method for the mobile nodes to adaptively predict the number of projections needed based on the speed of the mobile nodes. Extensive evaluation of the proposed algorithm using 6 datasets shows that our proposed algorithm can achieve sub-metre accuracy. In addition, our method of computing projection matrices outperforms two existing methods. Finally, comparison of our algorithm against a state-of-the-art trajectory compression algorithm shows that our algorithm can reduce the error by 10-60 cm for the same compression ratio.
Some other adaptive compression algorithms, although not requiring extensive inter-node communication, involve a large amount of on-node processing. For example, @cite_27 proposed an adaptive wavelet compression algorithm for WSNs. In the proposed method, each receiving sensor computes the space savings and the total energy dissipation to decide whether to adjust the wavelet transform level. Clearly, this method involves substantial computation, given that for each trajectory segment it must iterate multiple times to determine the transform level that offers the best compression-energy trade-off.
{ "cite_N": [ "@cite_27" ], "mid": [ "2112025575" ], "abstract": [ "In this paper we propose a novel Adaptive Distributed Wavelet Compression (ADWC) algorithm for reducing energy consumption in a wireless sensor network, where each of the sensors has limited power. This algorithm is characterized by a distributed lifting factorization, which matches well with the transmission strategy employed in wireless sensor networks. The paper also presents an adaptive algorithm that selects the optimal wavelet compression parameters to minimize total energy dissipation. The simulation results showed that these approaches can achieve significant energy savings without sacrificing the quality of the data reconstruction." ] }
1404.6151
2150563378
Some mobile sensor network applications require the sensor nodes to transfer their trajectories to a data sink. This paper proposes an adaptive trajectory (lossy) compression algorithm based on compressive sensing. The algorithm has two innovative elements. First, we propose a method to compute a deterministic projection matrix from a learnt dictionary. Second, we propose a method for the mobile nodes to adaptively predict the number of projections needed based on the speed of the mobile nodes. Extensive evaluation of the proposed algorithm using 6 datasets shows that our proposed algorithm can achieve sub-metre accuracy. In addition, our method of computing projection matrices outperforms two existing methods. Finally, comparison of our algorithm against a state-of-the-art trajectory compression algorithm shows that our algorithm can reduce the error by 10-60 cm for the same compression ratio.
A similar problem arises with the algorithm proposed in @cite_11 , which employs a feedback approach in which the space savings are compared to a pre-determined threshold. If the space savings exceed this threshold, the compression model used for the previous frame is retained for the next frame; otherwise, the system produces a new compression model.
{ "cite_N": [ "@cite_11" ], "mid": [ "2109878464" ], "abstract": [ "Data compression techniques have extensive applications in power-constrained digital communication systems, such as in the rapidly-developing domain of wireless sensor network applications. This paper explores energy consumption tradeoffs associated with data compression, particularly in the context of lossless compression for acoustic signals. Such signal processing is relevant in a variety of sensor network applications, including surveillance and monitoring. Applying data compression in a sensor node generally reduces the energy consumption of the transceiver at the expense of additional energy expended in the embedded processor due to the computational cost of compression. This paper introduces a methodology for comparing data compression algorithms in sensor networks based on the figure of merit D/E, where D is the amount of data (before compression) that can be transmitted under a given energy budget E for computation and communication. We develop experiments to evaluate, using this figure of merit, different variants of linear predictive coding. We also demonstrate how different models of computation applied to the embedded software design lead to different degrees of processing efficiency, and thereby have significant effect on the targeted figure of merit." ] }
1404.6151
2150563378
Some mobile sensor network applications require the sensor nodes to transfer their trajectories to a data sink. This paper proposes an adaptive trajectory (lossy) compression algorithm based on compressive sensing. The algorithm has two innovative elements. First, we propose a method to compute a deterministic projection matrix from a learnt dictionary. Second, we propose a method for the mobile nodes to adaptively predict the number of projections needed based on the speed of the mobile nodes. Extensive evaluation of the proposed algorithm using 6 datasets shows that our proposed algorithm can achieve sub-metre accuracy. In addition, our method of computing projection matrices outperforms two existing methods. Finally, comparison of our algorithm against a state-of-the-art trajectory compression algorithm shows that our algorithm can reduce the error by 10-60 cm for the same compression ratio.
A slightly different adaptive compression principle is proposed in @cite_15 . The authors design an online adaptive algorithm that dynamically makes compression decisions to accommodate the changing state of WSNs. Using a queueing model, the algorithm predicts the effect of compression on the average packet delay and performs compression only when it can reduce the packet delay.
{ "cite_N": [ "@cite_15" ], "mid": [ "2105564986" ], "abstract": [ "In this paper, architectures for two-dimensional and three-dimensional underwater sensor networks are discussed. A detailed overview on the current solutions for medium access control, network, and transport layer protocols are given and open research issues are discussed." ] }
1404.6074
2949399602
Networks are ubiquitous in biology and computational approaches have been largely investigated for their inference. In particular, supervised machine learning methods can be used to complete a partially known network by integrating various measurements. Two main supervised frameworks have been proposed: the local approach, which trains a separate model for each network node, and the global approach, which trains a single model over pairs of nodes. Here, we systematically investigate, theoretically and empirically, the exploitation of tree-based ensemble methods in the context of these two approaches for biological network inference. We first formalize the problem of network inference as classification of pairs, unifying in the process homogeneous and bipartite graphs and discussing two main sampling schemes. We then present the global and the local approaches, extending the latter for the prediction of interactions between two unseen network nodes, and discuss their specializations to tree-based ensemble methods, highlighting their interpretability and drawing links with clustering techniques. Extensive computational experiments are carried out with these methods on various biological networks that clearly highlight that these methods are competitive with existing methods.
@cite_31 developed and applied the local approach with support vector machines to predict the PPI and MN networks and showed that it was superior to several previous works @cite_16 @cite_21 . They only considered @math predictions and used 5-fold CV. Although they exploited yeast-two-hybrid data as additional features for the prediction of the PPI network, we obtain very similar performances with the local multiple output approach (see Table ). @cite_22 use ensembles of output kernel trees to infer the MN and PPI networks with the same input data as @cite_31 . With the global approach, we obtain results similar or inferior to those of @cite_22 in terms of AUROC but much better results in terms of AUPR, especially on the MN data.
{ "cite_N": [ "@cite_31", "@cite_16", "@cite_21", "@cite_22" ], "mid": [ "2116504907", "2128817214", "2159944419", "" ], "abstract": [ "Motivation: Inference and reconstruction of biological networks from heterogeneous data is currently an active research subject with several important applications in systems biology. The problem has been attacked from many different points of view with varying degrees of success. In particular, predicting new edges with a reasonable false discovery rate is highly demanded for practical applications, but remains extremely challenging due to the sparsity of the networks of interest. Results: While most previous approaches based on the partial knowledge of the network to be inferred build global models to predict new edges over the network, we introduce here a novel method which predicts whether there is an edge from a newly added vertex to each of the vertices of a known network using local models. This involves learning individually a certain subnetwork associated with each vertex of the known network, then using the discovered classification rule associated with only that vertex to predict the edge to the new vertex. Excellent experimental results are shown in the case of metabolic and protein–protein interaction network reconstruction from a variety of genomic data. Availability: An implementation of the proposed algorithm is available upon request from the authors. Contact: Jean-Philippe.Vert@ensmp.fr", "Motivation: An increasing number of observations support the hypothesis that most biological functions involve the interactions between many proteins, and that the complexity of living systems arises as a result of such interactions. In this context, the problem of inferring a global protein network for a given organism, using all available genomic data about the organism, is quickly becoming one of the main challenges in current computational biology. 
Results: This paper presents a new method to infer protein networks from multiple types of genomic data. Based on a variant of kernel canonical correlation analysis, its originality is in the formalization of the protein network inference problem as a supervised learning problem, and in the integration of heterogeneous genomic data within this framework. We present promising results on the prediction of the protein network for the yeast Saccharomyces cerevisiae from four types of widely available data: gene expressions, protein interactions measured by yeast two-hybrid systems, protein localizations in the cell and protein phylogenetic profiles. The method is shown to outperform other unsupervised protein network inference methods. We finally conduct a comprehensive prediction of the protein network for all proteins of the yeast, which enables us to propose protein candidates for missing enzymes in a biosynthesis pathway. Availability: Softwares are available upon request.", "Motivation: Inferring networks of proteins from biological data is a central issue of computational biology. Most network inference methods, including Bayesian networks, take unsupervised approaches in which the network is totally unknown in the beginning, and all the edges have to be predicted. A more realistic supervised framework, proposed recently, assumes that a substantial part of the network is known. We propose a new kernel-based method for supervised graph inference based on multiple types of biological datasets such as gene expression, phylogenetic profiles and amino acid sequences. Notably, our method assigns a weight to each type of dataset and thereby selects informative ones. Data selection is useful for reducing data collection costs. For example, when a similar network inference problem must be solved for other organisms, the dataset excluded by our algorithm need not be collected. 
Results: First, we formulate supervised network inference as a kernel matrix completion problem, where the inference of edges boils down to estimation of missing entries of a kernel matrix. Then, an expectation--maximization algorithm is proposed to simultaneously infer the missing entries of the kernel matrix and the weights of multiple datasets. By introducing the weights, we can integrate multiple datasets selectively and thereby exclude irrelevant and noisy datasets. Our approach is favorably tested in two biological networks: a metabolic network and a protein interaction network. Availability: Software is available on request. Contact: kato-tsuyoshi@aist.go.jp Supplementary information: A supplementary report including mathematical details is available at www.cbrc.jp kato faem faem.html", "" ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.
The present work builds upon the second moment argument from @cite_2 . Conversely, our main result yields a small improvement over the lower bound from @cite_2 . Indeed, as we saw above, it implies that @math , thereby determining the precise "error term" @math in the lower bound.
{ "cite_N": [ "@cite_2" ], "mid": [ "2963885173" ], "abstract": [ "In this paper we establish a substantially improved lower bound on the k-colorability threshold of the random graph G(n, m) with n vertices and m edges. The new lower bound is ≈ 1.39 less than the 2k ln(k) − ln(k) first-moment upper bound (and approximately 0.39 less than the 2k ln(k) − ln(k) − 1 physics conjecture). By comparison, the best previous bounds left a gap of about 2+ln(k), unbounded in terms of the number of colors [Achlioptas, Naor: STOC 2004]. Furthermore, we prove that, in a precise sense, our lower bound marks the so-called condensation phase transition predicted on the basis of physics arguments [: PNAS 2007]. Our proof technique is a novel approach to the second moment method, inspired by physics conjectures on the geometry of the set of k-colorings of the random graph." ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.
In fact, @math is the best-possible lower bound that can be obtained via a certain "natural" type of second moment argument. Assume that @math is a random variable such that @math ; think of @math as a random variable that counts @math -colorings, perhaps excluding some "pathological cases". Then for any @math such that "the second moment method works", i.e., @math , a concentration result from @cite_27 implies that @math . Consequently, @math .
{ "cite_N": [ "@cite_27" ], "mid": [ "2115831572" ], "abstract": [ "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 − ε)χ colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1,...,k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs." ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.
The notion that for @math close to the (hypothetical) @math -colorability threshold @math it seems difficult to find a @math -coloring of @math algorithmically could be used to construct a candidate one-way function @cite_27 (see also @cite_21 ). This function maps a @math -coloring @math to a random graph @math by linking any two vertices @math with @math with some @math independently. The edge probability @math could be chosen such that the average degree of the resulting graph is close to the @math -colorability threshold. This distribution on graphs is the so-called planted distribution.
{ "cite_N": [ "@cite_27", "@cite_21" ], "mid": [ "2115831572", "2172366597" ], "abstract": [ "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 − ε)χ colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1,...,k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs.", "We suggest a candidate one-way function using combinatorial constructs such as expander graphs.
These graphs are used to determine a sequence of small overlapping subsets of input bits, to which a hard-wired random predicate is applied. Thus, the function is extremely easy to evaluate: All that is needed is to take multiple projections of the input bits, and to use these as entries to a look-up table. It is feasible for the adversary to scan the look-up table, but we believe it would be infeasible to find an input that fits a given sequence of values obtained for these overlapping projections. The conjectured difficulty of inverting the suggested function does not seem to follow from any well-known assumption. Instead, we propose the study of the complexity of inverting this function as an interesting open problem, with the hope that further research will provide evidence to our belief that the inversion task is intractable." ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.
If the planted distribution is close to @math , one might think that the function @math is difficult to invert. Indeed, it should be difficult to find any @math -coloring of @math , not to mention the planted coloring @math . As shown in @cite_27 , the planted distribution and @math are interchangeable (in a certain precise sense) iff @math . Hence, @math marks the point where these two distributions start to differ. In particular, our result shows that at the @math -colorability threshold, the two distributions are not interchangeable. In effect, experimental evidence that coloring @math is "difficult" at or near @math is inconclusive with respect to the problem of finding a @math -coloring in the planted model (which may, of course, well be difficult for some other reason).
{ "cite_N": [ "@cite_27" ], "mid": [ "2115831572" ], "abstract": [ "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 − ε)χ colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1,...,k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs." ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.
The cavity method has inspired new "message passing" algorithms known as Belief/Survey Propagation Guided Decimation @cite_7 . Experiments on random graph @math -coloring instances for small values of @math indicate an excellent performance of these algorithms @cite_19 @cite_8 @cite_18 . However, whether these experimental results are reliable and/or extend to larger @math remains shrouded in mystery.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_7", "@cite_8" ], "mid": [ "1980130379", "2105546154", "1518885151", "2963880447" ], "abstract": [ "We study the graph coloring problem over random graphs of finite average connectivity c. Given a number q of available colors, we find that graphs with low connectivity admit almost always a proper coloring whereas graphs with high connectivity are uncolorable. Depending on q, we find with a one-step replica-symmetry breaking approximation the precise value of the critical average connectivity @math Moreover, we show that below @math there exists a clustering phase @math in which ground states spontaneously divide into an exponential number of clusters. Furthermore, we extended our considerations to the case of single instances showing consistent results. This leads us to propose a different algorithm that is able to color in polynomial time random graphs in the hard but colorable region, i.e., when @math", "We consider the problem of coloring the vertices of a large sparse random graph with a given number of colors so that no adjacent vertices have the same color. Using the cavity method, we present a detailed and systematic analytical study of the space of proper colorings (solutions). We show that for a fixed number of colors and as the average vertex degree (number of constraints) increases, the set of solutions undergoes several phase transitions similar to those observed in the mean field theory of glasses. First, at the clustering transition, the entropically dominant part of the phase space decomposes into an exponential number of pure states so that beyond this transition a uniform sampling of solutions becomes hard. Afterward, the space of solutions condenses over a finite number of the largest states and consequently the total entropy of solutions becomes smaller than the annealed one. 
Another transition takes place when in all the entropically dominant states a finite fraction of nodes freezes so that each of these nodes is allowed a single color in all the solutions inside the state. Eventually, above the coloring threshold, no more solutions are available.", "We study the satisfiability of random Boolean expressions built from many clauses with K variables per clause (K-satisfiability). Expressions with a ratio α of clauses to variables less than a threshold α c are almost always satisfiable, whereas those with a ratio above this threshold are almost always unsatisfiable. We show the existence of an intermediate phase below α c , where the proliferation of metastable states is responsible for the onset of complexity in search algorithms. We introduce a class of optimization algorithms that can deal with these metastable states; one such algorithm has been tested successfully on the largest existing benchmark of K-satisfiability.", "Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult, it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this thesis is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. 
We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named \"locked\" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability." ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.
Perhaps the most plausible stab at understanding Belief Propagation Guided Decimation is the non-rigorous contribution @cite_10 . Roughly speaking, the result of the Belief Propagation fixed point iteration after @math iterations can be expected to yield a good approximation to the actual marginal distribution iff there is no condensation among the remaining list colorings. If so, one should expect that the algorithm actually finds a @math -coloring if condensation does not occur at any step @math . Thus, we look at a two-dimensional "phase diagram" parametrised by the average degree @math and the time @math . We need to identify the line that marks the (suitably defined) condensation phase transition in this diagram. The present work deals with the case @math , and it would be most interesting to see if the present techniques extend to @math . Attempts at (rigorously) analysing message passing algorithms along these lines have been made for random @math -SAT, but the current results are far from precise @cite_4 @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_4" ], "mid": [ "2101009127", "1973744137", "1833343854" ], "abstract": [ "We study the antiferromagnetic Potts model on the Poissonian Erdős–Rényi random graph. By identifying a suitable interpolation structure and an extended variational principle, together with a positive temperature second-moment analysis we prove the existence of a phase transition at a positive critical temperature. Upper and lower bounds on the temperature critical value are obtained from the stability analysis of the replica symmetric solution (recovered in the framework of Derrida-Ruelle probability cascades) and from an entropy positivity argument.", "We introduce a version of the cavity method for diluted mean-field spin models that allows the computation of thermodynamic quantities similar to the Franz–Parisi quenched potential in sparse random graph models. This method is developed in the particular case of partially decimated random constraint satisfaction problems. This allows us to develop a theoretical understanding of a class of algorithms for solving constraint satisfaction problems, in which elementary degrees of freedom are sequentially assigned according to the results of a message passing procedure (belief propagation). We confront this theoretical analysis with the results of extensive numerical simulations.", "Let Φ be a uniformly distributed random k-SAT formula with n variables and m clauses. Non-constructive arguments show that Φ is satisfiable for clause/variable ratios m/n ≤ rk ≈ 2^k ln 2 with high probability (Achlioptas, Moore: SICOMP 2006; Achlioptas, Peres: J. AMS 2004). Yet no efficient algorithm is known to find a satisfying assignment for densities as low as m/n ≈ rk · ln(k)/k with a non-vanishing probability. In fact, the density m/n ≈ rk · ln(k)/k seems to form a barrier for a broad class of local search algorithms (Achlioptas, Coja-Oghlan: FOCS 2008).
On the basis of deep but non-rigorous statistical mechanics considerations, a message passing algorithm called belief propagation guided decimation for solving random k-SAT has been put forward (Mezard, Parisi, Zecchina: Science 2002; Braunstein, Mezard, Zecchina: RSA 2005). Experiments suggest that the algorithm might succeed for densities very close to rk for k = 3, 4, 5 (Kroc, Sabharwal, Selman: SAC 2009). Furnishing the first rigorous analysis of belief propagation guided decimation on random k-SAT, the present paper shows that the algorithm fails to find a satisfying assignment already for m/n ≥ ρ · rk/k, for a constant ρ > 0 independent of k." ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k0.
With respect to ``diluted'' models, Coja-Oghlan and Zdeborova @cite_13 showed that a condensation phase transition exists in random @math -uniform hypergraph @math -coloring. Furthermore, @cite_13 determines the location of the condensation phase transition up to an error @math that tends to zero as the uniformity @math of the hypergraph becomes large. By contrast, the present paper is the first result that pins down the exact condensation phase transition in a diluted mean-field model.
{ "cite_N": [ "@cite_13" ], "mid": [ "1610636547" ], "abstract": [ "For many random constraint satisfaction problems such as random satisfiability or random graph or hypergraph coloring, the best current estimates of the threshold for the existence of solutions are based on the first and the second moment method. However, in most cases these techniques do not yield matching upper and lower bounds. Sophisticated but non-rigorous arguments from statistical mechanics have ascribed this discrepancy to the existence of a phase transition called condensation that occurs shortly before the actual threshold for the existence of solutions and that affects the combinatorial nature of the problem (Krzakala, Montanari, Ricci-Tersenghi, Semerjian, Zdeborova: PNAS 2007). In this paper we prove for the first time that a condensation transition exists in a natural random CSP, namely in random hypergraph 2-coloring. Perhaps surprisingly, we find that the second moment method applied to the number of 2-colorings breaks down strictly before the condensation transition. Our proof also yields slightly improved bounds on the threshold for random hypergraph 2-colorability." ] }
1404.5513
1598946980
Based on a non-rigorous formalism called the “cavity method”, physicists have put forward intriguing predictions on phase transitions in diluted mean-field models, in which the geometry of interactions is induced by a sparse random graph or hypergraph. One example of such a model is the graph coloring problem on the Erdős–Rényi random graph G(n, d/n), which can be viewed as the zero temperature case of the Potts antiferromagnet. The cavity method predicts that in addition to the k-colorability phase transition studied intensively in combinatorics, there exists a second phase transition called the condensation phase transition ( in Proc Natl Acad Sci 104:10318–10323, 2007). In fact, there is a conjecture as to the precise location of this phase transition in terms of a certain distributional fixed point problem. In this paper we prove this conjecture for k exceeding a certain constant k_0.
Technically, we build upon some of the techniques that have been developed to study the ``geometry'' of the set of @math -colorings of the random graph and add to this machinery. Among the techniques that we harness are the ``planting trick'' from @cite_27 (which, in a sense, we are going to put into ``reverse''), the notion of a core @cite_27 @cite_2 @cite_20 , techniques for proving the existence of ``frozen variables'' @cite_20 , and a concentration argument from @cite_13 . Additionally, our proof directly incorporates some of the physics calculations from [Appendix C] LenkaFlorent . That said, the cornerstone of the present work is a novel argument that allows us to connect the distributional fixed point problem from @cite_18 rigorously with the geometry of the set of @math -colorings.
{ "cite_N": [ "@cite_18", "@cite_27", "@cite_2", "@cite_13", "@cite_20" ], "mid": [ "2105546154", "2115831572", "2963885173", "1610636547", "1991809010" ], "abstract": [ "We consider the problem of coloring the vertices of a large sparse random graph with a given number of colors so that no adjacent vertices have the same color. Using the cavity method, we present a detailed and systematic analytical study of the space of proper colorings (solutions). We show that for a fixed number of colors and as the average vertex degree (number of constraints) increases, the set of solutions undergoes several phase transitions similar to those observed in the mean field theory of glasses. First, at the clustering transition, the entropically dominant part of the phase space decomposes into an exponential number of pure states so that beyond this transition a uniform sampling of solutions becomes hard. Afterward, the space of solutions condenses over a finite number of the largest states and consequently the total entropy of solutions becomes smaller than the annealed one. Another transition takes place when in all the entropically dominant states a finite fraction of nodes freezes so that each of these nodes is allowed a single color in all the solutions inside the state. Eventually, above the coloring threshold, no more solutions are available.", "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. 
Yet, to date, no algorithm is known that uses (2 − ε)χ colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1,...,k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs.", "In this paper we establish a substantially improved lower bound on the k-colorability threshold of the random graph G(n, m) with n vertices and m edges. The new lower bound is ≈ 1.39 less than the 2k ln(k) − ln(k) first-moment upper bound (and approximately 0.39 less than the 2k ln(k) − ln(k) − 1 physics conjecture). By comparison, the best previous bounds left a gap of about 2 + ln(k), unbounded in terms of the number of colors [Achlioptas, Naor: STOC 2004]. Furthermore, we prove that, in a precise sense, our lower bound marks the so-called condensation phase transition predicted on the basis of physics arguments [: PNAS 2007]. 
Our proof technique is a novel approach to the second moment method, inspired by physics conjectures on the geometry of the set of k-colorings of the random graph.", "For many random constraint satisfaction problems such as random satisfiability or random graph or hypergraph coloring, the best current estimates of the threshold for the existence of solutions are based on the first and the second moment method. However, in most cases these techniques do not yield matching upper and lower bounds. Sophisticated but non-rigorous arguments from statistical mechanics have ascribed this discrepancy to the existence of a phase transition called condensation that occurs shortly before the actual threshold for the existence of solutions and that affects the combinatorial nature of the problem (Krzakala, Montanari, Ricci-Tersenghi, Semerjian, Zdeborova: PNAS 2007). In this paper we prove for the first time that a condensation transition exists in a natural random CSP, namely in random hypergraph 2-coloring. Perhaps surprisingly, we find that the second moment method applied to the number of 2-colorings breaks down strictly before the condensation transition. Our proof also yields slightly improved bounds on the threshold for random hypergraph 2-colorability.", "We rigorously determine the exact freezing threshold, r_k^f, for k-colourings of a random graph. We prove that for random graphs with density above r_k^f, almost every colouring is such that a linear number of variables are frozen, meaning that their colours cannot be changed by a sequence of alterations whereby we change the colours of o(n) vertices at a time, always obtaining another proper colouring. When the density is below r_k^f, then almost every colouring has at most o(n) frozen variables. This confirms hypotheses made using the non-rigorous cavity method. 
It has been hypothesized that the freezing threshold is the cause of the \"algorithmic barrier\", the long observed phenomenon that when the edge-density of a random graph exceeds (1/2) k ln k (1 + o_k(1)), no algorithms are known to find k-colourings, despite the fact that this density is only half the k-colourability threshold. We also show that r_k^f is the threshold of a strong form of reconstruction for k-colourings of the Galton-Watson tree, and of the graphical model." ] }
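The factor-of-2 "algorithmic barrier" described in these abstracts is easy to observe experimentally. Below is a minimal Python sketch (first-fit greedy coloring on G(n, d/n)); the folklore fact that greedy uses roughly d/ln d colors, about twice the chromatic number d/(2 ln d), is background knowledge, not a claim of the cited papers.

```python
import random

def gnp(n, d, seed=0):
    """Erdős–Rényi random graph G(n, d/n) as an adjacency list."""
    rng, p = random.Random(seed), d / n
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def greedy_color(adj):
    """First-fit: give each vertex the smallest color unused by its neighbors."""
    color = [None] * len(adj)
    for u in range(len(adj)):
        taken = {color[v] for v in adj[u] if color[v] is not None}
        color[u] = next(c for c in range(len(adj) + 1) if c not in taken)
    return color

def is_proper(adj, color):
    """Check that no edge is monochromatic."""
    return all(color[u] != color[v] for u in range(len(adj)) for v in adj[u])
```

Closing the gap between what greedy achieves and the true chromatic number is exactly the barrier discussed above.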
1404.5421
2951035916
We consider the problem of multiple users targeting the arms of a single multi-armed stochastic bandit. The motivation for this problem comes from cognitive radio networks, where selfish users need to coexist without any side communication between them, implicit cooperation or common control. Even the number of users may be unknown and can vary as users join or leave the network. We propose an algorithm that combines an @math -greedy learning rule with a collision avoidance mechanism. We analyze its regret with respect to the system-wide optimum and show that sub-linear regret can be obtained in this setting. Experiments show dramatic improvement compared to other algorithms for this setting.
The work closest in spirit to ours is @cite_3 . The authors propose different algorithms for solving the CRN-MAB problem, attempting to lift assumptions of cooperation and communication as they go along. Their main contribution is an algorithm which is coordination and communication free, but relies on exact knowledge of the number of users in the network. In order to resolve this issue, an algorithm based on estimating the number of users through feedback is proposed. Performance guarantees for this algorithm are rather vague, and it does not address the scenario of a time-varying number of users.
{ "cite_N": [ "@cite_3" ], "mid": [ "2169926276" ], "abstract": [ "The problem of distributed learning and channel access is considered in a cognitive network with multiple secondary users. The availability statistics of the channels are initially unknown to the secondary users and are estimated using sensing decisions. There is no explicit information exchange or prior agreement among the secondary users and sensing and access decisions are undertaken by them in a completely distributed manner. We propose policies for distributed learning and access which achieve order-optimal cognitive system throughput (number of successful secondary transmissions) under self play, i.e., when implemented at all the secondary users. Equivalently, our policies minimize the sum regret in distributed learning and access, which is the loss in secondary throughput due to learning and distributed access. For the scenario when the number of secondary users is known to the policy, we prove that the total regret is logarithmic in the number of transmission slots. This policy achieves order-optimal regret based on a logarithmic lower bound for regret under any uniformly-good learning and access policy. We then consider the case when the number of secondary users is fixed but unknown, and is estimated at each user through feedback. We propose a policy whose sum regret grows only slightly faster than logarithmic in the number of transmission slots." ] }
1404.5421
2951035916
We consider the problem of multiple users targeting the arms of a single multi-armed stochastic bandit. The motivation for this problem comes from cognitive radio networks, where selfish users need to coexist without any side communication between them, implicit cooperation or common control. Even the number of users may be unknown and can vary as users join or leave the network. We propose an algorithm that combines an @math -greedy learning rule with a collision avoidance mechanism. We analyze its regret with respect to the system-wide optimum and show that sub-linear regret can be obtained in this setting. Experiments show dramatic improvement compared to other algorithms for this setting.
A different approach to resource allocation with multiple noncooperative users involves game-theoretic concepts @cite_16 @cite_15 . In our work we focus on cognitive, rather than strategic, users. Yet another perspective includes work on CRNs with multiple secondary users, where the emphasis is placed on collision avoidance and sensing. References such as @cite_8 and @cite_12 propose ALOHA-based algorithms, achieving favorable results. However, these works do not consider the learning problem we are facing, and assume all channels to be known and identical.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_12", "@cite_8" ], "mid": [ "2160081863", "2143001213", "2063542522", "2124683077" ], "abstract": [ "\"Cognitive radio\" is an emerging technique to improve the utilization of radio frequency spectrum in wireless networks. In this paper, we consider the problem of spectrum sharing among a primary user and multiple secondary users. We formulate this problem as an oligopoly market competition and use a noncooperative game to obtain the spectrum allocation for secondary users. Nash equilibrium is considered as the solution of this game. We first present the formulation of a static game for the case where all secondary users have the current information of the adopted strategies and the payoff of each other. However, this assumption may not be realistic in some cognitive radio systems. Therefore, we consider the case of bounded rationality in which the secondary users gradually and iteratively adjust their strategies based on the observations on their previous strategies. The speed of adjustment of the strategies is controlled by the learning rate. The stability condition of the dynamic behavior for this spectrum sharing scheme is investigated. The numerical results reveal the dynamics of distributed dynamic adaptation of spectrum sharing strategies.", "In this work, we propose a game theoretic framework to analyze the behavior of cognitive radios for distributed adaptive channel allocation. We define two different objective functions for the spectrum sharing games, which capture the utility of selfish users and cooperative users, respectively. Based on the utility definition for cooperative users, we show that the channel allocation problem can be formulated as a potential game, and thus converges to a deterministic channel allocation Nash equilibrium point. 
Alternatively, a no-regret learning implementation is proposed for both scenarios and it is shown to have similar performance with the potential game when cooperation is enforced, but with a higher variability across users. The no-regret learning formulation is particularly useful to accommodate selfish users. Non-cooperative learning games have the advantage of a very low overhead for information exchange in the network. We show that cooperation based spectrum sharing etiquette improves the overall network performance at the expense of an increased overhead required for information exchange.", "In this paper, we investigate a novel slotted ALOHA-based distributed access cognitive network in which a secondary user (SU) selects a random subset of channels for sensing, detects an idle (unused by licensed users) subset therein, and transmits in any one of those detected idle channels. First, we derive a range for the number of channels to be sensed per SU access. Then, the analytical average system throughput is attained for cases where the number of idle channels is a random variable. Based on that, a relationship between the average system throughput and the number of sensing channels is attained. Subsequently, a joint optimization problem is formulated in order to maximize average system throughput. The analytical results are validated by substantial simulations.", "In this paper we present a slotted ALOHA based multi-channel cognitive radio network opportunistically using the unused spectrum of incumbent devices and analyze its throughput and delay. We also evaluate packet capture effect of our proposed system over Rayleigh fading." ] }
1404.5545
1466611538
We consider the problem of @math -testing of the class of bounded derivative properties over hypergrid domain with points distributed according to some product distribution. This class includes monotonicity, the Lipschitz property, @math -generalized Lipschitz and many more properties. Previous results for @math testing on @math for this class were known for monotonicity and @math -Lipschitz properties over uniformly distributed domains. Our results imply testers that give the same upper bound for arbitrary product distributions as the hitherto known testers, which use uniformly randomly chosen samples from @math , for monotonicity and Lipschitz testing. Also, our testers are for a large class of bounded derivative properties, that includes the @math -generalized Lipschitz property, over uniform distributions. In fact, each edge in @math is allowed to have its own left and right Lipschitz constants. The time complexity is for arbitrary product distributions.
Goldreich et al. @cite_36 had already posed the question of testing properties of functions over non-uniform distributions, and obtained some results for dense graph properties. A serious study of the role of distributions was undertaken by Halevy and Kushilevitz @cite_18 @cite_13 @cite_6 @cite_4 , who formalized the concept of distribution-free testing. (Refer to Halevy's thesis @cite_3 for a comprehensive study.) Glasner and Servedio @cite_28 and Dolev and Ron @cite_2 give various upper and lower bounds for distribution-free testers for various classes of functions (not monotonicity) over @math .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_36", "@cite_28", "@cite_6", "@cite_3", "@cite_2", "@cite_13" ], "mid": [ "", "2046654297", "1970630090", "2568354864", "1557097080", "", "2186390088", "2006402542" ], "abstract": [ "", "We consider distribution-free property-testing of graph connectivity. In this setting of property testing, the distance between functions is measured with respect to a fixed but unknown distribution D on the domain, and the testing algorithm has an oracle access to random sampling from the domain according to this distribution D. This notion of distribution-free testing was previously defined, and testers were shown for very few properties. However, no distribution-free property testing algorithm was known for any graph property. We present the first distribution-free testing algorithms for one of the central properties in this area—graph connectivity (specifically, the problem is mainly interesting in the case of sparse graphs). We introduce three testing models for sparse graphs: A model for bounded-degree graphs, A model for graphs with a bound on the total number of edges (both models were already considered in the context of uniform distribution testing), and A model which is a combination of the two previous testing models; i.e., bounded-degree graphs with a bound on the total number of edges. We prove that connectivity can be tested in each of these testing models, in a distribution-free manner, using a number of queries that is independent of the size of the graph. This is done by providing a new analysis to previously known connectivity testers (from “standard”, uniform distribution property-testing) and by introducing some new testers.", "In this paper, we consider the question of determining whether a function f has property P or is e-far from any function with property P. A property testing algorithm is given a sample of the value of f on instances drawn according to some distribution. 
In some cases, it is also allowed to query f on instances of its choice. We study this question for different properties and establish some connections to problems in learning theory and approximation. In particular, we focus our attention on testing graph properties. Given access to a graph G in the form of being able to query whether an edge exists or not between a pair of vertices, we devise algorithms to test whether the underlying graph has properties such as being bipartite, k-colorable, or having a ρ-clique (clique of density ρ with respect to the vertex set). Our graph property testing algorithms are probabilistic and make assertions that are correct with high probability, while making a number of queries that is independent of the size of the graph. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph that correspond to the property being tested, if it holds for the input graph.", "", "We consider monotonicity testing of functions f: [n]^d → {0,1}, in the property testing framework of Rubinfeld and Sudan [23] and Goldreich, Goldwasser and Ron [14]. Specifically, we consider the framework of distribution-free property testing, where the distance between functions is measured with respect to a fixed but unknown distribution D on the domain and the testing algorithms have an oracle access to random sampling from the domain according to this distribution D. We show that, though in the uniform distribution case, testing of boolean functions defined over the boolean hypercube can be done using query complexity that is polynomial in @math and in the dimension d, in the distribution-free setting such testing requires a number of queries that is exponential in d. 
Therefore, in the high-dimensional case (as opposed to the low-dimensional case), the gap between the query complexity for the uniform and the distribution-free settings is exponential.", "", "We consider the problem of distribution-free testing of the class of monotone monomials and the class of monomials over n variables. While there are very efficient testers for a variety of classes of functions when the underlying distribution is uniform, designing distribution-free testers (which must work under an arbitrary and unknown distribution) tends to be more challenging. When the underlying distribution is uniform, (SIAM J. Discr. Math., 2002) give a tester for (monotone) monomials whose query complexity does not depend on n, and whose dependence on the distance parameter is (inverse) linear. In contrast, Glasner and Servedio (Theory of Computing, 2009) prove that every distribution-free tester for monotone monomials as well as for general monomials must have query complexity Ω̃(n^{1/5}) (for a constant distance parameter ε). In this paper we present distribution-free testers for these classes with query complexity Õ(n^{1/2}/ε). We note that in contrast to previous results for distribution-free testing, our testers do not build on the testers that work under the uniform distribution. Rather, we define and exploit certain structural properties of monomials (and functions that differ from them on a non-negligible part of the input space), which were not used in previous work on property testing.", "We consider the problem of monotonicity testing over graph products. Monotonicity testing is one of the central problems studied in the field of property testing. We present a testing approach that enables us to use known monotonicity testers for given graphs G1, G2, to test monotonicity over their product G1 × G2. 
Such an approach of reducing monotonicity testing over a graph product to monotonicity testing over the original graphs, has been previously used in the special case of monotonicity testing over [n]^d for a limited type of testers; in this article, we show that this approach can be applied to allow modular design of testers in many interesting cases: this approach works whenever the functions are boolean, and also in certain cases for functions with a general range. We demonstrate the usefulness of our results by showing how a careful use of this approach improves the query complexity of known testers. Specifically, based on our results, we provide a new analysis for the known tester for [n]^d which significantly improves its query complexity analysis in the low-dimensional case. For example, when d = O(1), we reduce the best known query complexity from O(log^2 n/ε) to O(log n/ε). © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 An extended abstract of this paper appeared in ICALP 2004 [23]." ] }
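As a concrete illustration of the uniform-distribution monotonicity testing discussed throughout this record, here is a sketch of the classic binary-search spot-checker over the line [n] (in the style of Ergün et al.). The sample-budget constant is an arbitrary choice, and the distribution-free and hypergrid variants discussed above require more machinery.

```python
import random

def monotone_tester(f, n, eps, seed=0):
    """One-sided tester: always accepts monotone f on {0,...,n-1}; rejects
    functions eps-far from monotone with high probability. The budget
    constant (20) is an illustrative choice, not a tight bound."""
    rng = random.Random(seed)
    samples = int(20 * n.bit_length() / eps) + 1
    for _ in range(samples):
        x = rng.randrange(n)
        lo, hi = 0, n - 1
        while lo < hi:                 # binary-search path toward index x
            mid = (lo + hi) // 2
            if x <= mid:
                if f(x) > f(mid):      # order violation along the path
                    return False
                hi = mid
            else:
                if f(mid) > f(x):
                    return False
                lo = mid + 1
    return True                        # accept: no violation witnessed
```

Monotone functions pass every comparison along every search path, which is what makes the tester one-sided.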
1404.5002
2158162651
Many graph processing algorithms require determination of shortest-path distances between arbitrary numbers of node pairs. Since computation of exact distances between all node-pairs of a large graph, e.g., 10M nodes and up, is prohibitively expensive both in computational time and storage space, distance approximation is often used in place of exact computation. A distance oracle is a data structure that answers inter-point distance queries more efficiently than the standard O(n^2) time or storage space for an n node graph, e.g., in O(n log n). In this paper, we present a novel and scalable distance oracle that leverages the hyperbolic core of real-world large graphs for fast and scalable distance approximation via spanning trees. We show empirically that the proposed oracle significantly outperforms prior oracles on a random set of test cases drawn from public domain graph libraries. There are two sets of prior work against which we benchmark our approach. The first set, which often outperforms all other oracles, employs embedding of the graph into low dimensional Euclidean spaces with carefully constructed hyperbolic distances, but provides no guarantees on the distance estimation error. The second set leverages Gromov-type tree contraction of the graph with the additive error guaranteed not to exceed 2δ log n, where δ is the hyperbolic constant of the graph. We show that our proposed oracle 1) is significantly faster than those oracles that use hyperbolic embedding (first set) with similar approximation error and, perhaps surprisingly, 2) exhibits substantially lower average estimation error compared to Gromov-like tree contractions (second set). We substantiate our claims through numerical computations on a collection of a dozen real world networks and synthetic test cases from multiple domains, ranging in size from tens of thousands to tens of millions of nodes.
Theoretical Bounds on General Graphs. There is a rich body of literature on distance oracles for general graphs, including the special case of distance labeling schemes, where the distance for query node pairs is estimated by merely using labels associated with the query nodes, and the related problem of graph spanners. The seminal work of Thorup and Zwick @cite_7 described a distance oracle that gives @math approximation with @math query time, @math space and @math preprocessing time on an arbitrary weighted undirected graph with @math nodes and @math edges, for any integer @math . The preprocessing time and the query time of this distance oracle were subsequently improved (cf. @cite_44 @cite_4 @cite_1 @cite_2 ), but the space versus approximation factor trade-off has remained almost the same. In fact, various lower bounds have been proved under plausible conjectures (cf. @cite_29 and the references therein) for the space versus worst-case approximation factor trade-off. These lower bound results suggest that it is unlikely that a distance oracle can result in a significantly better trade-off for general weighted graphs.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_29", "@cite_1", "@cite_44", "@cite_2" ], "mid": [ "2034680223", "2045446569", "2108166877", "2130725973", "2157760787", "2005985615" ], "abstract": [ "Let G = (V,E) be a weighted undirected graph with |V | = n and |E| = m. An estimate ( u,v ) of the distance ( u,v ) in G between u, v V is said to be of stretch t iff ( u,v ) ( u,v ) t ? ( u,v ). The most efficient algorithms known for computing small stretch distances in G are the approximate distance oracles of [16] and the three algorithms in [9] to compute all-pairs stretch t distances for t = 2, 7 3, and 3. We present faster algorithms for these problems. For any integer k 1, Thorup and Zwick in [16] gave an O(kmn^ 1 k ) algorithm to construct a data structure of size O(kn^ 1+1 k ) which, given a query (u, v) V ? V , returns in O(k) time, a 2k - 1 stretch estimate of ( u,v ). But for small values of k, the time to construct the oracle is rather high. Here we present an O(n^2 log n) algorithm to construct such a data structure of size O(kn^ 1+1 k ) for all integers k 2. Our query answering time is O(k) for k 2 and (log n) for k = 2. We use a new generic scheme for all-pairs approximate shortest paths for these results. This scheme also enables us to design faster algorithms for allpairs t-stretch distances for t = 2 and 7 3, and compute all-pairs almost stretch 2 distances in O(n^2 log n) time.", "Let G = (V,E) be an undirected weighted graph with vVv = n and vEv = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(kmn1 k) expected time, constructing a data structure of size O(kn1p1 k), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. 
A 1963 girth conjecture of Erdős implies that Ω(n^{1+1/k}) space is needed in the worst case for any real stretch strictly smaller than 2k+1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n^{1+1/k}) space had a query time of Ω(n^{1/k}). Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.", "We give the first improvement to the space approximation trade-off of distance oracles since the seminal result of Thorup and Zwick [STOC'01]. For unweighted graphs, our distance oracle has size @math and, when queried about vertices at distance @math , returns a path of length @math . For weighted graphs with @math edges, our distance oracle has size @math and returns a factor 2 approximation. Based on a plausible conjecture about the hardness of set intersection queries, we show that a 2-approximate distance oracle requires space @math . For unweighted graphs, this implies a @math space lower bound to achieve approximation @math .", "Thorup and Zwick, in the seminal paper [Journal of ACM, 52(1), 2005, pp 1-24], showed that a weighted undirected graph on n vertices can be preprocessed in subcubic time to design a data structure which occupies only subquadratic space, and yet, for any pair of vertices, can answer distance queries approximately in constant time. The data structure is termed as approximate distance oracle. Subsequently, there has been improvement in their preprocessing time, and presently the best known algorithms [4,3] achieve expected O(n^2) preprocessing time for these oracles. For a class of graphs, these algorithms indeed run in Θ(n^2) time. 
Inthis paper, we are able to break this quadratic barrier at theexpense of introducing a (small) constant additive error forunweighted graphs. In achieving this goal, we have been able topreserve the optimal size-stretch trade offs of the oracles. One ofour algorithms can be extended to weighted graphs, where theadditive error becomes 2·wmax(u,v) - herewmax(u,v) is the heaviestedge in the shortest path between vertices u,v.", "Let G e (V, E) be an undirected graph on n vertices, and let Δ(u, v) denote the distance in G between two vertices u and v. Thorup and Zwick showed that for any positive integer t, the graph G can be preprocessed to build a data structure that can efficiently report t-approximate distance between any pair of vertices. That is, for any u, v ∈ V, the distance reported is at least Δ(u, v) and at most tΔ(u, v). The remarkable feature of this data structure is that, for t≥3, it occupies subquadratic space, that is, it does not store all-pairs distances explicitly, and still it can answer any t-approximate distance query in constant time. They named the data structure “approximate distance oracle” because of this feature. Furthermore, the trade-off between the stretch t and the size of the data structure is essentially optimal.In this article, we show that we can actually construct approximate distance oracles in expected O(n2) time if the graph is unweighted. One of the new ideas used in the improved algorithm also leads to the first expected linear-time algorithm for computing an optimal size (2, 1)-spanner of an unweighted graph. A (2, 1) spanner of an undirected unweighted graph G e (V, E) is a subgraph (V, E), E ⊆ E, such that for any two vertices u and v in the graph, their distance in the subgraph is at most 2Δ(u, v) p 1.", "This paper addresses the non-linear isomorphic Dvoretzky theorem and the design of good approximate distance oracles for large distortion. 
We introduce and construct optimal Ramsey partitions, and use them to show that for every (0, 1), any n-point metric space has a subset of size n^ 1 - which embeds into Hilbert space with distortion O(1 ). This result is best possible and improves part of the metric Ramsey theorem of Bartal, Linial, Mendel and Naor [5], in addition to considerably simplifying its proof. We use our new Ramsey partitions to design approximate distance oracles with a universal constant query time, closing a gap left open by Thorup and Zwick in [26]. Namely, we show that for any n point metric space X, and k 1, there exists an O(k)-approximate distance oracle whose storage requirement is O(n^ 1 + 1 k ), and whose query time is a universal constant. We also discuss applications to various other geometric data structures, and the relation to well separated pair decompositions." ] }
1404.5002
2158162651
Many graph processing algorithms require determination of shortest-path distances between arbitrary numbers of node pairs. Since computation of exact distances between all node-pairs of a large graph, e.g., 10M nodes and up, is prohibitively expensive both in computational time and storage space, distance approximation is often used in place of exact computation. A distance oracle is a data structure that answers inter-point distance queries more efficiently than the standard O(n^2) in time or storage space for an n-node graph, e.g., in O(n log n). In this paper, we present a novel and scalable distance oracle that leverages the hyperbolic core of real-world large graphs for fast and scalable distance approximation via spanning trees. We show empirically that the proposed oracle significantly outperforms prior oracles on a random set of test cases drawn from public domain graph libraries. There are two sets of prior work against which we benchmark our approach. The first set, which often outperforms all other oracles, employs embedding of the graph into low-dimensional Euclidean spaces with carefully constructed hyperbolic distances, but provides no guarantees on the distance estimation error. The second set leverages Gromov-type tree contraction of the graph with the additive error guaranteed not to exceed 2δ log n, where δ is the hyperbolic constant of the graph. We show that our proposed oracle 1) is significantly faster than those oracles that use hyperbolic embedding (first set) with similar approximation error and, perhaps surprisingly, 2) exhibits substantially lower average estimation error compared to Gromov-like tree contractions (second set). We substantiate our claims through numerical computations on a collection of a dozen real-world networks and synthetic test cases from multiple domains, ranging in size from tens of thousands to tens of millions of nodes.
Empirical Work on Road Networks. In contrast to the theoretical work on general graphs, there has been considerable algorithm engineering and experimentation work on road networks for navigation applications using global positioning systems (GPS). These solutions (e.g., @cite_41 @cite_9 @cite_33 @cite_20 ) crucially rely on many specific characteristic properties of road networks such as the existence of small natural cuts, a grid-like structure, highway hierarchies, guiding the search towards the target using the latitude and longitude of the target location, etc. On other graph classes such as those from online social networks, equivalent solutions are not generally known to produce equally good approximations.
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_33", "@cite_20" ], "mid": [ "2579664199", "2153374077", "2086416434", "2062519180" ], "abstract": [ "The classical solution for computing shortest paths is Dijkstra's algorithm, which finds them in O(m + n log m) time, where m is the number of edges and n the number of nodes. Computing a path between two arbitrary nodes of the US road network, which consists of about 24 million nodes and about 58 million edges, takes more than a second on today's machines. For many applications this is too slow. Although it is still an open question whether Dijkstra [1] is optimal for such queries, there is an obvious lower bound of Ω(m + n). To achieve faster, sublinear query times, preprocessing is required. Holger et al. [2] introduced the transit-node concept, a general scheme for selecting a set of road nodes for precomputation, which is discussed in more detail in the following.", "We present a novel approach to graph partitioning based on the notion of natural cuts. Our algorithm, called PUNCH, has two phases. The first phase performs a series of minimum-cut computations to identify and contract dense regions of the graph. This reduces the graph size, but preserves its general structure. The second phase uses a combination of greedy and local search heuristics to assemble the final partition. The algorithm performs especially well on road networks, which have an abundance of natural cuts (such as bridges, mountain passes, and ferries). In a few minutes, it obtains the best known partitions for continental-sized networks, significantly improving on previous results.", "", "Highway hierarchies exploit hierarchical properties inherent in real-world road networks to allow fast and exact point-to-point shortest-path queries.
A fast preprocessing routine iteratively performs two steps: First, it removes edges that only appear on shortest paths close to source or target; second, it identifies low-degree nodes and bypasses them by introducing shortcut edges. The resulting hierarchy of highway networks is then used in a Dijkstra-like bidirectional query algorithm to considerably reduce the search space size without losing exactness. The crucial fact is that ‘far away’ from source and target it is sufficient to consider only high-level edges. Experiments with road networks for a continent show that using a preprocessing time of around 15 min, one can achieve a query time of around 1ms on a 2.0GHz AMD Opteron. Highway hierarchies can be combined with goal-directed search, they can be extended to answer many-to-many queries, and they can be used as a basis for other speed-up techniques (e.g., for transit-node routing and highway-node routing)." ] }
1404.5002
2158162651
Many graph processing algorithms require determination of shortest-path distances between arbitrary numbers of node pairs. Since computation of exact distances between all node-pairs of a large graph, e.g., 10M nodes and up, is prohibitively expensive both in computational time and storage space, distance approximation is often used in place of exact computation. A distance oracle is a data structure that answers inter-point distance queries more efficiently than the standard O(n^2) in time or storage space for an n-node graph, e.g., in O(n log n). In this paper, we present a novel and scalable distance oracle that leverages the hyperbolic core of real-world large graphs for fast and scalable distance approximation via spanning trees. We show empirically that the proposed oracle significantly outperforms prior oracles on a random set of test cases drawn from public domain graph libraries. There are two sets of prior work against which we benchmark our approach. The first set, which often outperforms all other oracles, employs embedding of the graph into low-dimensional Euclidean spaces with carefully constructed hyperbolic distances, but provides no guarantees on the distance estimation error. The second set leverages Gromov-type tree contraction of the graph with the additive error guaranteed not to exceed 2δ log n, where δ is the hyperbolic constant of the graph. We show that our proposed oracle 1) is significantly faster than those oracles that use hyperbolic embedding (first set) with similar approximation error and, perhaps surprisingly, 2) exhibits substantially lower average estimation error compared to Gromov-like tree contractions (second set). We substantiate our claims through numerical computations on a collection of a dozen real-world networks and synthetic test cases from multiple domains, ranging in size from tens of thousands to tens of millions of nodes.
Distance oracles have been investigated from both theoretical and practical perspectives. As described earlier, our focus is on distance oracles that provide accurate distance estimates on large real-world graphs rapidly (a few microseconds) while having scalable preprocessing and a near-linear storage requirement. This category includes a number of recent practical heuristics ( @cite_12 @cite_24 @cite_14 @cite_43 @cite_5 @cite_6 ) that aim to estimate distances by embedding the graph in geometric spaces, such as Euclidean or hyperbolic, or by extracting different kinds of approximating trees from the real graph. Other heuristics, such as @cite_43 @cite_8 , or the landmark-based approaches with diverse seeding strategies @cite_31 , use variants of breadth-first search (BFS) trees. We also remark that the sketch-based distance oracle by @cite_26 , which engineers a distance oracle with provable multiplicative guarantees, is similar in nature to the other approaches cited above. On the other hand, there are theoretical approaches @cite_27 @cite_37 @cite_18 @cite_17 that prove worst-case accuracy bounds for specific graph classes, such as those with a power-law degree distribution or those with small graph hyperbolicity @cite_0 @cite_10 @cite_40 . These techniques have not been evaluated on large real-world graphs; we evaluate some of them when benchmarking our oracle's performance.
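The landmark-based heuristics referenced above share a common offline/online split: precompute BFS distances from a few landmark nodes, then answer queries with a triangle-inequality bound. A minimal sketch of that skeleton (the toy graph, landmark choice, and function names are illustrative, not taken from any cited system):

```python
from collections import deque

def bfs_distances(adj, source):
    """BFS distances from `source` in an unweighted graph
    given as an adjacency dict {node: [neighbors]}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_landmark_oracle(adj, landmarks):
    """Offline phase: one full BFS distance table per landmark."""
    return {l: bfs_distances(adj, l) for l in landmarks}

def estimate_distance(oracle, u, v):
    """Online phase, triangle-inequality upper bound:
    d(u,v) <= min over landmarks L of d(u,L) + d(L,v)."""
    return min(tab[u] + tab[v] for tab in oracle.values())

# Toy graph: a path 0-1-2-3-4 plus a chord 1-3.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
oracle = build_landmark_oracle(adj, landmarks=[1, 3])
print(estimate_distance(oracle, 0, 4))  # 3 (here the bound is exact)
```

The estimate is always an upper bound on the true distance and is exact whenever some landmark lies on a shortest u-v path; choosing high-coverage landmarks instead of random ones is precisely where the cited seeding strategies differ.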
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_26", "@cite_18", "@cite_8", "@cite_6", "@cite_24", "@cite_43", "@cite_27", "@cite_0", "@cite_40", "@cite_5", "@cite_31", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2142420687", "", "2159394589", "", "", "1512819151", "", "2140271346", "2152306920", "2029803751", "2141663929", "2151954369", "2172107427", "1587282847", "2283762717", "1965650021" ], "abstract": [ "We introduce a novel measure called the ε-four-points condition (ε-4PC), which assigns a value ε ∈ [0,1] to every metric space, quantifying how close the metric is to a tree metric. Data-sets taken from real Internet measurements indicate remarkable closeness of Internet latencies to tree metrics based on this condition. We study embeddings of ε-4PC metric spaces into trees and prove tight upper and lower bounds. Specifically, we show that there are constants c1 and c2 such that (1) every metric (X,d) which satisfies the ε-4PC can be embedded into a tree with distortion (1+ε)^(c1 log|X|), and (2) for every ε ∈ [0,1] and any number of nodes, there is a metric space (X,d) satisfying the ε-4PC that does not embed into a tree with distortion less than (1+ε)^(c2 log|X|). In addition, we prove a lower bound on approximate distance labelings of ε-4PC metrics, and give tight bounds for tree embeddings with additive error guarantees.", "", "We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed values and performs simple and fast computations on these precomputed values.
More specifically, during the offline phase we compute and store a small \"sketch\" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.", "", "", "The emergence of real life graphs with billions of nodes poses significant challenges for managing and querying these graphs. One of the fundamental queries submitted to graphs is the shortest distance query. Online BFS (breadth-first search) and offline pre-computing pairwise shortest distances are prohibitive in time or space complexity for billion-node graphs. In this paper, we study the feasibility of building distance oracles for billion-node graphs. A distance oracle provides approximate answers to shortest distance queries by using a pre-computed data structure for the graph. Sketch-based distance oracles are good candidates because they assign each vertex a sketch of bounded size, which means they have linear space complexity. However, state-of-the-art sketch-based distance oracles lack efficiency or accuracy when dealing with big graphs. In this paper, we address the scalability and accuracy issues by focusing on optimizing the three key factors that affect the performance of distance oracles: landmark selection, distributed BFS, and answer generation. We conduct extensive experiments on both real networks and synthetic networks to show that we can build distance oracles of affordable cost and efficiently answer shortest distance queries even for billion-node graphs.", "", "Computing the shortest path between a pair of vertices in a graph is a fundamental primitive in graph algorithmics. Classical exact methods for this problem do not scale up to contemporary, rapidly evolving social networks with hundreds of millions of users and billions of connections. 
A number of approximate methods have been proposed, including several landmark-based methods that have been shown to scale up to very large graphs with acceptable accuracy. This paper presents two improvements to existing landmark-based shortest path estimation methods. The first improvement relates to the use of shortest-path trees (SPTs). Together with appropriate short-cutting heuristics, the use of SPTs makes it possible to achieve higher accuracy with acceptable time and memory overhead. Furthermore, SPTs can be maintained incrementally under edge insertions and deletions, which allows for a fully-dynamic algorithm. The second improvement is a new landmark selection strategy that seeks to maximize the coverage of all shortest paths by the selected landmarks. The improved method is evaluated on the DBLP, Orkut, Twitter and Skype social networks.", "Compact routing addresses the tradeoff between table sizes and stretch, which is the worst-case ratio between the length of the path a packet is routed through by the scheme and the length of an actual shortest path from source to destination. We adapt the compact routing scheme by Thorup and Zwick [2001] to optimize it for power-law graphs. We analyze our adapted routing scheme based on the theory of unweighted random power-law graphs with fixed expected degree sequence by [2000]. Our result is the first analytical bound coupled to the parameter of the power-law graph model for a compact routing scheme. Let n denote the number of nodes in the network. We provide a labeled routing scheme that, after a stretch-5 handshaking step (similar to DNS lookup in TCP/IP), routes messages along stretch-3 paths.
We prove that, instead of routing tables with Õ(n^(1/2)) bits (Õ suppresses factors logarithmic in n) as in the general scheme by Thorup and Zwick, expected table sizes of O(n^γ log n) bits are sufficient, and that all the routing tables can be constructed at once in expected time O(n^(1+γ) log n), with γ = (τ−2)/(2τ−3) + ε, where τ ∈ (2,3) is the power-law exponent and ε > 0. With the same techniques as for the compact routing scheme, we also adapt the approximate distance oracle by Thorup and Zwick [2001, 2005] for stretch 3, and we obtain a new upper bound of expected O(n^(1+γ)) for space and preprocessing for random power-law graphs. Our distance oracle is the first one optimized for power-law graphs. Furthermore, we provide a linear-space data structure that can answer 5-approximate distance queries in time at most O(n^(1/4+ε)) (similar to γ, the exponent actually depends on τ and lies between ε and 1/4 + ε).", "Let G = (V, E) be a connected graph endowed with the standard graph metric d_G and in which the longest induced simple cycle has length λ(G). We prove that there exists a tree T = (V, F) such that |d_G(u,v) − d_T(u,v)| ≤ ⌊λ(G)/2⌋ + α for all vertices u, v ∈ V, where α = 1 if λ(G) ≠ 4, 5 and α = 2 otherwise. The case λ(G) = 3 (i.e., G is a chordal graph) has been considered in Brandstädt, Chepoi, and Dragan (1999), J. Algorithms 30. The proof contains an efficient algorithm for determining such a tree T.", "δ-Hyperbolic metric spaces have been defined by M. Gromov in 1987 via a simple 4-point condition: for any four points u,v,w,x, the two larger of the distance sums d(u,v)+d(w,x), d(u,w)+d(v,x), d(u,x)+d(v,w) differ by at most 2δ. They play an important role in geometric group theory, geometry of negatively curved spaces, and have recently become of interest in several domains of computer science, including algorithms and networking. In this paper, we study unweighted δ-hyperbolic graphs.
Using the Layering Partition technique, we show that every n-vertex δ-hyperbolic graph with δ ≥ 1/2 has an additive O(δ log n)-spanner with at most O(δn) edges and provide a simpler, in our opinion, and faster construction of distance approximating trees of δ-hyperbolic graphs with an additive error O(δ log n). The construction of our tree takes only linear time in the size of the input graph. As a consequence, we show that the family of n-vertex δ-hyperbolic graphs with δ ≥ 1/2 admits a routing labeling scheme with O(δ log^2 n)-bit labels, O(δ log n) additive stretch and an O(log^2(4δ))-time routing protocol, and a distance labeling scheme with O(log^2 n)-bit labels, O(δ log n) additive error and a constant-time distance decoder.", "Shortest paths and shortest path distances are important primary queries for users to query in a large graph. In this paper, we propose a new approach to answer shortest path and shortest path distance queries efficiently with an error bound. The error bound is controlled by a user-specified parameter, and the online query efficiency is achieved with preprocessing offline. In the offline preprocessing, we take a reference node embedding approach which computes the single-source shortest paths from each reference node to all the other nodes. To guarantee the user-specified error bound, we design a novel coverage-based reference node selection strategy, and show that selecting the optimal set of reference nodes is NP-hard. We propose a greedy selection algorithm which exploits the submodular property of the formulated objective function, and use a graph partitioning-based heuristic to further reduce the offline computational complexity of reference node embedding. In the online query answering, we use the precomputed distances to provide a lower bound and an upper bound of the true shortest path distance based on the triangle inequality.
In addition, we propose a linear algorithm which computes the approximate shortest path between two nodes within the error bound. We perform extensive experimental evaluation on a large-scale road network and a social network and demonstrate the effectiveness and efficiency of our proposed methods.", "In this paper we study approximate landmark-based methods for point-to-point distance estimation in very large networks. These methods involve selecting a subset of nodes as landmarks and computing offline the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. We therefore explore theoretical insights to devise a variety of simple methods that scale well in very large networks. The efficiency of the suggested techniques is tested experimentally using five real-world graphs having millions of edges. While theoretical bounds support the claim that random landmarks work well in practice, our extensive experimentation shows that smart landmark selection can yield dramatically more accurate results: for a given target accuracy, our methods require as much as 250 times less space than selecting landmarks at random. In addition, we demonstrate that at a very small accuracy loss our techniques are several orders of magnitude faster than the state-of-the-art exact methods. Finally, we study an application of our methods to the task of social search in large graphs.", "A graph G is δ-hyperbolic if for any four vertices u,v,x,y of G the two larger of the three distance sums dG(u,v) + dG(x,y), dG(u,x) + dG(v,y), dG(u,y) + dG(v,x) differ by at most δ, and the smallest δ ≥ 0 for which G is δ-hyperbolic is called the hyperbolicity of G. 
In this paper, we construct a distance labeling scheme for bounded-hyperbolicity graphs, that is, a vertex labeling such that the distance between any two vertices of G can be estimated from their labels, without any other source of information. More precisely, our scheme assigns labels of O(log^2 n) bits for bounded-hyperbolicity graphs with n vertices such that distances can be approximated within an additive error of O(log n). The label length is optimal for every additive error up to n^ε. We also show a lower bound of Ω(log log n) on the approximation factor, namely every s-multiplicative approximate distance labeling scheme on bounded-hyperbolicity graphs with polylogarithmic labels requires s = Ω(log log n).", "Through measurements, researchers continue to produce large social graphs that capture relationships, transactions, and social interactions between users. Efficient analysis of these graphs requires algorithms that scale well with graph size. We examine node distance computation, a critical primitive in graph problems such as computing node separation, centrality computation, mutual friend detection, and community detection. For large million-node social graphs, computing even a single shortest path using traditional breadth-first-search can take several seconds. In this paper, we propose a novel node distance estimation mechanism that effectively maps nodes in high dimensional graphs to positions in low-dimension Euclidean coordinate spaces, thus allowing constant time node distance computation. We describe Orion, a prototype graph coordinate system, and explore critical decisions in its design. Finally, we evaluate the accuracy of Orion's node distance estimates, and show that it can produce accurate results in applications such as node separation, node centrality, and ranked social search.", "δ-Hyperbolic metric spaces have been defined by M.
Gromov via a simple 4-point condition: for any four points u,v,w,x, the two larger of the sums d(u,v)+d(w,x), d(u,w)+d(v,x), d(u,x)+d(v,w) differ by at most 2δ. Given a finite set S of points of a δ-hyperbolic space, we present simple and fast methods for approximating the diameter of S with an additive error 2δ and computing an approximate radius and center of a smallest enclosing ball for S with an additive error 3δ. These algorithms run in linear time for classical hyperbolic spaces and for δ-hyperbolic graphs and networks. Furthermore, we show that for δ-hyperbolic graphs G=(V,E) with uniformly bounded degrees of vertices, the exact center of S can be computed in linear time O(|E|). We also provide a simple construction of distance approximating trees of δ-hyperbolic graphs G on n vertices with an additive error O(δ log^2 n). This construction has an additive error comparable with that given by Gromov for n-point δ-hyperbolic spaces, but can be implemented in O(|E|) time (instead of O(n^2)). Finally, we establish that several geometrical classes of graphs have bounded hyperbolicity." ] }
1404.5002
2158162651
Many graph processing algorithms require determination of shortest-path distances between arbitrary numbers of node pairs. Since computation of exact distances between all node-pairs of a large graph, e.g., 10M nodes and up, is prohibitively expensive both in computational time and storage space, distance approximation is often used in place of exact computation. A distance oracle is a data structure that answers inter-point distance queries more efficiently than the standard O(n^2) in time or storage space for an n-node graph, e.g., in O(n log n). In this paper, we present a novel and scalable distance oracle that leverages the hyperbolic core of real-world large graphs for fast and scalable distance approximation via spanning trees. We show empirically that the proposed oracle significantly outperforms prior oracles on a random set of test cases drawn from public domain graph libraries. There are two sets of prior work against which we benchmark our approach. The first set, which often outperforms all other oracles, employs embedding of the graph into low-dimensional Euclidean spaces with carefully constructed hyperbolic distances, but provides no guarantees on the distance estimation error. The second set leverages Gromov-type tree contraction of the graph with the additive error guaranteed not to exceed 2δ log n, where δ is the hyperbolic constant of the graph. We show that our proposed oracle 1) is significantly faster than those oracles that use hyperbolic embedding (first set) with similar approximation error and, perhaps surprisingly, 2) exhibits substantially lower average estimation error compared to Gromov-like tree contractions (second set). We substantiate our claims through numerical computations on a collection of a dozen real-world networks and synthetic test cases from multiple domains, ranging in size from tens of thousands to tens of millions of nodes.
Exact Distance Oracles. Since exact distance oracles require the computation or storage of all-pairs shortest paths, we do not consider them (e.g., @cite_11 @cite_22 and the references therein) in our comparison. We observe that even with the best combination of engineering insights, all-pairs shortest path computation remains far too slow for large graphs with @math nodes or edges and beyond, and it is especially unmanageable when these graphs do not fit in the main memory of the computing device. Also, we do not consider the oracles that have @math query complexity (e.g., @cite_35 @cite_16 @cite_36 ), as these are unlikely to yield efficient solutions.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_36", "@cite_16", "@cite_11" ], "mid": [ "2206683514", "2146583842", "", "2012688658", "2005945380" ], "abstract": [ "We present distance oracles for weighted undirected graphs that return distances of stretch less than 2. For the realistic case of sparse graphs, our distance oracles exhibit a smooth three-way trade-off between space, stretch and query time --- a phenomenon that does not occur in dense graphs. In particular, for any positive integer t and for any 1 ≤ α ≤ n, our distance oracle is of size O(m + n^2/α) and returns distances of stretch at most (1 + 2/(t+1)) in time O((αμ)^t), where μ = 2m/n is the average degree of the graph. The query time can be further reduced to O((α + μ)^t) at the expense of a small additive stretch.", "We study hierarchical hub labelings for computing shortest paths. Our new theoretical insights into the structure of hierarchical labels lead to faster preprocessing algorithms, making the labeling approach practical for a wider class of graphs. We also find smaller labels for road networks, improving the query speed.", "", "We present a distance oracle that, for weighted graphs with n vertices and m edges, is of size 8 n^(4/3) m^(1/3) log^(2/3) n and returns stretch-2 distances in constant time. Our oracle achieves bounds identical to the constant-time stretch-2 oracle of Pătrașcu and Roditty, but admits significantly simpler construction and proofs.", "We propose a new exact method for shortest-path distance queries on large-scale networks. Our method precomputes distance labels for vertices by performing a breadth-first search from every vertex. Seemingly too obvious and too inefficient at first glance, the key ingredient introduced here is pruning during breadth-first searches. While we can still answer the correct distance for any pair of vertices from the labels, it surprisingly reduces the search space and sizes of labels.
Moreover, we show that we can perform 32 or 64 breadth-first searches simultaneously exploiting bitwise operations. We experimentally demonstrate that the combination of these two techniques is efficient and robust on various kinds of large-scale real-world networks. In particular, our method can handle social networks and web graphs with hundreds of millions of edges, which are two orders of magnitude larger than the limits of previous exact methods, with comparable query time to those of previous methods." ] }
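The pruned labeling idea described in the last abstract above can be sketched compactly: each vertex keeps a 2-hop label (hub → distance), and a BFS from each root is cut off wherever the labels built so far already certify the distance. This is an illustrative sketch for the unweighted case only; the vertex order and toy graph are made up, and the bit-parallel BFS optimization mentioned in the abstract is omitted:

```python
from collections import deque

def query(labels, u, v):
    """2-hop-label distance query: minimize d(u,hub) + d(hub,v)
    over hubs common to both labels."""
    best = float("inf")
    lv = labels[v]
    for hub, du in labels[u].items():
        dv = lv.get(hub)
        if dv is not None and du + dv < best:
            best = du + dv
    return best

def build_pruned_labels(adj, order):
    """Pruned landmark labeling: BFS from each vertex in `order`,
    pruning any vertex whose distance the existing labels already cover."""
    labels = {v: {} for v in adj}
    for root in order:
        dist = {root: 0}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            if query(labels, root, u) <= dist[u]:
                continue  # prune: this pair is already answered exactly
            labels[u][root] = dist[u]
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
    return labels

# Toy path graph 0-1-2-3; ordering vertices by (rough) centrality.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = build_pruned_labels(adj, order=[1, 2, 0, 3])
print(query(labels, 0, 3))  # 3, the exact distance
```

Because pruning only skips pairs whose distance is already covered, queries over the resulting labels remain exact; the vertex `order` (typically by decreasing degree) is what keeps the labels small.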
1404.5356
2031620091
We study the two-player safe game of Competitive Diffusion, a game-theoretic model for the diffusion of technologies or influence through a social network. In game theory, safe strategies are mixed strategies with a minimum expected gain against unknown strategies of the opponents. Safe strategies for competitive diffusion lead to maximum spread of influence in the presence of uncertainty about the other players. We study the safe game on two specific classes of trees, spiders and complete trees, and give tight bounds on the minimum expected gain. We then use these results to give an algorithm that suggests a safe strategy for a player on any tree. We test this algorithm on randomly generated trees and show that it finds strategies that are close to optimal.
In threshold models, vertices become activated once a variable associated with the neighbourhood of a vertex surpasses a certain threshold. The most commonly used is the Linear Threshold Model (see @cite_12 and @cite_9 ). In this model, each vertex @math has a threshold @math , and a vertex @math is influenced by each of its neighbours, @math , by a weight @math . A vertex becomes activated once the sum of the weights of its activated neighbours exceeds @math .
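A round-based simulation of the Linear Threshold Model described above might look as follows (the toy instance and function names are illustrative; in the cited model the weights b_{v,w} and thresholds would come from the network itself):

```python
def linear_threshold_step(adj_weights, thresholds, active):
    """One synchronous round: an inactive vertex v activates once the
    total weight of its already-active neighbors reaches its threshold."""
    newly = set()
    for v, weights in adj_weights.items():
        if v in active:
            continue
        if sum(b for w, b in weights.items() if w in active) >= thresholds[v]:
            newly.add(v)
    return active | newly

def run_linear_threshold(adj_weights, thresholds, seeds):
    """Iterate rounds until the active set stops growing."""
    active = set(seeds)
    while True:
        nxt = linear_threshold_step(adj_weights, thresholds, active)
        if nxt == active:
            return active
        active = nxt

# Toy instance: c needs combined weight 0.6 from a and b; d then follows c.
adj_weights = {"a": {}, "b": {}, "c": {"a": 0.4, "b": 0.3}, "d": {"c": 0.5}}
thresholds = {"a": 1.0, "b": 1.0, "c": 0.6, "d": 0.5}
print(sorted(run_linear_threshold(adj_weights, thresholds, seeds={"a", "b"})))
# ['a', 'b', 'c', 'd']
```

Note that seeding only {"a"} activates nothing further (0.4 < 0.6), which is the threshold behavior the paragraph describes.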
{ "cite_N": [ "@cite_9", "@cite_12" ], "mid": [ "2061820396", "2041157860" ], "abstract": [ "Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target? We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63% of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks. We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.", "Models of collective behavior are developed for situations where actors have two alternatives and the costs and/or benefits of each depend on how many other actors choose which alternative.
The key concept is that of \"threshold\": the number or proportion of others who must make one decision before a given actor does so; this is the point where net benefits begin to exceed net costs for that particular actor. Beginning with a frequency distribution of thresholds, the models allow calculation of the ultimate or \"equilibrium\" number making each decision. The stability of equilibrium results against various possible changes in threshold distributions is considered. Stress is placed on the importance of exact distributions distributions for outcomes. Groups with similar average preferences may generate very different results; hence it is hazardous to infer individual dispositions from aggregate outcomes or to assume that behavior was directed by ultimately agreed-upon norms. Suggested applications are to riot ..." ] }
1404.5356
2031620091
Abstract We study the two-player safe game of Competitive Diffusion, a game-theoretic model for the diffusion of technologies or influence through a social network. In game theory, safe strategies are mixed strategies with a minimum expected gain against unknown strategies of the opponents. Safe strategies for competitive diffusion lead to maximum spread of influence in the presence of uncertainty about the other players. We study the safe game on two specific classes of trees, spiders and complete trees, and give tight bounds on the minimum expected gain. We then use these results to give an algorithm that suggests a safe strategy for a player on any tree. We test this algorithm on randomly generated trees and show that it finds strategies that are close to optimal.
In cascade models, as a vertex becomes activated, it activates each of its neighbours with a given probability. The most well-known is the Independent Cascade Model (see @cite_17 and @cite_9 ). In this model, we also start with an initial set of activated vertices. Here, each edge @math is assigned a probability @math . If vertex @math becomes activated, its neighbour @math will become activated in the next round with probability @math . The spread of influence in competitive diffusion can be seen as a cascade model where the activation probability equals 1.
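The cascade dynamics can be sketched as follows; the graph and edge probabilities are illustrative, and the last assertion shows the deterministic special case (all probabilities equal to 1) mentioned above.

```python
import random

def independent_cascade(neighbors, prob, seeds, rng=None):
    """Independent Cascade Model: each newly activated vertex u gets a
    single chance to activate each inactive neighbour v, succeeding
    independently with probability prob[(u, v)]."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in neighbors[u]:
                if v not in active and rng.random() < prob[(u, v)]:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# With all probabilities equal to 1 the cascade is deterministic and
# reaches every vertex reachable from the seeds.
neighbors = {"a": ["b"], "b": ["c"], "c": []}
assert independent_cascade(neighbors, {("a", "b"): 1.0, ("b", "c"): 1.0}, {"a"}) == {"a", "b", "c"}
assert independent_cascade(neighbors, {("a", "b"): 0.0, ("b", "c"): 0.0}, {"a"}) == {"a"}
```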
{ "cite_N": [ "@cite_9", "@cite_17" ], "mid": [ "2061820396", "1495750374" ], "abstract": [ "Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63 of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks.We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.", "Though word-of-mouth (w-o-m) communications is a pervasive and intriguing phenomenon, little is known on its underlying process of personal communications. 
Moreover as marketers are getting more interested in harnessing the power of w-o-m, for e-business and other net related activities, the effects of the different communications types on macro level marketing is becoming critical. In particular we are interested in the breakdown of the personal communication between closer and stronger communications that are within an individual's own personal group (strong ties) and weaker and less personal communications that an individual makes with a wide set of other acquaintances and colleagues (weak ties)." ] }
1404.5356
2031620091
Abstract We study the two-player safe game of Competitive Diffusion, a game-theoretic model for the diffusion of technologies or influence through a social network. In game theory, safe strategies are mixed strategies with a minimum expected gain against unknown strategies of the opponents. Safe strategies for competitive diffusion lead to maximum spread of influence in the presence of uncertainty about the other players. We study the safe game on two specific classes of trees, spiders and complete trees, and give tight bounds on the minimum expected gain. We then use these results to give an algorithm that suggests a safe strategy for a player on any tree. We test this algorithm on randomly generated trees and show that it finds strategies that are close to optimal.
Competitive diffusion, as proposed in @cite_18 , is the first game-theoretic model in which the players are considered to be outside the social network. Players choose initial users to influence and their goal is to reach the most users. In @cite_18 (see also erratum @cite_20 ), the authors discuss the relationship between the diameter of the graph and the existence of pure Nash equilibria. A pure Nash equilibrium is a strategy which corresponds to a set of initial vertices, whereas a mixed strategy represents a probabilistic approach where starting vertices are chosen with a certain probability. In @cite_14 , the existence of a pure Nash equilibrium for competitive diffusion on trees is shown, while in @cite_1 , results on pure Nash equilibria are given for several classes of graphs. Moreover, @cite_16 considers competitive diffusion on a recently proposed model for on-line social networks and discusses the existence of Nash equilibria. The safe game for competitive diffusion was introduced in @cite_19 , and some results for paths were given.
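For concreteness, the two-player spread dynamics can be sketched as a simultaneous round-by-round diffusion in which a vertex reached by both players in the same round is blocked and adopts neither colour (the tie rule of the model of Alon et al.); the graph and the assumption of distinct seed vertices are illustrative.

```python
def competitive_diffusion(neighbors, seed1, seed2):
    """Two-player competitive diffusion: both players spread from their
    (distinct) seed vertices in synchronous rounds; a vertex claimed by
    both players in the same round becomes blocked and adopts neither
    colour. Returns (gain of player 1, gain of player 2)."""
    BLOCKED = 0
    colour = {seed1: 1, seed2: 2}
    frontier = [seed1, seed2]
    while frontier:
        claims = {}
        for u in frontier:
            if colour[u] == BLOCKED:
                continue
            for v in neighbors[u]:
                if v not in colour:
                    claims.setdefault(v, set()).add(colour[u])
        frontier = []
        for v, owners in claims.items():
            colour[v] = owners.pop() if len(owners) == 1 else BLOCKED
            frontier.append(v)
    counts = list(colour.values())
    return counts.count(1), counts.count(2)

# On a path of five vertices with the players at the two ends, the
# middle vertex is claimed by both simultaneously and stays neutral.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
assert competitive_diffusion(path, "a", "e") == (2, 2)
```

The players' gains under all pairs of seed choices define the payoff matrix from which (pure or mixed) Nash equilibria are computed.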
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1", "@cite_19", "@cite_16", "@cite_20" ], "mid": [ "2056621120", "2050171314", "320592555", "", "2058878353", "2157954061" ], "abstract": [ "We introduce a game-theoretic model of diffusion of technologies, advertisements, or influence through a social network. The novelty in our model is that the players are interested parties outside the network. We study the relation between the diameter of the network and the existence of pure Nash equilibria in the game. In particular, we show that if the diameter is at most two then an equilibrium exists and can be found in polynomial time, whereas if the diameter is greater than two then an equilibrium is not guaranteed to exist.", "We consider the game theoretic model of competitive information diffusion recently introduced in (2010) [1]. We show that for the case of 2 competing agents, there exists a Nash Equilibrium for this game on any tree. We also present an example to show that this is not necessarily true for 3 or more agents.", "We study a game based on a model for the spread of influence through social networks. In game theory, a Nash-equilibrium is a strategy profile in which each player’s strategy is optimized with respect to her opponents’ strategies. Here we focus on a specific two player case of the game. We show that there always exists a Nash-equilibrium for paths, cycles, trees, and Cartesian grids. We use the centroid of trees to find a Nash-equilibrium for a tree with a novel approach, which is simpler compared to previous works. We also explore the existence of Nash-equilibriums for uni-cyclic graphs, and offer some open problems.", "", "We study a recently introduced deterministic model of competitive information diffusion on the Iterated Local Transitivity (ILT) model of Online Social Networks (OSNs). In particular, we show that, for 2 competing agents, an independent Nash Equilibrium (N.E.) on the initial graph remains a N.E. for all subsequent times. 
We also describe an example showing that this conclusion does not hold for general N.E. in the ILT process.", "In [N. Alon, M. Feldman, A.D. Procaccia, M. Tennenholtz, A note on competitive diffusion through social networks, Inform. Process. Lett. 110 (2010) 221-225], the authors introduced a game-theoretic model of diffusion process through a network. They showed a relation between the diameter of a given network and existence of pure Nash equilibria in the game. Theorem 1 of their paper says that a pure Nash equilibrium exists if the diameter is at most two. However, we have an example which does not admit a pure Nash equilibrium even if the diameter is two. Hence we correct the statement of Theorem 1 of their paper." ] }
1404.5064
1578309918
We present a solution of a class of network utility maximization (NUM) problems using minimal communication. The constraints of the problem are inspired less by TCP-like congestion control but by problems in the area of internet of things and related areas in which the need arises to bring the behavior of a large group of agents to a social optimum. The approach uses only intermittent feedback, no inter-agent communication, and no common clock. The proposed algorithm is a combination of the classical AIMD algorithm in conjunction with a simple probabilistic rule for the agents to respond to a capacity signal. This leads to a nonhomogeneous Markov chain and we show almost sure convergence of this chain to the social optimum.
The AIMD literature is vast, and it is not possible to survey the available results compactly here; we refer interested readers to recent works on this topic in the context of TCP and internet congestion control @cite_22 @cite_15 @cite_11 @cite_26 @cite_19 @cite_6 @cite_49 @cite_48 . Much of this work is based on fluid approximations of AIMD dynamics; the notable exceptions are @cite_42 @cite_1 @cite_2 . The latter of these papers makes use of tools from iterated function systems to deduce the existence of a unique invariant probability distribution for standard linear AIMD networks (of which TCP is an example), albeit under very restrictive assumptions on the underlying probability model. To the best of our knowledge, these papers, along with the companion paper @cite_42 , established for the first time the stochastic convergence of AIMD networks. However, the window of infinite length considered in the present paper goes well beyond the set-up of these papers; in particular, our result may be considered as the limiting case of the results presented there.
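For readers unfamiliar with AIMD, the basic synchronized dynamics can be sketched as follows (the parameters are illustrative): each source adds a constant per step, and every source backs off multiplicatively when the aggregate hits the link capacity, so the gap between any two shares shrinks by the back-off factor at each congestion event.

```python
def aimd(alpha, beta, capacity, shares, steps):
    """Synchronized AIMD sketch: every source adds alpha to its share
    each step; whenever the aggregate reaches the link capacity, every
    source backs off multiplicatively by the factor beta."""
    shares = list(shares)
    for _ in range(steps):
        shares = [w + alpha for w in shares]   # additive increase
        if sum(shares) >= capacity:            # congestion event
            shares = [beta * w for w in shares]  # multiplicative decrease
    return shares

# Two sources starting far apart converge towards a fair share:
# their gap halves (beta = 0.5) at every back-off.
final = aimd(alpha=1.0, beta=0.5, capacity=10.0, shares=[1.0, 7.0], steps=50)
assert abs(final[0] - final[1]) < 1e-2
```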
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_48", "@cite_42", "@cite_1", "@cite_6", "@cite_19", "@cite_49", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "", "", "1867902620", "2132210216", "2062284068", "2100166367", "", "", "", "1591231997", "" ], "abstract": [ "", "", "The proposals in this document are experimental. While they may be deployed in the current Internet, they do not represent a consensus that this is the best method for high-speed congestion control. In particular, we note that alternative experimental proposals are likely to be forthcoming, and it is not well understood how the proposals in this document will interact with such alternative proposals.", "We study communication networks that employ drop-tail queueing and Additive-Increase Multiplicative-Decrease (AIMD) congestion control algorithms. It is shown that the theory of nonnegative matrices may be employed to model such networks. In particular, important network properties, such as: 1) fairness; 2) rate of convergence; and 3) throughput, can be characterized by certain nonnegative matrices. We demonstrate that these results can be used to develop tools for analyzing the behavior of AIMD communication networks. The accuracy of the models is demonstrated by several NS studies.", "This papers analyzes a class of nonlinear additive-increase multiplicative-decrease (AIMD) protocols that are widely deployed in communication networks. It is demonstrated that the use of these protocols guarantees that the system has a unique stable outcome to which it converges geometrically under all starting points. The development is based on a contraction argument and the derivation of explicit bounds on the contraction coefficient of corresponding operators in terms of the network parameters. 
In particular, bounds on the corresponding rate of convergence are obtained, improving upon known bounds for standard (linear) AIMD networks.", "High-speed networks with large delays present a unique environment where TCP may have a problem utilizing the full bandwidth. Several congestion control proposals have been suggested to remedy this problem. The existing protocols consider mainly two properties: TCP friendliness and bandwidth scalability. That is, a protocol should not take away too much bandwidth from standard TCP flows while utilizing the full bandwidth of high-speed networks. This work presents another important constraint, namely, RTT (round trip time) unfairness where competing flows with different RTTs may consume vastly unfair bandwidth shares. Existing schemes have a severe RTT unfairness problem because the congestion window increase rate gets larger as the window grows ironically the very reason that makes them more scalable. RTT unfairness for high-speed networks occurs distinctly with drop tail routers for flows with large congestion windows where packet loss can be highly synchronized. After identifying the RTT unfairness problem of existing protocols, This work presents a new congestion control scheme that alleviates RTT unfairness while supporting TCP friendliness and bandwidth scalability. The proposed congestion control algorithm uses two window size control policies called additive increase and binary search increase. When the congestion window is large, additive increase with a large increment ensures square RTT unfairness as well as good scalability. Under small congestion windows, binary search increase supports TCP friendliness. The simulation results confirm these properties of the protocol.", "", "", "", "The traditional TCP-Reno is not capable of achieving a high throughput in high-speed networks due to its highly conservative and loss-dependent congestion control mechanism. 
Although TCP-Vegas achieves a good link utilization due to its proactive type of congestion control mechanism that uses RTT as an indication of network congestion, it lacks friendliness to TCP-Reno. Protocols such as HighSpeed-TCP and Scalable-TCP have been proposed to address this problem, but they behave non-friendly to TCP-Reno especially when sharing high bandwidth links. In order to improve the friendliness with TCP-Reno and throughput performance in high-speed networks, we propose a new congestion control mechanism, TCP-xHS (High-speed) that harmonizes the fast ramping and proactive congestion controlling approaches of HighSpeed-TCP and TCP-Vegas. Through simulation results we justify the effectiveness of TCP-xHS for a wider range of link bandwidths.", "" ] }
1404.5206
1828440713
We prove that the Euler form of a metric connection on real oriented vector bundle @math over a compact oriented manifold @math can be identified, as a current, with the expectation of the random current defined by the zero-locus of a certain random section of the bundle. We also explain how to reconstruct probabilistically the metric and the connection on @math from the statistics of random sections of @math .
This line of investigation originates in the groundbreaking work of M. Kac @cite_24 and S.O. Rice @cite_15 who studied the distribution of zeros of certain random functions of one real variable. One outcome of their work is the celebrated Kac-Rice formula that gives an explicit description of the expected distribution of zeros of such functions; see @cite_11 .
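In its classical one-dimensional form (stated here for a sufficiently smooth real-valued random process X(t) on [a, b], under the usual nondegeneracy assumptions), the Kac-Rice formula for the expected number of zeros reads

```latex
\mathbb{E}\,\#\{\, t \in [a,b] : X(t) = 0 \,\}
  = \int_a^b \int_{\mathbb{R}} |y|\; p_{X(t),\,X'(t)}(0, y)\, \mathrm{d}y\, \mathrm{d}t ,
```

where $p_{X(t),X'(t)}$ denotes the joint density of the process and its derivative at time $t$.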
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_11" ], "mid": [ "2109519791", "2114509411", "1966351146" ], "abstract": [ "", "In this section we use the representations of the noise currents given in section 2.8 to derive some statistical properties of I(t). The first six sections are concerned with the probability distribution of I(t) and of its zeros and maxima. Sections 3.7 and 3.8 are concerned with the statistical properties of the envelope of I(t). Fluctuations of integrals involving I2(t) are discussed in section 3.9. The probability distribution of a sine wave plus a noise current is given in 3.10 and in 3.11 an alternative method of deriving the results of Part III is mentioned. Prof. Uhlenbeck has pointed out that much of the material in this Part is closely connected with the theory of Markoff processes. Also S. Chandrasekhar has written a review of a class of physical problems which is related, in a general way, to the present subject.22", "We consider tensor powers L N of a positive Hermitian line bundle (L,h L ) over a non-compact complex manifold X. In the compact case, B. Shiffman and S. Zelditch proved that the zeros of random sections become asymptotically uniformly distributed as N→∞ with respect to the natural measure coming from the curvature of L. Under certain boundedness assumptions on the curvature of the canonical line bundle of X and on the Chern form of L we prove a non-compact version of this result. We give various applications, including the limiting distribution of zeros of cusp forms with respect to the principal congruence subgroups of SL 2(ℤ) and to the hyperbolic measure, the higher dimensional case of arithmetic quotients and the case of orthogonal polynomials with weights at infinity. We also give estimates for the speed of convergence of the currents of integration on the zero-divisors." ] }
1404.5206
1828440713
We prove that the Euler form of a metric connection on real oriented vector bundle @math over a compact oriented manifold @math can be identified, as a current, with the expectation of the random current defined by the zero-locus of a certain random section of the bundle. We also explain how to reconstruct probabilistically the metric and the connection on @math from the statistics of random sections of @math .
As explained e.g. in @cite_11 @cite_38 , the Kac-Rice formula has an extension to complex-valued random functions of one complex variable that can be used to study the distribution of complex zeros of various ensembles of random complex polynomials. One such ensemble is obtained by regarding the degree @math polynomials in one complex variable as holomorphic sections of the degree @math holomorphic line bundle over @math . If the ensemble of sections is unitarily equivariant, then the expected distribution of zeros approaches the uniform (invariant) distribution on @math as @math ; see [8]. The same type of equidistribution phenomenon was observed in the pioneering work of S. Nonnenmacher and A. Voros @cite_37 where, among many other things, the authors studied the distributions of zeros of random holomorphic sections of holomorphic line bundles over elliptic curves.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_11" ], "mid": [ "", "1599439450", "1966351146" ], "abstract": [ "", "We study individual eigenstates of quantized area-preserving maps on the 2-torus which are classically chaotic. In order to analyze their semiclassical behavior, we use the Bargmann–Husimi representations for quantum states as well as their stellar parametrization, which encodes states through a minimal set of points in phase space (the constellation of zeros of the Husimi density). We rigorously prove that a semiclassical uniform distribution of Husimi densities on the torus entails a similar equidistribution for the corresponding constellations. We deduce from this property a universal behavior for the phase patterns of chaotic Bargmann eigenfunctions which is reminiscent of the WKB approximation for eigenstates of integrable systems (though in a weaker sense). In order to obtain more precise information on “chaotic eigenconstellations,” we then model their properties by ensembles of random states, generalizing former results on the 2-sphere to the torus geometry. This approach yields statistical predictions for the constellations which fit quite well the chaotic data. We finally observe that specific dynamical information, e.g., the presence of high peaks (like scars) in Husimi densities, can be recovered from the knowledge of a few long-wavelength Fourier coefficients, which therefore appear as valuable order parameters at the level of individual chaotic eigenfunctions.", "We consider tensor powers L N of a positive Hermitian line bundle (L,h L ) over a non-compact complex manifold X. In the compact case, B. Shiffman and S. Zelditch proved that the zeros of random sections become asymptotically uniformly distributed as N→∞ with respect to the natural measure coming from the curvature of L. Under certain boundedness assumptions on the curvature of the canonical line bundle of X and on the Chern form of L we prove a non-compact version of this result. 
We give various applications, including the limiting distribution of zeros of cusp forms with respect to the principal congruence subgroups of SL 2(ℤ) and to the hyperbolic measure, the higher dimensional case of arithmetic quotients and the case of orthogonal polynomials with weights at infinity. We also give estimates for the speed of convergence of the currents of integration on the zero-divisors." ] }
1404.4744
2949683436
Location-based services are increasingly used in our daily activities. In current services, users however have to give up their location privacy in order to acquire the service. The literature features a large number of contributions which aim at enhancing user privacy in location-based services. Most of these contributions obfuscate the locations of users using spatial and or temporal cloaking in order to provide k-anonymity. Although such schemes can indeed strengthen the location privacy of users, they often decrease the service quality and do not necessarily prevent the possible tracking of user movements (i.e., direction, trajectory, velocity). With the rise of Geofencing applications, tracking of movements becomes more evident since, in these settings, the service provider is not only requesting a single location of the user, but requires the movement vectors of users to determine whether the user has entered exited a Geofence of interest. In this paper, we propose a novel solution, PrivLoc, which enables the privacy-preserving outsourcing of Geofencing and location-based services to the cloud without leaking any meaningful information about the location, trajectory, and velocity of the users. Notably, PrivLoc enables an efficient and privacy-preserving intersection of movement vectors with any polygon of interest, leveraging functionality from existing Geofencing services or spatial databases. We analyze the security and privacy provisions of PrivLoc and we evaluate the performance of our scheme by means of implementation. Our results show that the performance overhead introduced by PrivLoc can be largely tolerated in realistic deployment settings.
In what follows, we briefly overview existing contributions in the area. In @cite_14 , Barkhuus and Dey show that users were concerned about the ability of services to track them.
{ "cite_N": [ "@cite_14" ], "mid": [ "2163642647" ], "abstract": [ "Context-aware computing often involves tracking peoples' location. Many studies and applications highlight the importance of keeping people's location information private. We discuss two types of location- based services; location-tracking services that are based on other parties tracking the user's location and position-aware services that rely on the device's knowledge of its own location. We present an experimental case study that examines people's concern for location privacy and compare this to the use of location-based services. We find that even though the perceived usefulness of the two different types of services is the same, location- tracking services generate more concern for privacy than position-aware services. We conclude that development emphasis should be given to position-aware services but that location-tracking services have a potential for success if users are given a simple option for turning the location-tracking off." ] }
1404.4744
2949683436
Location-based services are increasingly used in our daily activities. In current services, users however have to give up their location privacy in order to acquire the service. The literature features a large number of contributions which aim at enhancing user privacy in location-based services. Most of these contributions obfuscate the locations of users using spatial and or temporal cloaking in order to provide k-anonymity. Although such schemes can indeed strengthen the location privacy of users, they often decrease the service quality and do not necessarily prevent the possible tracking of user movements (i.e., direction, trajectory, velocity). With the rise of Geofencing applications, tracking of movements becomes more evident since, in these settings, the service provider is not only requesting a single location of the user, but requires the movement vectors of users to determine whether the user has entered exited a Geofence of interest. In this paper, we propose a novel solution, PrivLoc, which enables the privacy-preserving outsourcing of Geofencing and location-based services to the cloud without leaking any meaningful information about the location, trajectory, and velocity of the users. Notably, PrivLoc enables an efficient and privacy-preserving intersection of movement vectors with any polygon of interest, leveraging functionality from existing Geofencing services or spatial databases. We analyze the security and privacy provisions of PrivLoc and we evaluate the performance of our scheme by means of implementation. Our results show that the performance overhead introduced by PrivLoc can be largely tolerated in realistic deployment settings.
Most privacy-enhancing solutions for location-based services rely on a trusted "location anonymizer" service which hides the location of users. These services provide either @math -anonymity @cite_13 @cite_6 @cite_32 @cite_25 @cite_5 or spatial or temporal cloaking within an area of interest @cite_9 @cite_28 @cite_10 @cite_12 @cite_11 . A number of solutions rely on inserting fake queries in order to prevent a database server from learning the actual location reports (e.g., @cite_23 ). While these solutions provide @math -anonymity, they incur significant additional costs on the database server. Other solutions rely on location perturbation or obfuscation; these solutions map the location reports to a set of pre-defined landmarks @cite_4 or blur the user location into a spatial area using linear transformations @cite_28 @cite_10 @cite_12 @cite_24 @cite_19 . Such solutions indeed hide the location of users but might affect the accuracy of the location-based service. Moreover, these solutions can only hide the location of a user, but do not aim at hiding the user's movement.
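As a minimal illustration of spatial cloaking (a generic sketch, not any specific cited scheme), an exact position can be blurred by reporting only the axis-aligned grid cell that contains it:

```python
import math

def cloak_to_grid(lat, lon, cell):
    """Blur an exact position by reporting only the bounding box
    (lat_min, lon_min, lat_max, lon_max) of the square grid cell of
    side length `cell` degrees that contains it."""
    lat0 = math.floor(lat / cell) * cell
    lon0 = math.floor(lon / cell) * cell
    return (lat0, lon0, lat0 + cell, lon0 + cell)

box = cloak_to_grid(48.3, 11.6, 0.5)
assert box == (48.0, 11.5, 48.5, 12.0)
assert box[0] <= 48.3 < box[2] and box[1] <= 11.6 < box[3]
```

Larger cells give stronger cloaking at the cost of coarser answers, which is exactly the service-quality trade-off noted above.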
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_10", "@cite_9", "@cite_32", "@cite_6", "@cite_24", "@cite_19", "@cite_23", "@cite_5", "@cite_13", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2157539922", "2150447982", "2056773559", "2613648655", "2119047901", "2096899416", "2089959302", "1993369312", "1689932539", "2119067110", "1815999684", "2159024459", "", "2021497016" ], "abstract": [ "Privacy is the most often-cited criticism of ubiquitous computing, and may be the greatest barrier to its long-term success. However, developers currently have little support in designing software architectures and in creating interactions that are effective in helping end-users manage their privacy. To address this problem, we present Confab, a toolkit for facilitating the development of privacy-sensitive ubiquitous computing applications. The requirements for Confab were gathered through an analysis of privacy needs for both end-users and application developers. Confab provides basic support for building ubiquitous computing applications, providing a framework as well as several customizable privacy mechanisms. Confab also comes with extensions for managing location privacy. Combined, these features allow application developers and end-users to support a spectrum of trust levels and privacy needs.", "This paper presents PrivacyGrid - a framework for supporting anonymous location-based queries in mobile information delivery systems. The PrivacyGrid framework offers three unique capabilities. First, it provides a location privacy protection preference profile model, called location P3P, which allows mobile users to explicitly define their preferred location privacy requirements in terms of both location hiding measures (e.g., location k-anonymity and location l-diversity) and location service quality measures (e.g., maximum spatial resolution and maximum temporal resolution). 
Second, it provides fast and effective location cloaking algorithms for location k-anonymity and location l-diversity in a mobile environment. We develop dynamic bottom-up and top-down grid cloaking algorithms with the goal of achieving high anonymization success rate and efficiency in terms of both time complexity and maintenance cost. A hybrid approach that carefully combines the strengths of both bottom-up and top-down cloaking approaches to further reduce the average anonymization time is also developed. Last but not the least, PrivacyGrid incorporates temporal cloaking into the location cloaking process to further increase the success rate of location anonymization. We also discuss PrivacyGrid mechanisms for supporting anonymous location queries. Experimental evaluation shows that the PrivacyGrid approach can provide close to optimal location k-anonymity as defined by per user location P3P without introducing significant performance penalties.", "Advances in sensing and tracking technology enable location-based applications but they also create significant privacy risks. Anonymity can provide a high degree of privacy, save service users from dealing with service providers’ privacy policies, and reduce the service providers’ requirements for safeguarding private information. However, guaranteeing anonymous usage of location-based services requires that the precise location information transmitted by a user cannot be easily used to re-identify the subject. This paper presents a middleware architecture and algorithms that can be used by a centralized location broker service. The adaptive algorithms adjust the resolution of location information along spatial or temporal dimensions to meet specified anonymity constraints based on the entities who may be using location services within a given area. Using a model based on automotive traffic counts and cartographic material, we estimate the realistically expected spatial resolution for different anonymity constraints. 
The median resolution generated by our algorithms is 125 meters. Thus, anonymous location-based requests for urban areas would have the same accuracy currently needed for E-911 services; this would provide sufficient resolution for wayfinding, automated bus routing services and similar location-dependent services.", "A rotary screen mounted in the median area of a preferably spherical enclosure, the rear face of which being made opaque except in its central portion, while its front face is perfectly transparent, a projector being placed rearwardly of the enclosure in central axis of the portion which is not opaque in order to enable light rays from the prospector to pass through the enclosure to reach the rotary screen.", "Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k- anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k- anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. 
Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over distort data and µ-Argus can additionally fail to provide adequate protection.", "Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. 
We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.", "This paper tackles a major privacy concern in current location-based services where users have to continuously report their locations to the database server in order to obtain the service. For example, a user asking about the nearest gas station has to report her exact location. With untrusted servers, reporting the location information may lead to several privacy threats. In this paper, we present Casper, a new framework in which mobile and stationary users can entertain location-based services without revealing their location information. Casper consists of two main components, the location anonymizer and the privacy-aware query processor. The location anonymizer blurs the users' exact location information into cloaked spatial regions based on user-specified privacy requirements. The privacy-aware query processor is embedded inside the location-based database server in order to deal with the cloaked spatial areas rather than the exact location information. Experimental results show that Casper achieves high quality location-based services while providing anonymity for both data and queries.", "Cloud computing services enable organizations and individuals to outsource the management of their data to a service provider in order to save on hardware investments and reduce maintenance costs. Only authorized users are allowed to access the data. Nobody else, including the service provider, should be able to view the data. 
For instance, a real-estate company that owns a large database of properties wants to allow its paying customers to query for houses according to location. On the other hand, the untrusted service provider should not be able to learn the property locations and, e.g., sell the information to a competitor. To tackle the problem, we propose to transform the location datasets before uploading them to the service provider. The paper develops a spatial transformation that re-distributes the locations in space, and it also proposes a cryptographic-based transformation. The data owner selects the transformation key and shares it with authorized users. Without the key, it is infeasible to reconstruct the original data points from the transformed points. The proposed transformations present distinct trade-offs between query efficiency and data confidentiality. In addition, we describe attack models for studying the security properties of the transformations. Empirical studies demonstrate that the proposed methods are efficient and applicable in practice.", "Recently, highly accurate positioning devices enable us to provide various types of location-based services. On the other hand, because such position data include deeply personal information, the protection of location privacy is one of the most significant problems in location-based services. In this paper, we propose an anonymous communication technique to protect the location privacy of the users of location-based services. In our proposed technique, such users generate several false position data (dummies) to send to service providers with the true position data of users. Because service providers cannot distinguish the true position data, user location privacy is protected. We also describe a cost reduction technique for communication between a client and a server. Moreover, we conducted performance study experiments on our proposed technique using practical position data. 
As a result of the experiments, we observed that our proposed technique protects the location privacy of people and can sufficiently reduce communication costs so that our communication techniques can be applied in practical location-based services.", "Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and infer information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. 
We also discuss possible preference policies to choose among different minimal generalizations.", "With mobile phones becoming first-class citizens in the online world, the rich location data they bring to the table is set to revolutionize all aspects of online life including content delivery, recommendation systems, and advertising. However, user-tracking is a concern with such location-based services, not only because location data can be linked uniquely to individuals, but because the low-level nature of current location APIs and the resulting dependence on the cloud to synthesize useful representations virtually guarantees such tracking. In this paper, we propose privacy-preserving location-based matching as a fundamental platform primitive and as an alternative to exposing low-level, latitude-longitude (lat-long) coordinates to applications. Applications set rich location-based triggers and have these be fired based on location updates either from the local device or from a remote device (e.g., a friend's phone). Our Koi platform, comprising a privacy-preserving matching service in the cloud and a phone-based agent, realizes this primitive across multiple phone and browser platforms. By masking low-level lat-long information from applications, Koi not only avoids leaking privacy-sensitive information, it also eases the task of programmers by providing a higher-level abstraction that is easier for applications to build upon. Koi's privacy-preserving protocol prevents the cloud service from tracking users. We verify the non-tracking properties of Koi using a theorem prover, illustrate how privacy guarantees can easily be added to a wide range of location-based applications, and show that our public deployment is performant, being able to perform 12K matches per second on a single core.", "Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. 
Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.", "", "Location privacy has been a serious concern for mobile users who use location-based services provided by the third-party provider via mobile networks. Recently, there have been tremendous efforts on developing new anonymity or obfuscation techniques to protect location privacy of mobile users. Though effective in certain scenarios, these existing techniques usually assume that a user has a constant privacy requirement along spatial and/or temporal dimensions, which may not be true in real-life scenarios. In this paper, we introduce a new location privacy problem: Location-aware Location Privacy Protection (L2P2) problem, where users can define dynamic and diverse privacy requirements for different locations. The goal of the L2P2 problem is to find the smallest cloaking area for each location request so that diverse privacy requirements over spatial and/or temporal dimensions are satisfied for each user. 
In this paper, we formalize two versions of the L2P2 problem, and propose several efficient heuristics to provide such location-aware location privacy protection for mobile users. Through multiple simulations on a large data set of trajectories for one thousand mobile users, we confirm the effectiveness and efficiency of the proposed L2P2 algorithms." ] }
1404.4744
2949683436
Location-based services are increasingly used in our daily activities. In current services, users, however, have to give up their location privacy in order to acquire the service. The literature features a large number of contributions which aim at enhancing user privacy in location-based services. Most of these contributions obfuscate the locations of users using spatial and/or temporal cloaking in order to provide k-anonymity. Although such schemes can indeed strengthen the location privacy of users, they often decrease the service quality and do not necessarily prevent the possible tracking of user movements (i.e., direction, trajectory, velocity). With the rise of Geofencing applications, tracking of movements becomes more evident since, in these settings, the service provider is not only requesting a single location of the user, but requires the movement vectors of users to determine whether the user has entered/exited a Geofence of interest. In this paper, we propose a novel solution, PrivLoc, which enables the privacy-preserving outsourcing of Geofencing and location-based services to the cloud without leaking any meaningful information about the location, trajectory, and velocity of the users. Notably, PrivLoc enables an efficient and privacy-preserving intersection of movement vectors with any polygon of interest, leveraging functionality from existing Geofencing services or spatial databases. We analyze the security and privacy provisions of PrivLoc and we evaluate the performance of our scheme by means of implementation. Our results show that the performance overhead introduced by PrivLoc can be largely tolerated in realistic deployment settings.
To prevent location tracking, Gruteser and Liu @cite_21 propose disclosure control algorithms which hide users' positions in sensitive areas and withhold path information that indicates which areas they have visited. Other schemes rely on Private Information Retrieval (PIR) algorithms in order to enable privacy-preserving queries in spatial databases @cite_20 @cite_22 . PIR schemes allow a querier to retrieve information from a database server without revealing what is actually being retrieved from the server. However, these solutions are computationally intensive and require modifications to the database server in order to process the blurred location queries.
{ "cite_N": [ "@cite_21", "@cite_22", "@cite_20" ], "mid": [ "", "2159777963", "2157855380" ], "abstract": [ "", "Mobile smartphone users frequently need to search for nearby points of interest from a location based service, but in a way that preserves the privacy of the users' locations. We present a technique for private information retrieval that allows a user to retrieve information from a database server without revealing what is actually being retrieved from the server. We perform the retrieval operation in a computationally efficient manner to make it practical for resource-constrained hardware such as smartphones, which have limited processing power, memory, and wireless bandwidth. In particular, our algorithm makes use of a variable-sized cloaking region that increases the location privacy of the user at the cost of additional computation, but maintains the same traffic cost. Our proposal does not require the use of a trusted third-party component, and ensures that we find a good compromise between user privacy and computational efficiency. We evaluated our approach with a proof-of-concept implementation over a commercial-grade database of points of interest. We also measured the performance of our query technique on a smartphone and wireless network.", "Mobile devices equipped with positioning capabilities (e.g., GPS) can ask location-dependent queries to Location Based Services (LBS). To protect privacy, the user location must not be disclosed. Existing solutions utilize a trusted anonymizer between the users and the LBS. This approach has several drawbacks: (i) All users must trust the third party anonymizer, which is a single point of attack. (ii) A large number of cooperating, trustworthy users is needed. (iii) Privacy is guaranteed only for a single snapshot of user locations; users are not protected against correlation attacks (e.g., history of user movement). 
We propose a novel framework to support private location-dependent queries, based on the theoretical work on Private Information Retrieval (PIR). Our framework does not require a trusted third party, since privacy is achieved via cryptographic techniques. Compared to existing work, our approach achieves stronger privacy for snapshots of user locations; moreover, it is the first to provide provable privacy guarantees against correlation attacks. We use our framework to implement approximate and exact algorithms for nearest-neighbor search. We optimize query execution by employing data mining techniques, which identify redundant computations. Contrary to common belief, the experimental results suggest that PIR approaches incur reasonable overhead and are applicable in practice." ] }
1404.4865
2194592614
In this paper, we design an analytically and experimentally better online energy and job scheduling algorithm with the objective of maximizing net profit for a service provider in green data centers. We first study the previously known algorithms and conclude that these online algorithms have provably poor performance against their worst-case scenarios. To guarantee an online algorithm's performance in hindsight, we design a randomized algorithm to schedule energy and jobs in the data centers and prove the algorithm's expected competitive ratio in various settings. Our algorithm is theoretically sound and it outperforms the previously known algorithms in many settings using both real traces and simulated data. An optimal offline algorithm is also implemented as an empirical benchmark.
People have worked on how to use green energy in green data centers in an efficient and effective manner. Although green energy has the advantages of being cost-effective and environmentally friendly, there is a challenge in using it due to its daily and seasonal variability. Another challenge is due to customers' workload fluctuations @cite_30 . There could be a mismatch between the green energy supply and the workload's energy demand in the time axis --- a heavy workload arrives when the green energy supply is low. One solution is to ``bank'' green energy in batteries or on the grid itself @cite_1 for later possible use. However, this approach incurs huge energy loss and high additional maintenance cost @cite_1 . Thus, an online matching of workload and energy is needed for green data centers.
{ "cite_N": [ "@cite_30", "@cite_1" ], "mid": [ "2096092966", "1983322859" ], "abstract": [ "Batched stream processing is a new distributed data processing paradigm that models recurring batch computations on incrementally bulk-appended data streams. The model is inspired by our empirical study on a trace from a large-scale production data-processing cluster; it allows a set of effective query optimizations that are not possible in a traditional batch processing model. We have developed a query processing system called Comet that embraces batched stream processing and integrates with DryadLINQ. We used two complementary methods to evaluate the effectiveness of optimizations that Comet enables. First, a prototype system deployed on a 40-node cluster shows an I/O reduction of over 40% using our benchmark. Second, when applied to a real production trace covering over 19 million machine-hours, our simulator shows an estimated I/O saving of over 50%.", "Interest has been growing in powering data centers (at least partially) with renewable or \"green\" sources of energy, such as solar or wind. However, it is challenging to use these sources because, unlike the \"brown\" (carbon-intensive) energy drawn from the electrical grid, they are not always available. In this keynote talk, I will first discuss the tradeoffs involved in leveraging green energy today and the prospects for the future. I will then discuss the main research challenges and questions involved in managing the use of green energy in data centers. Next, I will describe some of the software and hardware that researchers are building to explore these challenges and questions. Specifically, I will overview systems that match a data center's computational workload to the green energy supply. I will also describe Parasol, the solar-powered micro-data center we have just built at Rutgers University. Finally, I will discuss some potential avenues for future research on this topic." ] }
1404.4865
2194592614
In this paper, we design an analytically and experimentally better online energy and job scheduling algorithm with the objective of maximizing net profit for a service provider in green data centers. We first study the previously known algorithms and conclude that these online algorithms have provably poor performance against their worst-case scenarios. To guarantee an online algorithm's performance in hindsight, we design a randomized algorithm to schedule energy and jobs in the data centers and prove the algorithm's expected competitive ratio in various settings. Our algorithm is theoretically sound and it outperforms the previously known algorithms in many settings using both real traces and simulated data. An optimal offline algorithm is also implemented as an empirical benchmark.
The research on scheduling energy and jobs in an online manner has attracted a lot of attention. Two data center settings have been considered: (1) centralized data centers @cite_34 @cite_26 @cite_22 @cite_8 @cite_28 , and (2) geographically distributed data centers @cite_12 @cite_4 @cite_11 @cite_14 @cite_9 @cite_29 . The objectives to optimize are usually classified as (a) to maximize green energy consumption @cite_7 @cite_31 @cite_26 @cite_27 @cite_2 @cite_3 @cite_8 ; (b) to minimize brown energy consumption or cost @cite_4 @cite_31 @cite_26 @cite_15 @cite_11 ; and (c) to maximize profits @cite_16 . In addition, some researchers incorporated the dynamic pricing of brown energy @cite_34 @cite_26 @cite_10 in their problem models.
{ "cite_N": [ "@cite_22", "@cite_29", "@cite_3", "@cite_2", "@cite_15", "@cite_10", "@cite_4", "@cite_8", "@cite_26", "@cite_7", "@cite_28", "@cite_27", "@cite_16", "@cite_34", "@cite_12", "@cite_14", "@cite_9", "@cite_31", "@cite_11" ], "mid": [ "", "2164510519", "", "", "", "", "1988032138", "", "", "2395599360", "", "", "2095546124", "2055964748", "2168245849", "1528656891", "", "", "2020196954" ], "abstract": [ "", "The large amount of energy consumed by Internet services represents significant and fast-growing financial and environmental costs. Increasingly, services are exploring dynamic methods to minimize energy costs while respecting their service-level agreements (SLAs). Furthermore, it will soon be important for these services to manage their usage of “brown energy” (produced via carbon-intensive means) relative to renewable or “green” energy. This paper introduces a general, optimization-based framework for enabling multi-data-center services to manage their brown energy consumption and leverage green energy, while respecting their SLAs and minimizing energy costs. Based on the framework, we propose a policy for request distribution across the data centers. Our policy can be used to abide by caps on brown energy consumption, such as those that might arise from Kyoto-style carbon limits, from corporate pledges on carbon-neutrality, or from limits imposed on services to encourage brown energy conservation. We evaluate our framework and policy extensively through simulations and real experiments. Our results show how our policy allows a service to trade off consumption and cost. For example, using our policy, the service can reduce brown energy consumption by 24% for only a 10% increase in cost, while still abiding by SLAs.", "", "", "", "", "Renewable (or green) energy, such as solar or wind, has at least partially powered data centers to reduce the environmental impact of traditional energy sources (brown energy with high carbon footprint). 
In this paper, we propose a holistic workload scheduling algorithm to minimize the brown energy consumption across multiple geographically distributed data centers with renewable energy sources. While green energy supply for a single data center is intermittent due to daily seasonal effects, our workload scheduling algorithm is aware of different amounts of green energy supply and dynamically schedules the workload across data centers. The scheduling decision adapts to workload and data center cooling dynamics. Our experiments with real workload traces demonstrate that our scheduling algorithm greatly reduces brown energy consumption by up to 40% in comparison with other scheduling policies.", "", "", "The variable and intermittent nature of many renewable energy sources makes integrating them into the electric grid challenging and limits their penetration. The current grid requires expensive, large-scale energy storage and peaker plants to match such supplies to conventional loads. We present an alternative solution, in which supply-following loads adjust their power consumption to match the available renewable energy supply. We show Internet data centers running batched, data analytic workloads are well suited to be such supply-following loads. They are large energy consumers, highly instrumented, agile, and contain much scheduling slack in their workloads. We explore the problem of scheduling the workload to align with the time-varying available wind power. Using simulations driven by real life batch workloads and wind power traces, we demonstrate that simple, supply-following job schedulers yield 40-60% better renewable energy penetration than supply-oblivious schedulers.", "", "", "An increasing number of data centers today start to incorporate renewable energy solutions to cap their carbon footprint. However, the impact of renewable energy on large-scale data center design is still not well understood. 
In this paper, we model and evaluate data centers driven by intermittent renewable energy. Using real-world data center and renewable energy source traces, we show that renewable power utilization and load tuning frequency are two critical metrics for designing sustainable high-performance data centers. Our characterization reveals that load power fluctuation together with the intermittent renewable power supply introduce unnecessary tuning activities, which can increase the management overhead and degrade the performance of renewable energy driven data centers.", "In this paper, we propose GreenSlot, a parallel batch job scheduler for a datacenter powered by a photovoltaic solar array and the electrical grid (as a backup). GreenSlot predicts the amount of solar energy that will be available in the near future, and schedules the workload to maximize the green energy consumption while meeting the jobs' deadlines. If grid energy must be used to avoid deadline violations, the scheduler selects times when it is cheap. Our results for production scientific workloads demonstrate that GreenSlot can increase green energy consumption by up to 117% and decrease energy cost by up to 39%, compared to a conventional scheduler. Based on these positive results, we conclude that green datacenters and green-energy-aware scheduling can have a significant role in building a more sustainable IT ecosystem.", "Energy expenditure has become a significant fraction of data center operating costs. Recently, \"geographical load balancing\" has been suggested to reduce energy cost by exploiting the electricity price differences across regions. However, this reduction of cost can paradoxically increase total energy use. This paper explores whether the geographical diversity of Internet-scale systems can additionally be used to provide environmental gains. 
Specifically, we explore whether geographical load balancing can encourage use of \"green\" renewable energy and reduce use of \"brown\" fossil fuel energy. We make two contributions. First, we derive two distributed algorithms for achieving optimal geographical load balancing. Second, we show that if electricity is dynamically priced in proportion to the instantaneous fraction of the total energy that is brown, then geographical load balancing provides significant reductions in brown energy use. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.", "To reduce the negative environmental implications (e.g., CO2 emission and global warming) caused by the rapidly increasing energy consumption, many Internet service operators have started taking various initiatives to operate their cloud-scale data centers with renewable energy. Unfortunately, due to the intermittent nature of renewable energy sources such as wind turbines and solar panels, currently renewable energy is often more expensive than brown energy that is produced with conventional fossil-based fuel. As a result, utilizing renewable energy may impose a considerable pressure on the sometimes stringent operation budgets of Internet service operators. Therefore, two key questions faced by many cloud-service operators are 1) how to dynamically distribute service requests among data centers in different geographical locations, based on the local weather conditions, to maximize the use of renewable energy, and 2) how to do that within their allowed operation budgets. In this paper, we propose GreenWare, a novel middleware system that conducts dynamic request dispatching to maximize the percentage of renewable energy used to power a network of distributed data centers, subject to the desired cost budget of the Internet service operator. 
Our solution first explicitly models the intermittent generation of renewable energy, e.g., wind power and solar power, with respect to varying weather conditions in the geographical location of each data center. We then formulate the core objective of GreenWare as a constrained optimization problem and propose an efficient request dispatching algorithm based on linear-fractional programming (LFP). We evaluate GreenWare with real-world weather, electricity price, and workload traces. Our experimental results show that GreenWare can significantly increase the use of renewable energy in cloud-scale data centers without violating the desired cost budget, despite the intermittent supplies of renewable energy in different locations and time-varying electricity prices and workloads.", "", "", "It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is “receding horizon control” (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC to perform badly, so we introduce variants of RHC that are guaranteed to perform as well in the face of such heterogeneity. 
These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand what portfolio of renewable energy is most effective." ] }
1404.4865
2194592614
In this paper, we design an analytically and experimentally better online energy and job scheduling algorithm with the objective of maximizing net profit for a service provider in green data centers. We first study the previously known algorithms and conclude that these online algorithms have provably poor performance against their worst-case scenarios. To guarantee an online algorithm's performance in hindsight, we design a randomized algorithm to schedule energy and jobs in the data centers and prove the algorithm's expected competitive ratio in various settings. Our algorithm is theoretically sound and it outperforms the previously known algorithms in many settings using both real traces and simulated data. An optimal offline algorithm is also implemented as an empirical benchmark.
Research on geographical data centers focuses on distributing workload among distributed data centers in order to consume the available free green energy or relatively cheaper brown energy at other data centers. Chen @cite_4 proposed a centralized scheduler that migrates workload across geographical data centers according to the green energy supply at different data centers. Lin @cite_11 proposed online algorithms for scheduling workloads across geographical data centers with the goal of minimizing total energy cost. Although the proposed algorithms reduced the energy cost, the total energy consumption increased. Liu @cite_12 further studied how geographical load balancing and a proportional brown energy pricing scheme could help encourage the use of green energy and reduce the use of brown energy. Zhang @cite_14 and Le @cite_9 @cite_3 studied scheduling online services across multiple data centers to maximize green energy consumption.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_3", "@cite_12", "@cite_11" ], "mid": [ "1528656891", "1988032138", "", "", "2168245849", "2020196954" ], "abstract": [ "To reduce the negative environmental implications (e.g., CO 2 emission and global warming) caused by the rapidly increasing energy consumption, many Internet service operators have started taking various initiatives to operate their cloud-scale data centers with renewable energy. Unfortunately, due to the intermittent nature of renewable energy sources such as wind turbines and solar panels, currently renewable energy is often more expensive than brown energy that is produced with conventional fossil-based fuel. As a result, utilizing renewable energy may impose a considerable pressure on the sometimes stringent operation budgets of Internet service operators. Therefore, two key questions faced by many cloud-service operators are 1) how to dynamically distribute service requests among data centers in different geographical locations, based on the local weather conditions, to maximize the use of renewable energy, and 2) how to do that within their allowed operation budgets. In this paper, we propose GreenWare, a novel middleware system that conducts dynamic request dispatching to maximize the percentage of renewable energy used to power a network of distributed data centers, subject to the desired cost budget of the Internet service operator. Our solution first explicitly models the intermittent generation of renewable energy, e.g., wind power and solar power, with respect to varying weather conditions in the geographical location of each data center. We then formulate the core objective of GreenWare as a constrained optimization problem and propose an efficient request dispatching algorithm based on linear-fractional programming (LFP). We evaluate GreenWare with real-world weather, electricity price, and workload traces. 
Our experimental results show that GreenWare can significantly increase the use of renewable energy in cloud-scale data centers without violating the desired cost budget, despite the intermittent supplies of renewable energy in different locations and time-varying electricity prices and workloads.", "Renewable (or green) energy, such as solar or wind, has at least partially powered data centers to reduce the environmental impact of traditional energy sources (brown energy with high carbon footprint). In this paper, we propose a holistic workload scheduling algorithm to minimize the brown energy consumption across multiple geographically distributed data centers with renewable energy sources. While green energy supply for a single data center is intermittent due to daily seasonal effects, our workload scheduling algorithm is aware of different amounts of green energy supply and dynamically schedules the workload across data centers. The scheduling decision adapts to workload and data center cooling dynamics. Our experiments with real workload traces demonstrate that our scheduling algorithm greatly reduces brown energy consumption by up to 40 in comparison with other scheduling policies.", "", "", "Energy expenditure has become a significant fraction of data center operating costs. Recently, \"geographical load balancing\" has been suggested to reduce energy cost by exploiting the electricity price differences across regions. However, this reduction of cost can paradoxically increase total energy use. This paper explores whether the geographical diversity of Internet-scale systems can additionally be used to provide environmental gains. Specifically, we explore whether geographical load balancing can encourage use of \"green\" renewable energy and reduce use of \"brown\" fossil fuel energy. We make two contributions. First, we derive two distributed algorithms for achieving optimal geographical load balancing. 
Second, we show that if electricity is dynamically priced in proportion to the instantaneous fraction of the total energy that is brown, then geographical load balancing provides significant reductions in brown energy use. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.", "It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is “receding horizon control” (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC perform badly, So, we introduce variants of RHC that are guaranteed to perform as well in the face of such heterogeneity. These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand what portfolio of renewable energy is most effective." ] }
1404.4865
2194592614
In this paper, we design an analytically and experimentally better online energy and job scheduling algorithm with the objective of maximizing net profit for a service provider in green data centers. We first study the previously known algorithms and conclude that these online algorithms have provably poor performance against their worst-case scenarios. To guarantee an online algorithm's performance in hindsight, we design a randomized algorithm to schedule energy and jobs in the data centers and prove the algorithm's expected competitive ratio in various settings. Our algorithm is theoretically sound, and it outperforms the previously known algorithms in many settings using both real traces and simulated data. An optimal offline algorithm is also implemented as an empirical benchmark.
Most prior work focuses on either maximizing green energy consumption or minimizing brown energy cost, except @cite_16 , which studied the net profit maximization problem for centralized data center service providers. There is, in fact, a trade-off between minimizing energy expenditure and maximizing net profit. @cite_16 proposed a systematic approach to maximize a green data center's profit under a stochastic assumption on the workload. The workload they studied is restricted to online service requests with variable arrival rates. In this paper, we study the profit maximization problem in a more general setting. In particular, we make no assumptions about the workload's stochastic properties, and we allow the workload to include batch jobs that request simultaneous execution on multiple nodes. In addition, we incorporate varying brown energy prices in our model.
{ "cite_N": [ "@cite_16" ], "mid": [ "2095546124" ], "abstract": [ "An increasing number of data centers today start to incorporate renewable energy solutions to cap their carbon footprint. However, the impact of renewable energy on large-scale data center design is still not well understood. In this paper, we model and evaluate data centers driven by intermittent renewable energy. Using real-world data center and renewable energy source traces, we show that renewable power utilization and load tuning frequency are two critical metrics for designing sustainable high-performance data centers. Our characterization reveals that load power fluctuation together with the intermittent renewable power supply introduce unnecessary tuning activities, which can increase the management overhead and degrade the performance of renewable energy driven data centers." ] }
1404.4702
2951572289
We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube @math . Informally, a function @math is self-bounding if for every @math , @math upper bounds the sum of all the @math marginal decreases in the value of the function at @math . Self-bounding functions include such well-known classes of functions as submodular and fractionally-subadditive (XOS) functions. They were introduced by (2000) in the context of concentration of measure inequalities. Our main result is a nearly tight @math -approximation of self-bounding functions by low-degree juntas. Specifically, all self-bounding functions can be @math -approximated in @math by a polynomial of degree @math over @math variables. We show that both the degree and junta-size are optimal up to logarithmic terms. Previous techniques considered stronger @math approximation and proved nearly tight bounds of @math on the degree and @math on the number of variables. Our bounds rely on the analysis of noise stability of self-bounding functions together with a stronger connection between noise stability and @math approximation by low-degree polynomials. This technique can also be used to get tighter bounds on @math approximation by low-degree polynomials and faster learning algorithm for halfspaces. These results lead to improved and in several cases almost tight bounds for PAC and agnostic learning of self-bounding functions relative to the uniform distribution. In particular, assuming hardness of learning juntas, we show that PAC and agnostic learning of self-bounding functions have complexity of @math .
Below we briefly mention some of the other related work. We direct the reader to @cite_36 and @cite_26 for more detailed surveys. Balcan and Harvey study learning of submodular functions without assumptions on the distribution and also require that the algorithm output a value within a multiplicative approximation factor of the true value with probability @math (the model is referred to as PMAC learning). This is a very demanding setting, and indeed one of the main results in @cite_36 is a factor- @math inapproximability bound for submodular functions. This notion of approximation is also considered in the subsequent works of Badanidiyuru et al. and Balcan et al. @cite_3 @cite_29 , where upper and lower approximation bounds are given for other related classes of functions such as XOS and subadditive. We emphasize that these strong lower bounds rely on a very specific distribution concentrated on a sparse set of points, which shows that this setting is very different from the uniform product distributions that are the focus of this paper.
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_26", "@cite_3" ], "mid": [ "", "2963394384", "2963852843", "48624319" ], "abstract": [ "", "A core element of microeconomics and game theory is that consumers have valuation functions over bundles of goods and that these valuation functions drive their purchases. A common assumption is that these functions are subadditive, meaning that the value given to a bundle is at most the sum of values on the individual items. In this paper, we provide nearly tight guarantees on the efficient learnability of subadditive valuations. We also provide nearly tight bounds for the subclass of XOS (fractionally subadditive) valuations, also widely used in the literature. We additionally leverage the structure of valuations in a number of interesting subclasses and obtain algorithms with stronger learning guarantees.", "We study the complexity of approximate representation and learning of submodular functions over the uniform distribution on the Boolean hypercube {0,1}^n. Our main result is the following structural theorem: any submodular function is ε-close in ℓ_2 to a real-valued decision tree (DT) of depth O(1/ε^2). This immediately implies that any submodular function is ε-close to a function of at most", "Motivated by the problem of querying and communicating bidders' valuations in combinatorial auctions, we study how well different classes of set functions can be sketched. More formally, let f be a function mapping subsets of some ground set [n] to the non-negative real numbers. We say that f' is an α-sketch of f if for every set S, the value f'(S) lies between f(S)/α and f(S), and f' can be specified by poly(n) bits. We show that for every subadditive function f there exists an α-sketch where α = n^{1/2}·O(polylog(n)). Furthermore, we provide an algorithm that finds these sketches with a polynomial number of demand queries. This is essentially the best we can hope for since: 1. We show that there exist subadditive functions (in fact, XOS functions) that do not admit an o(n^{1/2}) sketch. (Balcan and Harvey [3] previously showed that there exist functions belonging to the class of substitutes valuations that do not admit an O(n^{1/3}) sketch.) 2. We prove that every deterministic algorithm that accesses the function via value queries only cannot guarantee a sketching ratio better than n^{1−ε}. We also show that coverage functions, an interesting subclass of submodular functions, admit arbitrarily good sketches. Finally, we show an interesting connection between sketching and learning. We show that for every class of valuations, if the class admits an α-sketch, then it can be α-approximately learned in the PMAC model of Balcan and Harvey. The bounds we prove are only information-theoretic and do not imply the existence of computationally efficient learning algorithms in general." ] }
1404.4702
2951572289
We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube @math . Informally, a function @math is self-bounding if for every @math , @math upper bounds the sum of all the @math marginal decreases in the value of the function at @math . Self-bounding functions include such well-known classes of functions as submodular and fractionally-subadditive (XOS) functions. They were introduced by (2000) in the context of concentration of measure inequalities. Our main result is a nearly tight @math -approximation of self-bounding functions by low-degree juntas. Specifically, all self-bounding functions can be @math -approximated in @math by a polynomial of degree @math over @math variables. We show that both the degree and junta-size are optimal up to logarithmic terms. Previous techniques considered stronger @math approximation and proved nearly tight bounds of @math on the degree and @math on the number of variables. Our bounds rely on the analysis of noise stability of self-bounding functions together with a stronger connection between noise stability and @math approximation by low-degree polynomials. This technique can also be used to get tighter bounds on @math approximation by low-degree polynomials and faster learning algorithm for halfspaces. These results lead to improved and in several cases almost tight bounds for PAC and agnostic learning of self-bounding functions relative to the uniform distribution. In particular, assuming hardness of learning juntas, we show that PAC and agnostic learning of self-bounding functions have complexity of @math .
In @cite_51 , learning of submodular functions over the uniform distribution is motivated by problems in differentially private data release. They show that submodular functions with range @math are @math -approximated by a collection of @math @math -Lipschitz submodular functions. Each @math -Lipschitz submodular function can be @math -approximated by a constant. This leads to a learning algorithm running in time @math , which, however, requires value oracle access to the target function in order to build the collection.
{ "cite_N": [ "@cite_51" ], "mid": [ "2121372565" ], "abstract": [ "Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ)model. This gives a complete answer to the question when running time is not a concern. We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to an algorithm that efficiently releases differentially private answers to all Boolean conjunctions with 1 average error. This presents progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms." ] }
1404.4702
2951572289
We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube @math . Informally, a function @math is self-bounding if for every @math , @math upper bounds the sum of all the @math marginal decreases in the value of the function at @math . Self-bounding functions include such well-known classes of functions as submodular and fractionally-subadditive (XOS) functions. They were introduced by (2000) in the context of concentration of measure inequalities. Our main result is a nearly tight @math -approximation of self-bounding functions by low-degree juntas. Specifically, all self-bounding functions can be @math -approximated in @math by a polynomial of degree @math over @math variables. We show that both the degree and junta-size are optimal up to logarithmic terms. Previous techniques considered stronger @math approximation and proved nearly tight bounds of @math on the degree and @math on the number of variables. Our bounds rely on the analysis of noise stability of self-bounding functions together with a stronger connection between noise stability and @math approximation by low-degree polynomials. This technique can also be used to get tighter bounds on @math approximation by low-degree polynomials and faster learning algorithm for halfspaces. These results lead to improved and in several cases almost tight bounds for PAC and agnostic learning of self-bounding functions relative to the uniform distribution. In particular, assuming hardness of learning juntas, we show that PAC and agnostic learning of self-bounding functions have complexity of @math .
In a recent work, Raskhodnikova and Yaroslavtsev consider learning and testing of submodular functions taking values in the range @math (referred to as pseudo-Boolean ) @cite_15 . The error of a hypothesis in their framework is the probability that the hypothesis disagrees with the unknown function. They build on the approach from @cite_51 to show that pseudo-Boolean submodular functions can be expressed as @math -DNF and then give a @math -time PAC learning algorithm using value queries. Subsequently, Blais et al. proved the existence of a junta of size @math and used it to give an algorithm for testing submodularity using @math value queries @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_15", "@cite_51" ], "mid": [ "", "2964184416", "2121372565" ], "abstract": [ "", "We prove that any submodular function f: 0, 1 n → 0, 1,..., k can be represented as a pseudo-Boolean 2k-DNF formula. Pseudo-Boolean DNFs are a natural generalization of DNF representation for functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Hastad's switching lemma holds for pseudo-Boolean k-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for k-DNFs to pseudo-Boolean k-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form f: 0, 1 n → 0, 1,..., k . Our algorithm runs in time polynomial in n,", "Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ)model. This gives a complete answer to the question when running time is not a concern. We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to an algorithm that efficiently releases differentially private answers to all Boolean conjunctions with 1 average error. 
This presents progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms." ] }
1404.4316
2951627457
This paper addresses the challenge of establishing a bridge between deep convolutional neural networks and conventional object detection frameworks for accurate and efficient generic object detection. We introduce Dense Neural Patterns, short for DNPs, which are dense local features derived from discriminatively trained deep convolutional neural networks. DNPs can be easily plugged into conventional detection frameworks in the same way as other dense local features(like HOG or LBP). The effectiveness of the proposed approach is demonstrated with the Regionlets object detection framework. It achieved 46.1 mean average precision on the PASCAL VOC 2007 dataset, and 44.1 on the PASCAL VOC 2010 dataset, which dramatically improves the original Regionlets approach without DNPs.
Generic object detection has improved over the years, due to better deformation modeling and more effective handling of multiple viewpoints and occlusion. A complete survey of the object detection literature is certainly beyond the scope of this paper. Representative works include, but are not limited to, Histograms of Oriented Gradients @cite_23 and the Deformable Part-based Model and its extensions @cite_30 @cite_26 . This paper aims at incorporating the discriminative power of a learned deep CNN into these successful object detection frameworks. The execution of the idea is based on the Regionlets object detection framework, which is currently the state-of-the-art detection approach that does not use a deep neural network. More details about Regionlets are introduced in a later section.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_23" ], "mid": [ "2168356304", "2110226160", "2161969291" ], "abstract": [ "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "Generic object detection is confronted by dealing with different degrees of variations in distinct object classes with tractable computations, which demands descriptive and flexible object representations that are also efficient to evaluate for many locations. In view of this, we propose to model an object class by a cascaded boosting classifier which integrates various types of features from competing local regions, named as regionlets. A regionlet is a base feature extraction region defined proportionally to a detection window at an arbitrary resolution (i.e. size and aspect ratio). These regionlets are organized in small groups with stable relative positions to delineate fine grained spatial layouts inside objects. Their features are aggregated to a one-dimensional feature within one group so as to tolerate deformations.
Then we evaluate the object bounding box proposal in selective search from segmentation cues, limiting the evaluation locations to thousands. Our approach significantly outperforms the state-of-the-art on popular multi-class detection benchmark datasets with a single method, without any contexts. It achieves the detection mean average precision of 41.7 on the PASCAL VOC 2007 dataset and 39.7 on the VOC 2010 for 20 object categories. It achieves 14.7 mean average precision on the ImageNet dataset for 200 object categories, outperforming the latest deformable part-based model (DPM) by 4.7 .", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds." ] }
1404.4316
2951627457
This paper addresses the challenge of establishing a bridge between deep convolutional neural networks and conventional object detection frameworks for accurate and efficient generic object detection. We introduce Dense Neural Patterns, short for DNPs, which are dense local features derived from discriminatively trained deep convolutional neural networks. DNPs can be easily plugged into conventional detection frameworks in the same way as other dense local features(like HOG or LBP). The effectiveness of the proposed approach is demonstrated with the Regionlets object detection framework. It achieved 46.1 mean average precision on the PASCAL VOC 2007 dataset, and 44.1 on the PASCAL VOC 2010 dataset, which dramatically improves the original Regionlets approach without DNPs.
More discriminative and robust features are always desirable in object detection; they are arguably among the most important domain knowledge developed in the computer vision community in past years. Most of these features are based on colors @cite_25 , gradients @cite_23 , textures @cite_14 @cite_19 , or relatively high-order information such as covariance @cite_2 . These features are generic and have been demonstrated to be very effective in object detection. However, none of them encodes high-level information. The DNPs proposed in this paper complement existing features in this respect. Their combination produces much better performance than applying either one individually.
{ "cite_N": [ "@cite_14", "@cite_19", "@cite_23", "@cite_2", "@cite_25" ], "mid": [ "2163808566", "2548197316", "2161969291", "", "2066477856" ], "abstract": [ "This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. Other applications and several extensions are also discussed", "By combining Histograms of Oriented Gradients (HOG) and Local Binary Pattern (LBP) as the feature set, we propose a novel human detection approach capable of handling partial occlusion. Two kinds of detectors, i.e., global detector for whole scanning windows and part detectors for local regions, are learned from the training data using linear SVM. For each ambiguous scanning window, we construct an occlusion likelihood map by using the response of each block of the HOG feature to the global detector. The occlusion likelihood map is then segmented by Mean-shift approach. The segmented portion of the window with a majority of negative response is inferred as an occluded region. If partial occlusion is indicated with high likelihood in a certain scanning window, part detectors are applied on the unoccluded regions to achieve the final classification on the current scanning window. With the help of the augmented HOG-LBP feature and the global-part occlusion handling method, we achieve a detection rate of 91.3 with FPPW= 10−6, 94.7 with FPPW= 10−5, and 97.9 with FPPW= 10−4 on the INRIA dataset, which, to our best knowledge, is the best human detection performance on the INRIA dataset. 
The global-part occlusion handling method is further validated using synthesized occlusion data constructed from the INRIA and Pascal dataset.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "", "State-of-the-art object detectors typically use shape information as a low level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yields suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and when combined with traditional shape features provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. 
We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods." ] }
1404.4316
2951627457
This paper addresses the challenge of establishing a bridge between deep convolutional neural networks and conventional object detection frameworks for accurate and efficient generic object detection. We introduce Dense Neural Patterns, short for DNPs, which are dense local features derived from discriminatively trained deep convolutional neural networks. DNPs can be easily plugged into conventional detection frameworks in the same way as other dense local features (like HOG or LBP). The effectiveness of the proposed approach is demonstrated with the Regionlets object detection framework. It achieved 46.1% mean average precision on the PASCAL VOC 2007 dataset, and 44.1% on the PASCAL VOC 2010 dataset, which dramatically improves the original Regionlets approach without DNPs.
The proposed approach is a new example of transfer learning, transferring the knowledge learned from large-scale image classification (in this case, ImageNet image classification) to generic object detection. There have been some very interesting approaches to transferring the knowledge learned by deep neural networks. For example, @cite_27 and @cite_3 illustrated transfer learning with unlabeled data or labels from other tasks. Our work shares a similar spirit but in a different context. It transfers the knowledge learned from a classification task to object detection by trickling high-level information from the top convolutional layers of a deep CNN down to low-level image patches.
{ "cite_N": [ "@cite_27", "@cite_3" ], "mid": [ "2122922389", "2165698076" ], "abstract": [ "We present a new machine learning framework called \"self-taught learning\" for using unlabeled data in supervised classification tasks. We do not assume that the unlabeled data follows the same class labels or generative distribution as the labeled data. Thus, we would like to use a large number of unlabeled images (or audio samples, or text documents) randomly downloaded from the Internet to improve performance on a given image (or audio, or text) classification task. Such unlabeled data is significantly easier to obtain than in typical semi-supervised or transfer learning settings, making self-taught learning widely applicable to many practical learning problems. We describe an approach to self-taught learning that uses sparse coding to construct higher-level features using the unlabeled data. These features form a succinct input representation and significantly improve classification performance. When using an SVM for classification, we further show how a Fisher kernel can be learned for this representation.", "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. 
This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research." ] }
1404.3959
1537777897
Given the fast rise of increasingly autonomous artificial agents and robots, a key acceptability criterion will be the possible moral implications of their actions. In particular, intelligent persuasive systems (systems designed to influence humans via communication) constitute a highly sensitive topic because of their intrinsically social nature. Still, ethical studies in this area are rare and tend to focus on the output of the required action. Instead, this work focuses on the persuasive acts themselves (e.g. "is it morally acceptable that a machine lies or appeals to the emotions of a person to persuade her, even if for a good end?"). Exploiting a behavioral approach, based on human assessment of moral dilemmas -- i.e. without any prior assumption of underlying ethical theories -- this paper reports on a set of experiments. These experiments address the type of persuader (human or machine), the strategies adopted (purely argumentative, appeal to positive emotions, appeal to negative emotions, lie) and the circumstances. Findings display no differences due to the agent, mild acceptability for persuasion and reveal that truth-conditional reasoning (i.e. argument validity) is a significant dimension affecting subjects' judgment. Some implications for the design of intelligent persuasive systems are discussed.
Recently there has also been a growing interest in persuasive internet and mobile services; see the survey in @cite_4 @cite_6 . In parallel with this growth of application-oriented studies, there has been a growing interest in finding new 'cheap and fast' evaluation methodologies to assess the effectiveness of persuasive communication by means of crowdsourcing approaches @cite_30 @cite_21 @cite_7 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_21", "@cite_6" ], "mid": [ "1984022436", "2112786892", "2951692431", "1965291015", "2024245356" ], "abstract": [ "Amazon’s Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments. While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.", "A growing number of information technology systems and services are being developed to change users’ attitudes or behavior or both. 
Despite the fact that attitudinal theories from social psychology have been quite extensively applied to the study of user intentions and behavior, these theories have been developed for predicting user acceptance of the information technology rather than for providing systematic analysis and design methods for developing persuasive software solutions. This article is conceptual and theory-creating by its nature, suggesting a framework for Persuasive Systems Design (PSD). It discusses the process of designing and evaluating persuasive systems and describes what kind of content and software functionality may be found in the final product. It also highlights seven underlying postulates behind persuasive systems and ways to analyze the persuasion context (the intent, the event, and the strategy). The article further lists 28 design principles for persuasive system content and functionality, describing example software requirements and implementations. Some of the design principles are novel. Moreover, a new categorization of these principles is proposed, consisting of the primary task, dialogue, system credibility, and social support categories.", "In recent years there has been a growing interest in crowdsourcing methodologies to be used in experimental research for NLP tasks. In particular, evaluation of systems and theories about persuasion is difficult to accommodate within existing frameworks. In this paper we present a new cheap and fast methodology that allows fast experiment building and evaluation with fully-automated analysis at a low cost. The central idea is exploiting existing commercial tools for advertising on the web, such as Google AdWords, to measure message impact in an ecological setting. 
The paper includes a description of the approach, tips for how to use AdWords for scientific research, and results of pilot experiments on the impact of affective text variations which confirm the effectiveness of the approach.", "We examine how firms can create word-of-mouth peer influence and social contagion by designing viral features into their products and marketing campaigns. To econometrically identify the effectiveness of different viral features in creating social contagion, we designed and conducted a randomized field experiment involving the 1.4 million friends of 9,687 experimental users on Facebook.com. We find that viral features generate econometrically identifiable peer influence and social contagion effects. More surprisingly, we find that passive-broadcast viral features generate a 246% increase in peer influence and social contagion, whereas adding active-personalized viral features generates only an additional 98% increase. Although active-personalized viral messages are more effective in encouraging adoption per message and are correlated with more user engagement and sustained product use, passive-broadcast messaging is used more often, generating more total peer adoption in the network. Our work provides a model for how randomized trials can identify peer influence in social networks. This paper was accepted by Pradeep Chintagunta and Preyas Desai, special issue editors.
Five out of six of these papers (84.4%) have addressed behavioral change rather than an attitude change. Tailoring, tunneling, reduction and social comparison have been the most studied methods for persuasion. Quite surprisingly, ethical considerations have remained largely unaddressed in these papers. In general, many of the research papers seem to describe the investigated persuasive systems in a relatively vague manner, leaving room for some improvement." ] }
1404.3959
1537777897
Given the fast rise of increasingly autonomous artificial agents and robots, a key acceptability criterion will be the possible moral implications of their actions. In particular, intelligent persuasive systems (systems designed to influence humans via communication) constitute a highly sensitive topic because of their intrinsically social nature. Still, ethical studies in this area are rare and tend to focus on the output of the required action. Instead, this work focuses on the persuasive acts themselves (e.g. "is it morally acceptable that a machine lies or appeals to the emotions of a person to persuade her, even if for a good end?"). Exploiting a behavioral approach, based on human assessment of moral dilemmas -- i.e. without any prior assumption of underlying ethical theories -- this paper reports on a set of experiments. These experiments address the type of persuader (human or machine), the strategies adopted (purely argumentative, appeal to positive emotions, appeal to negative emotions, lie) and the circumstances. Findings display no differences due to the agent, mild acceptability for persuasion and reveal that truth-conditional reasoning (i.e. argument validity) is a significant dimension affecting subjects' judgment. Some implications for the design of intelligent persuasive systems are discussed.
Brain studies are providing further interesting clues. For the normal footbridge case, @cite_18 showed stronger activation in brain areas associated with emotional processing than in the bystander case, along with longer reaction times. The latter data can be interpreted as showing that it takes longer to come to terms with affective distress when trying to consider it permissible to push the big man from the footbridge than in the bystander case.
{ "cite_N": [ "@cite_18" ], "mid": [ "2129900561" ], "abstract": [ "The long-standing rationalist tradition in moral psychology emphasizes the role of reason in moral judgment. A more recent trend places increased emphasis on emotion. Although both reason and emotion are likely to play important roles in moral judgment, relatively little is known about their neural correlates, the nature of their interaction, and the factors that modulate their respective behavioral influences in the context of moral judgment. In two functional magnetic resonance imaging (fMRI) studies using moral dilemmas as probes, we apply the methods of cognitive neuroscience to the study of moral judgment. We argue that moral dilemmas vary systematically in the extent to which they engage emotional processing and that these variations in emotional engagement influence moral judgment. These results may shed light on some puzzling patterns in moral judgment observed by contemporary philosophers." ] }
1404.3959
1537777897
Given the fast rise of increasingly autonomous artificial agents and robots, a key acceptability criterion will be the possible moral implications of their actions. In particular, intelligent persuasive systems (systems designed to influence humans via communication) constitute a highly sensitive topic because of their intrinsically social nature. Still, ethical studies in this area are rare and tend to focus on the output of the required action. Instead, this work focuses on the persuasive acts themselves (e.g. "is it morally acceptable that a machine lies or appeals to the emotions of a person to persuade her, even if for a good end?"). Exploiting a behavioral approach, based on human assessment of moral dilemmas -- i.e. without any prior assumption of underlying ethical theories -- this paper reports on a set of experiments. These experiments address the type of persuader (human or machine), the strategies adopted (purely argumentative, appeal to positive emotions, appeal to negative emotions, lie) and the circumstances. Findings display no differences due to the agent, mild acceptability for persuasion and reveal that truth-conditional reasoning (i.e. argument validity) is a significant dimension affecting subjects' judgment. Some implications for the design of intelligent persuasive systems are discussed.
In summary, these experiments suggest that three factors are involved in the assessment of all-in impermissibility: cost benefit analysis, checking for rule violations and emotional activations @cite_3 . Depending on the conditions, each of the factors can play a major role, and several variants of these scenarios have been suggested in the literature (see for example @cite_25 ). In the following we will focus on the three trolley scenarios discussed because: a) they occupy a central place in the moral dilemma literature; b) they have proven to be capable of eliciting different moral acceptance judgments; c) they are sensitive to the three main factors for direct action acceptability assessment (cost benefit analysis, rule violations and emotional activation).
{ "cite_N": [ "@cite_25", "@cite_3" ], "mid": [ "2127699214", "2073045358" ], "abstract": [ "Recent findings suggest that exerting executive control influences responses to moral dilemmas. In our study, subjects judged how morally appropriate it would be for them to kill one person to save others. They made these judgments in 24 dilemmas that systematically varied physical directness of killing, personal risk to the subject, inevitability of the death, and intentionality of the action. All four of these variables demonstrated main effects. Executive control was indexed by scores on working-memory-capacity (WMC) tasks. People with higher WMC found certain types of killing more appropriate than did those with lower WMC and were more consistent in their judgments. We also report interactions between manipulated variables that implicate complex emotion-cognition integration processes not captured by current dual-process views of moral judgment.", "Recent work shows an important asymmetry in lay intuitions about moral dilemmas. Most people think it is permissible to divert a train so that it will kill one innocent person instead of five, but most people think that it is not permissible to push a stranger in front of a train to save five innocents. We argue that recent emotion-based explanations of this asymmetry have neglected the contribution that rules make to reasoning about moral dilemmas. In two experiments, we find that participants show a parallel asymmetry about versions of the dilemmas that have minimized emotional force. In a third experiment, we find that people distinguish between whether an action violates a moral rule and whether it is, all things considered, wrong. We propose that judgments of whether an action is wrong, all things considered, implicate a complex set of psychological processes, including representations of rules, emotional responses, and assessments of costs and benefits." ] }
1404.3840
2950005842
Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model, named GaussianFace, to enrich the diversity of training data. In comparison to existing methods, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. Extensive experiments demonstrate the effectiveness of the proposed model in learning from diverse data sources and generalizing to an unseen domain. Specifically, our algorithm achieves an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.
Human and computer performance on face recognition has been compared extensively @cite_52 @cite_20 @cite_21 @cite_36 @cite_37 @cite_8 . These studies have shown that computer-based algorithms were more accurate than humans in well-controlled environments (e.g., frontal view, natural expression, and controlled illumination), whilst still comparable to humans in the poor condition (e.g., frontal view, natural expression, and uncontrolled illumination). However, the above conclusion is only verified on face datasets with controlled variations, where only one factor changes at a time @cite_52 @cite_20 . To date, there has been virtually no work showing that computer-based algorithms could surpass human performance on unconstrained face datasets, such as LFW, which exhibits natural (multifactor) variations in pose, lighting, expression, race, ethnicity, age, gender, clothing, hairstyles, and other parameters.
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_36", "@cite_21", "@cite_52", "@cite_20" ], "mid": [ "1966831001", "1944849869", "2149481809", "2149753900", "2152433273", "2066117401" ], "abstract": [ "Since 2005, human and computer performance has been systematically compared as part of face recognition competitions, with results being reported for both still and video imagery. The key results from these competitions are reviewed. To analyze performance across studies, the cross-modal performance analysis (CMPA) framework is introduced. The CMPA framework is applied to experiments that were part of face a recognition competition. The analysis shows that for matching frontal faces in still images, algorithms are consistently superior to humans. For video and difficult still face pairs, humans are superior. Finally, based on the CMPA framework and a face performance index, we outline a challenge problem for developing algorithms that are superior to humans for the general face recognition problem.", "The paper reviews the characteristics of human face recognition that should be reflected in any psychologically plausible computational model of face recognition. We then summarise recent results which compare aspects of human face perception and memory with the performance of two computer models which each claim some degree of biological plausibility. We show how the performance of each is correlated with human performance on the same images, but that each explains rather different aspects of human performance with these faces. We conclude with a discussion of the coding of image sequences by humans and computers.", "Automatic retrieval of face images from police mug-shot databases is critically important for law enforcement agencies. It can effectively help investigators to locate or narrow down potential suspects. 
However, in many cases, a photo image of a suspect is not available and the best substitute is often a sketch drawing based on the recollection of an eyewitness. We present a novel photo retrieval system using face sketches. By transforming a photo image into a sketch, we reduce the difference between photo and sketch significantly, thus allowing effective matching between the two. Experiments over a data set containing 188 people clearly demonstrate the efficacy of the algorithm.", "Face recognition technologies have seen dramatic improvements in performance over the past decade, and such systems are now widely used for security and commercial applications. Since recognizing faces is a task that humans are understood to be very good at, it is common to want to compare automatic face recognition (AFR) and human face recognition (HFR) in terms of biometric performance. This paper addresses this question by: 1) conducting verification tests on volunteers (HFR) and commercial AFR systems and 2) developing statistical methods to support comparison of the performance of different biometric systems. HFR was tested by presenting face-image pairs and asking subjects to classify them on a scale of \"Same,\" \"Probably Same,\" \"Not sure,\" \"Probably Different,\" and \"Different\"; the same image pairs were presented to AFR systems, and the biometric match score was measured. To evaluate these results, two new statistical evaluation techniques are developed. The first is a new way to normalize match-score distributions, where a normalized match score is calculated as a function of the angle from a representation of [false match rate, false nonmatch rate] values in polar coordinates from some center. Using this normalization, we develop a second methodology to calculate an average detection error tradeoff (DET) curve and show that this method is equivalent to direct averaging of DET data along each angle from the center. 
This procedure is then applied to compare the performance of the best AFR algorithms available to us in the years 1999, 2001, 2003, 2005, and 2006, in comparison to human scores. Results show that algorithms have dramatically improved in performance over that time. In comparison to the performance of the best AFR system of 2006, 29.2% of human subjects performed better, while 37.5% performed worse.", "There has been significant progress in improving the performance of computer-based face recognition algorithms over the last decade. Although algorithms have been tested and compared extensively with each other, there has been remarkably little work comparing the accuracy of computer-based face recognition systems with humans. We compared seven state-of-the-art face recognition algorithms with humans on a face-matching task. Humans and algorithms determined whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. Three algorithms surpassed human performance matching face pairs prescreened to be \"difficult\" and six algorithms surpassed humans on \"easy\" face pairs. Although illumination variation continues to challenge face recognition algorithms, current algorithms compete favorably with humans. The superior performance of the best algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need to compare algorithms with the best current control-humans.
The degree of difficulty introduced by photometric and appearance-based variability was estimated using a face recognition algorithm created by fusing three top-performing algorithms from a recent international competition. The algorithm computed similarity scores for a constant set of same-identity and different-identity pairings from multiple images. Image pairs were assigned to good, moderate, and poor accuracy groups by ranking the similarity scores for each identity pairing, and dividing these rankings into three strata. This procedure isolated the role of photometric variables from the effects of the distinctiveness of particular identities. Algorithm performance for these constant identity pairings varied dramatically across the groups. In a series of experiments, humans matched image pairs from the good, moderate, and poor conditions, rating the likelihood that the images were of the same person (1: sure same - 5: sure different). Algorithms were more accurate than humans in the good and moderate conditions, but were comparable to humans in the poor accuracy condition. To date, these are the most variable illumination- and appearance-based recognition conditions on which humans and machines have been compared. The finding that machines were never less accurate than humans on these challenging frontal images suggests that face recognition systems may be ready for applications with comparable difficulty. We speculate that the superiority of algorithms over humans in the less challenging conditions may be due to the algorithms' use of detailed, view-specific identity information. Humans may consider this information less important due to its limited potential for robust generalization in suboptimal viewing conditions." ] }
1404.3913
2072382202
The tremendous increase in the size and heterogeneity of supercomputers makes it very difficult to predict the performance of a scheduling algorithm. Therefore, dynamic solutions, where scheduling decisions are made at runtime, have overtaken static allocation strategies. The simplicity and efficiency of dynamic schedulers such as Hadoop are a key to the success of the MapReduce framework. Dynamic schedulers such as StarPU, PaRSEC or StarSs are also developed for more constrained computations, e.g. task graphs coming from linear algebra. To make their decisions, these runtime systems make use of some static information, such as the distance of tasks to the critical path or the affinity between tasks and computing resources (CPU, GPU, …), and of dynamic information, such as where input data are actually located. In this paper, we concentrate on two elementary linear algebra kernels, namely the outer product and the matrix multiplication. For each problem, we propose several dynamic strategies that can be used at runtime and we provide an analytic study of their theoretical performance. We prove that the theoretical analysis provides a very good estimate of the amount of communication induced by a dynamic strategy and can be used to efficiently determine the thresholds used in a dynamic scheduler, thus making it possible to choose among strategies for a given problem and architecture.
As mentioned in the introduction, several runtime systems have recently been proposed to schedule applications on parallel systems. Among other successful projects, we may cite StarPU @cite_0 from INRIA Bordeaux (France), DAGuE and PaRSEC @cite_8 @cite_9 from ICL, Univ. of Tennessee Knoxville (USA), StarSs @cite_3 from Barcelona Supercomputing Center (Spain), and KAAPI @cite_7 from INRIA Grenoble (France). Most of these tools make it possible, to a certain extent, to schedule an application described as a task graph (usually available at the beginning of the computation, but sometimes generated and discovered during the execution itself) onto a parallel platform. Most of these tools can harness complex platforms, such as multicores and hybrid platforms, including GPUs or other accelerators. These runtime systems usually keep track of the occupation of each computing device and allocate new tasks to the processing unit that is expected to minimize the task's completion time. Our goal in this paper is to provide an analysis of such dynamic schedulers for simple operations that do not involve task dependencies but exhibit massive data reuse.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_9", "@cite_3", "@cite_0" ], "mid": [ "2113941519", "2122747952", "2087440962", "2087085699", "2121893797" ], "abstract": [ "The high availability of multiprocessor clusters for computer science seems to be very attractive to the engineer because,at a first level, such computers aggregate high performances. Nevertheless, obtaining peak performances on irregular applications such as computer algebra problems remains a challenging problem. The delay to access memory is non uniform and the irregularity of computations requires to use scheduling algorithms in order to automatically balance the workload among the processors. This paper focuses on the runtime support implementation to exploit with great efficiency the computation resources of a multiprocessor cluster. The originality of our approach relies on the implementation of an efficient work-stealing algorithm for a macro data flow computation based on minor extension of POSIX thread interface.", "The frenetic development of the current architectures places a strain on the current state-of-the-art programming environments. Harnessing the full potential of such architectures is a tremendous task for the whole scientific computing community. We present DAGuE a generic framework for architecture aware scheduling and management of micro-tasks on distributed many-core heterogeneous architectures. Applications we consider can be expressed as a Direct Acyclic Graph of tasks with labeled edges designating data dependencies. DAGs are represented in a compact, problem-size independent format that can be queried on-demand to discover data dependencies, in a totally distributed fashion. DAGuE assigns computation threads to the cores, overlaps communications and computations and uses a dynamic, fully-distributed scheduler based on cache awareness, data-locality and task priority. 
We demonstrate the efficiency of our approach, using several micro-benchmarks to analyze the performance of different components of the framework, and a linear algebra factorization as a use case.", "New high-performance computing system designs with steeply escalating processor and core counts, burgeoning heterogeneity and accelerators, and increasingly unpredictable memory access times call for one or more dramatically new programming paradigms. These new approaches must react and adapt quickly to unexpected contentions and delays, and they must provide the execution environment with sufficient intelligence and flexibility to rearrange the execution to improve resource utilization. The authors present an approach based on task parallelism that reveals the application's parallelism by expressing its algorithm as a task flow. This strategy allows the algorithm to be decoupled from the data distribution and the underlying hardware, since the algorithm is entirely expressed as flows of data. This kind of layering provides a clear separation of concerns among architecture, algorithm, and data distribution. Developers benefit from this separation because they can focus solely on the algorithmic level without the constraints involved with programming for current and future hardware trends.", "Programming models for multicore and many-core systems are listed as one of the main challenges in the near future for computing research. These programming models should be able to exploit the underlying platform, but also should have good programmability to enable programmer productivity. With respect to the heterogeneity and hierarchy of the underlying platforms, the programming models should take them into account but they should also enable the programmer to be unaware of the complexity of the hardware. In this paper we present an extension of the StarSs syntax to support task hierarchy. 
A motivation for such a hierarchical approach is presented through experimentation with CellSs. A prototype implementation of such a hierarchical task-based programming model that combines a first task level with SMPSs and a second task level with CellSs is presented. The preliminary results obtained when executing a matrix multiplication and a Cholesky factorization show the viability and potential of the approach and the current issues raised.", "In the field of HPC, the current hardware trend is to design multiprocessor architectures featuring heterogeneous technologies such as specialized coprocessors (e.g. Cell BE) or data-parallel accelerators (e.g. GPUs). Approaching the theoretical performance of these architectures is a complex issue. Indeed, substantial efforts have already been devoted to efficiently offload parts of the computations. However, designing an execution model that unifies all computing units and associated embedded memory remains a main challenge. We therefore designed StarPU, an original runtime system providing a high-level, unified execution model tightly coupled with an expressive data management library. The main goal of StarPU is to provide numerical kernel designers with a convenient way to generate parallel tasks over heterogeneous hardware on the one hand, and easily develop and tune powerful scheduling algorithms on the other hand. We have developed several strategies that can be selected seamlessly at run-time, and we have analyzed their efficiency on several algorithms running simultaneously over multiple cores and a GPU. In addition to substantial improvements regarding execution times, we have obtained consistent superlinear parallelism by actually exploiting the heterogeneous nature of the machine. We eventually show that our dynamic approach competes with the highly optimized MAGMA library and overcomes the limitations of the corresponding static scheduling in a portable way. Copyright © 2010 John Wiley & Sons, Ltd." ] }
1404.3913
2072382202
The tremendous increase in the size and heterogeneity of supercomputers makes it very difficult to predict the performance of a scheduling algorithm. Therefore, dynamic solutions, where scheduling decisions are made at runtime have overpassed static allocation strategies. The simplicity and efficiency of dynamic schedulers such as Hadoop are a key of the success of the MapReduce framework. Dynamic schedulers such as StarPU, PaRSEC or StarSs are also developed for more constrained computations, e.g. task graphs coming from linear algebra. To make their decisions, these runtime systems make use of some static information, such as the distance of tasks to the critical path or the affinity between tasks and computing resources (CPU, GPU, …) and of dynamic information, such as where input data are actually located. In this paper, we concentrate on two elementary linear algebra kernels, namely the outer product and the matrix multiplication. For each problem, we propose several dynamic strategies that can be used at runtime and we provide an analytic study of their theoretical performance. We prove that the theoretical analysis provides very good estimate of the amount of communications induced by a dynamic strategy and can be used in order to efficiently determine thresholds used in dynamic scheduler, thus enabling to choose among them for a given problem and architecture.
Many studies have proposed to use queueing theory @cite_10 to study the behavior of simple parallel systems and their dynamic evolution. Among many others, @cite_5 propose to use such stochastic models to model computing grids, and Mitzenmacher @cite_11 studies how out-of-date information can lead to bad scheduling decisions in a simple parallel system.
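The load-balancing models studied in this line of work are often illustrated with the "power of two random choices": each job samples d queues and joins the shortest. A toy simulation in that spirit (all parameters arbitrary, names ours):

```python
import random

def max_load(n_queues, n_jobs, d, seed=0):
    """Place n_jobs into n_queues; each job samples d queues uniformly
    and joins the currently shortest one.  d=1 is blind random placement,
    d=2 is the 'two choices' rule.  Returns the maximum final queue length."""
    rng = random.Random(seed)
    queues = [0] * n_queues
    for _ in range(n_jobs):
        choices = [rng.randrange(n_queues) for _ in range(d)]
        shortest = min(choices, key=lambda q: queues[q])
        queues[shortest] += 1
    return max(queues)

# Average the maximum load over many runs to smooth out the randomness.
avg = lambda d: sum(max_load(100, 100, d, seed=s) for s in range(50)) / 50
```

Averaged over runs, sampling two queues and joining the shorter one flattens the maximum load markedly compared to blind random placement.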
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2031038315", "2152530926", "2154007983" ], "abstract": [ "In this paper we propose a new routing policy to route jobs to clusters in computational grids. This routing policy is based on index tables computed at each cluster. These tables can be computed off-line or on-line. Their computations use predictions about the average future behavior of the grid. We show how can be used in practice for task allocations in computational grids. We also report numerous simulations providing numerical evidence of the efficiency of our index routing policy compared with the classical brokers used in most production grids today.", "Praise for the Third Edition: \"This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented.\"IIE Transactions on Operations EngineeringThoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research.This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. 
Newly featured topics of the Fourth Edition include:Retrial queuesApproximations for queueing networksNumerical inversion of transformsDetermining the appropriate number of servers to balance quality and cost of serviceEach chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site.With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science.", "We consider the problem of load balancing in dynamic distributed systems in cases where new incoming tasks can make use of old information. For example, consider a multiprocessor system where incoming tasks with exponentially distributed service requirements arrive as a Poisson process, the tasks must choose a processor for service, and a task knows when making this choice the processor queue lengths from T seconds ago. What is a good strategy for choosing a processor in order for tasks to minimize their expected time in the system? Such models can also be used to describe settings where there is a transfer delay between the time a task enters a system and the time it reaches a processor for service. Our models are based on considering the behavior of limiting systems where the number of processors goes to infinity. 
The limiting systems can be shown to accurately describe the behavior of sufficiently large systems and simulations demonstrate that they are reasonably accurate even for systems with a small number of processors. Our studies of specific models demonstrate the importance of using randomness to break symmetry in these systems and yield important rules of thumb for system design. The most significant result is that only small amounts of queue length information can be extremely useful in these settings; for example, having incoming tasks choose the least loaded of two randomly chosen processors is extremely effective over a large range of possible system parameters. In contrast, using global information can actually degrade performance unless used carefully; for example, unlike most settings where the load information is current, having tasks go to the apparently least loaded server can significantly hurt performance." ] }
1404.3913
2072382202
The tremendous increase in the size and heterogeneity of supercomputers makes it very difficult to predict the performance of a scheduling algorithm. Therefore, dynamic solutions, where scheduling decisions are made at runtime have overpassed static allocation strategies. The simplicity and efficiency of dynamic schedulers such as Hadoop are a key of the success of the MapReduce framework. Dynamic schedulers such as StarPU, PaRSEC or StarSs are also developed for more constrained computations, e.g. task graphs coming from linear algebra. To make their decisions, these runtime systems make use of some static information, such as the distance of tasks to the critical path or the affinity between tasks and computing resources (CPU, GPU, …) and of dynamic information, such as where input data are actually located. In this paper, we concentrate on two elementary linear algebra kernels, namely the outer product and the matrix multiplication. For each problem, we propose several dynamic strategies that can be used at runtime and we provide an analytic study of their theoretical performance. We prove that the theoretical analysis provides very good estimate of the amount of communications induced by a dynamic strategy and can be used in order to efficiently determine thresholds used in dynamic scheduler, thus enabling to choose among them for a given problem and architecture.
Recently, mean field techniques @cite_12 @cite_4 have been proposed for analyzing such dynamic processes. They give a formal framework for deriving a system of ordinary differential equations that is the limit of a Markovian system when the number of objects goes to infinity. Such techniques were first used in @cite_13 , where the author derives differential equations for a system of homogeneous processors that steal a single job when idle.
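The single-steal policy analysed in @cite_13 can be mimicked by a crude round-based toy model (ours; this is the discrete process whose large-system limit the mean-field ODEs describe, not the ODEs themselves): every round each processor executes one unit of work, and each processor left idle then tries to steal a single task from one randomly chosen victim.

```python
import random

def work_stealing(loads, rounds, seed=0):
    """Round-based toy model of single-job work stealing.

    loads: initial number of tasks on each processor.  Each round, every
    busy processor executes one task; each idle processor then picks one
    victim uniformly at random and steals a single task, provided the
    victim keeps at least one.  Returns the loads after `rounds` rounds."""
    rng = random.Random(seed)
    loads = list(loads)
    n = len(loads)
    for _ in range(rounds):
        loads = [max(0, x - 1) for x in loads]          # execute one unit each
        for p in range(n):
            if loads[p] == 0:                           # idle: attempt one steal
                victim = rng.randrange(n)
                if victim != p and loads[victim] > 1:
                    loads[victim] -= 1
                    loads[p] += 1
    return loads
```

Since stealing only moves work around while every round drains at least one unit as long as work remains, a total of W tasks is always gone after at most W rounds.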
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_12" ], "mid": [ "2125103654", "2029804570", "2124856311" ], "abstract": [ "In this paper we analyze the limiting behavior of several randomized work stealing algorithms in a dynamic setting. Our models represent the limiting behavior of systems as the number of processors grows to infinity using differential equations. The advantages of this approach include the ability to model a large variety of systems and to provide accurate numerical approximations of system behavior even when the number of processors is relatively small. We show how this approach can yield significant intuition about the behavior of work stealing algorithms in realistic settings.", "We consider models of N interacting objects, where the interaction is via a common resource and the distribution of states of all objects. We introduce the key scaling concept of intensity; informally, the expected number of transitions per object per time slot is of the order of the intensity. We consider the case of vanishing intensity, i.e. the expected number of object transitions per time slot is o(N). We show that, under mild assumptions and for large N, the occupancy measure converges, in mean square (and thus in probability) over any finite horizon, to a deterministic dynamical system. The mild assumption is essentially that the coefficient of variation of the number of object transitions per time slot remains bounded with N. No independence assumption is needed anywhere. The convergence results allow us to derive properties valid in the stationary regime. We discuss when one can assure that a stationary point of the ODE is the large N limit of the stationary probability distribution of the state of one object for the system with N objects. 
We use this to develop a critique of the fixed point method sometimes used in conjunction with the decoupling assumption.", "We study the convergence of Markov decision processes, composed of a large number of objects, to optimization problems on ordinary differential equations. We show that the optimal reward of such a Markov decision process, which satisfies a Bellman equation, converges to the solution of a continuous Hamilton-Jacobi-Bellman (HJB) equation based on the mean field approximation of the Markov decision process. We give bounds on the difference of the rewards and an algorithm for deriving an approximating solution to the Markov decision process from a solution of the HJB equations. We illustrate the method on three examples pertaining, respectively, to investment strategies, population dynamics control and scheduling in queues. They are used to illustrate and justify the construction of the controlled ODE and to show the advantage of solving a continuous HJB equation rather than a large discrete Bellman equation." ] }
1404.3945
1972158761
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
In all aforementioned works, the base station of a point-to-multipoint network (such as cellular, Wi-Fi, WiMAX, and roadside-to-vehicle networks) was assumed to be responsible for the recovery of erased packets. This can pose a threat to the resources of such base stations and their ability to deliver the huge data rates required, especially in future wireless standards. The problem becomes more severe in roadside-to-vehicle networks, since vehicles usually pass the roadside senders too quickly to rely on them for packet recovery, and must instead recover the missing packets among themselves. One alternative is the notion of cooperative data exchange (CDE) introduced in @cite_32 . In this configuration, clients cooperate to exchange data by sending IDNC recovery packets to each other over short-range, more reliable communication channels, thus allowing the base station to serve other clients. This CDE model is also important for fast and reliable data communication over ad-hoc networks, such as vehicular and sensor networks. Consequently, it is very important to study the minimization of delays and of the number of transmissions in such IDNC-based CDE systems.
{ "cite_N": [ "@cite_32" ], "mid": [ "2144549564" ], "abstract": [ "The advantages of coded cooperative data exchange has been studied in the literature. In this problem, a group of wireless clients are interested in the same set of packets (a multicast scenario). Each client initially holds a subset of packets and wills to obtain its missing packets in a cooperative setting by exchanging packets with its peers. Cooperation via short range transmission links among the clients (which are faster, cheaper and more reliable) is an alternative for retransmissions by the base station. In this paper, we extend the problem of cooperative data exchange to the case of multiple unicasts to a set of n clients, where each client c i is interested in a specific message x i and the clients cooperate with each others to compensate the errors occurred over the downlink. Moreover, our proposed method maintains the secrecy of individuals' messages at the price of a substantially small overhead." ] }
1404.3945
1972158761
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
Unlike the conventional point-to-multipoint scenario, IDNC-based CDE systems require decisions not only on packet combinations but also on which client should transmit in every round, in order to achieve a target quality for one of the network metrics. Recently, Aboutorab et al. @cite_18 considered the problem of minimizing the sum decoding delay for CDE in a centralized fashion. By centralized, we mean that a central unit (such as the base station in the cellular example) decides which client sends which packet combination in each transmission.
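For intuition, the instant-decodability condition that constrains these packet-combination choices is easy to state: an XOR of source packets serves a client if and only if the client already holds all but exactly one of the combined packets, since it can then XOR out everything it knows and recover the remaining one. A sketch (representation and names ours):

```python
def served_clients(combination, has, wants):
    """Return the clients for which an XOR combination is instantly decodable.

    combination: list of source-packet ids XORed together;
    has:   dict client -> set of packet ids the client already holds;
    wants: dict client -> set of packet ids the client still wants."""
    served = []
    for client, known in has.items():
        missing = [p for p in combination if p not in known]
        # Exactly one unknown packet in the mix: the client decodes it now.
        if len(missing) == 1 and missing[0] in wants[client]:
            served.append(client)
    return served
```

A good sender/combination pair is then one that serves many clients at once; the heuristics in @cite_18 aim at exactly that kind of choice.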
{ "cite_N": [ "@cite_18" ], "mid": [ "2030775967" ], "abstract": [ "This paper investigates the use of instantly decodable network coding (IDNC) for minimizing the mean decoding delay in multicast cooperative data exchange systems, where the clients cooperate with each other to obtain their missing packets. Here, IDNC is used to reduce the decoding delay of each transmission across all clients. We first introduce a new framework to find the optimum client and coded packet that result in the minimum mean decoding delay. However, since finding the optimum solution of the proposed framework is NP-hard, we further propose a heuristic algorithm that aims to minimize the lower bound on the expected decoding delay in each transmission. The effectiveness of the proposed algorithm is assessed through simulations." ] }
1404.3875
2268442582
Randomly generating structured objects is important in testing and optimizing functional programs, whereas generating random @math -terms is more specifically needed for testing and optimizing compilers. For that a tool called QuickCheck has been proposed, but in this tool the control of the random generation is left to the programmer. Ten years ago, a method called Boltzmann samplers has been proposed to generate combinatorial structures. In this paper, we show how Boltzmann samplers can be developed to generate lambda-terms, but also other data structures like trees. These samplers rely on a critical value which parameters the main random selector and which is exhibited here with explanations on how it is computed. Haskell programs are proposed to show how samplers are actually implemented.
In the introduction we cited papers that are clearly connected to this work. In a recent work, @cite_5 propose an improved random generation of binary trees and Motzkin trees, based on Rémy's algorithm @cite_22 (or Algorithm R, as Knuth calls it @cite_14 ). Instead of growing the trees from the root, they propose, like Rémy, to grow the trees from the inside by an operation called grafting. It is not clear how this can be generalized to @math -terms, as it requires finding ``a combinatorial interpretation for the holonomic equations [which] is not [...] always possible, and even for simple combinatorial objects this is not elementary'' (conclusion of @cite_5 , page 16).
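For readers unfamiliar with it, Rémy's grafting step can be rendered compactly (representation choices are ours: a leaf is None, an internal node is a two-element list). At step k the current tree has 2k+1 nodes; one of them is chosen uniformly, a new internal node is grafted in its place, and the displaced subtree is hung on a random side next to a fresh leaf, which yields a uniformly random binary tree with n internal nodes.

```python
import random

def remy(n, seed=0):
    """Rémy's algorithm: uniform random binary tree with n internal nodes."""
    rng = random.Random(seed)
    root = [None]           # one-slot box, so the root position can be rewired
    slots = [(root, 0)]     # (container, index) address of every node so far
    for k in range(n):
        parent, i = slots[rng.randrange(2 * k + 1)]
        displaced = parent[i]
        # Graft a new internal node where the chosen node was; the displaced
        # subtree goes on a uniformly random side, a fresh leaf on the other.
        node = [displaced, None] if rng.random() < 0.5 else [None, displaced]
        parent[i] = node
        slots.append((node, 0))   # address of one child (displaced or leaf)
        slots.append((node, 1))   # address of the other child
    return root[0]

def internal(t):
    """Count internal nodes of a tree built by remy()."""
    return 0 if t is None else 1 + internal(t[0]) + internal(t[1])
```

Note that the slot previously addressing the chosen node now addresses the new internal node, so all 2k+3 node positions remain addressed exactly once, which is what makes the uniform choice at the next step possible.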
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_22" ], "mid": [ "129748510", "123238105", "300201706" ], "abstract": [ "We present a new uniform random sampler for binary trees with @math internal nodes consuming @math random bits on average. This makes it quasi-optimal and out-performs the classical Remy algorithm. We also present a sampler for unary-binary trees with @math nodes taking @math random bits on average. Both are the first linear-time algorithms to be optimal up to a constant.", "This multivolume work on the analysis of algorithms has long been recognized as the definitive description of classical computer science.The three complete volumes published to date already comprise a unique and invaluable resource in programming theory and practice. Countless readers have spoken about the profound personal influence of Knuth's writings. Scientists have marveled at the beauty and elegance of his analysis, while practicing programmers have successfully applied his “cookbook” solutions to their day-to-day problems. All have admired Knuth for the breadth, clarity, accuracy, and good humor found in his books.To begin the fourth and later volumes of the set, and to update parts of the existing three, Knuth has created a series of small books called fascicles, which will be published at regular intervals. Each fascicle will encompass a section or more of wholly new or revised material. Ultimately, the content of these fascicles will be rolled up into the comprehensive, final versions of each volume, and the enormous undertaking that began in 1962 will be complete.Volume 4, Fascicle 4This latest fascicle covers the generation of all trees, a basic topic that has surprisingly rich ties to the first three volumes of The Art of Computer Programming. In thoroughly discussing this well-known subject, while providing 124 new exercises, Knuth continues to build a firm foundation for programming. 
To that same end, this fascicle also covers the history of combinatorial generation. Spanning many centuries, across many parts of the world, Knuth tells a fascinating story of interest and relevance to every artful programmer, much of it never before told. The story even includes a touch of suspense: two problems that no one has yet been able to solve.", "Presentation d'une methode de type geometrique pour denombrer les arbres binaires de taille n; on s'interesse d'abord a un type d'objets plus complexes les arbres binaires a feuilles numerotees, et on obtient pour ceux-ci un procede de construction iteratif tres simple, l'insertion des feuilles par numeros croissants" ] }
1404.3875
2268442582
Randomly generating structured objects is important in testing and optimizing functional programs, whereas generating random @math -terms is more specifically needed for testing and optimizing compilers. For that a tool called QuickCheck has been proposed, but in this tool the control of the random generation is left to the programmer. Ten years ago, a method called Boltzmann samplers has been proposed to generate combinatorial structures. In this paper, we show how Boltzmann samplers can be developed to generate lambda-terms, but also other data structures like trees. These samplers rely on a critical value which parameters the main random selector and which is exhibited here with explanations on how it is computed. Haskell programs are proposed to show how samplers are actually implemented.
We would also like to mention papers on counting @math -terms @cite_1 @cite_21 and on evaluating their combinatorial properties, namely @cite_25 @cite_17 @cite_4 @cite_23 . Another related paper is @cite_9 , which proposes Haskell programs for enumerating structures.
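As a minimal illustration of the critical value mentioned in the abstract above, consider plain binary trees, whose generating function satisfies B(x) = 1 + x B(x)^2: the free Boltzmann sampler draws an internal node with probability x·B(x), which equals exactly 1/2 at the critical value x = 1/4. The sketch below is ours and in Python rather than the paper's Haskell; it uses the usual size-ceiling rejection to keep the critical sampler from producing huge trees.

```python
import random

def node_probability(x):
    """B(x) = 1 + x*B(x)^2  =>  B(x) = (1 - sqrt(1 - 4x)) / (2x), 0 < x <= 1/4.

    The Boltzmann sampler chooses 'internal node' with probability x*B(x);
    at the critical value x = 1/4 this is exactly 1/2."""
    b = (1 - (1 - 4 * x) ** 0.5) / (2 * x)
    return x * b

def boltzmann_tree(x, max_size, seed=0):
    """Free Boltzmann sampler for binary trees, with ceiled rejection:
    restart whenever the tree grows past max_size internal nodes.
    Leaves are None, internal nodes are pairs."""
    rng = random.Random(seed)
    p = node_probability(x)
    while True:
        size = 0
        def draw():
            nonlocal size
            if rng.random() < p:
                size += 1
                if size > max_size:
                    raise OverflowError   # too big: reject and restart
                return (draw(), draw())
            return None
        try:
            return draw()
        except OverflowError:
            continue
```

Taking x slightly below 1/4 keeps expected sizes finite; at x = 1/4 the size distribution is heavy-tailed and the ceiling does the real work.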
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_21", "@cite_1", "@cite_23", "@cite_25", "@cite_17" ], "mid": [ "2102379027", "2031872041", "2963504583", "2056068307", "2023605482", "2296468158", "2963337056" ], "abstract": [ "We investigate the asymptotic number of elements of size @math in a particular class of closed lambda-terms (so-called @math -terms) which are related to axiom systems of combinatory logic. By deriving a differential equation for the generating function of the counting sequence we obtain a recurrence relation which can be solved asymptotically. We derive differential equations for the generating functions of the counting sequences of other more general classes of terms as well: the class of @math -terms and that of closed lambda-terms. Using elementary arguments we obtain upper and lower estimates for the number of closed lambda-terms of size @math . Moreover, a recurrence relation is derived which allows an efficient computation of the counting sequence. @math -terms are discussed briefly.", "In mathematics, an enumeration of a set S is a bijective function from (an initial segment of) the natural numbers to S. We define \"functional enumerations\" as efficiently computable such bijections. This paper describes a theory of functional enumeration and provides an algebra of enumerations closed under sums, products, guarded recursion and bijections. We partition each enumerated set into numbered, finite subsets. We provide a generic enumeration such that the number of each part corresponds to the size of its values (measured in the number of constructors). We implement our ideas in a Haskell library called testing-feat, and make the source code freely available. Feat provides efficient \"random access\" to enumerated values. The primary application is property-based testing, where it is used to define both random sampling (for example QuickCheck generators) and exhaustive enumeration (in the style of SmallCheck). 
We claim that functional enumeration is the best option for automatically generating test cases from large groups of mutually recursive syntax tree types. As a case study we use Feat to test the pretty-printer of the Template Haskell library (uncovering several bugs).", "Lambda calculus is the basis of functional programming and higher order proof assistants. However, little is known about combinatorial properties of lambda terms, in particular, about their asymptotic distribution and random generation. This paper tries to answer questions like: How many terms of a given size are there? What is a ''typical'' structure of a simply typable term? Despite their ostensible simplicity, these questions still remain unanswered, whereas solutions to such problems are essential for testing compilers and optimizing programs whose expected efficiency depends on the size of terms. Our approach toward the afore-mentioned problems may be later extended to any language with bound variables, i.e., with scopes and declarations. This paper presents two complementary approaches: one, theoretical, uses complex analysis and generating functions, the other, experimental, is based on a generator of lambda-terms. Thanks to de Bruijn indices, we provide three families of formulas for the number of closed lambda terms of a given size and we give four relations between these numbers which have interesting combinatorial interpretations. As a by-product of the counting formulas, we design an algorithm for generating lambda terms. Performed tests provide us with experimental data, like the average depth of bound variables and the average number of head lambdas. We also create random generators for various sorts of terms. Thereafter, we conduct experiments that answer questions like: What is the ratio of simply typable terms among all terms? (Very small!) How are simply typable lambda terms distributed among all lambda terms? (A typable term almost always starts with an abstraction.) 
In this paper, abstractions and applications have size 1 and variables have size 0.", "Despite @l-calculus is now three quarters of a century old, no formula counting @l-terms has been proposed yet, and the combinatorics of @l-calculus is considered a hard problem. The difficulty lies in the fact that the recursive expression of the numbers of terms of size n with at most m free variables contains the number of terms of size n-1 with at most m+1 variables. This leads to complex recurrences that cannot be handled by classical analytic methods. Here based on de Bruijn indices (another presentation of @l-calculus) we propose several results on counting untyped lambda terms, i.e., on telling how many terms belong to such or such class, according to the size of the terms and or to the number of free variables. We extend the results to normal forms.", "This paper presents a bijection between combinatorial maps and a class of enriched trees, corresponding to a class of expression trees in some logical systems (constrained lambda terms). Starting from two alternative definitions of combinatorial maps: the classical definition by gluing half-edges, and a definition by non-ambiguous depth-first traversal, we derive non-trivial asymptotic expansions and efficient random generation of logic formulae (syntactic trees) in the BCI or BCK systems.", "We aim at the asymptotic enumeration of lambda-terms of a given size where the order of nesting of abstractions is bounded whereas the size is tending to infinity. This is done by means of a generating function approach and singularity analysis. The generating functions appear to be composed of nested square roots which exhibit unexpected phenomena. We derive the asymptotic number of such lambda-terms and it turns out that the order depends on the bound of the height. 
Furthermore, we present some observations when generating such lambda randomly and explain why powerful tools for random generation, such as Boltzmann samplers, face serious difficulties in generating lambda-terms.", "We present a quantitative analysis of various (syntactic and behavioral) prop- erties of random �-terms. Our main results show that asymptotically, almost all terms are strongly normalizing and that any fixed closed term almost never appears in a random term. Surprisingly, in combinatory logic (the translation of the �-calculus into combi- nators), the result is exactly opposite. We show that almost all terms are not strongly normalizing. This is due to the fact that any fixed combinator almost always appears in a random combinator." ] }
1404.4038
1652809728
This work presents a sound probabilistic method for enforcing adherence of the marginal probabilities of a multi-label model to automatically discovered deterministic relationships among labels. In particular we focus on discovering two kinds of relationships among the labels. The first one concerns pairwise positive entailment: pairs of labels, where the presence of one implies the presence of the other in all instances of a dataset. The second concerns exclusion: sets of labels that do not coexist in the same instances of the dataset. These relationships are represented with a Bayesian network. Marginal probabilities are entered as soft evidence in the network and adjusted through probabilistic inference. Our approach offers robust improvements in mean average precision compared to the standard binary relevance approach across all 12 datasets involved in our experiments. The discovery process helps interesting implicit knowledge to emerge, which could be useful in itself.
A Bayesian network structure to encode the relationships among labels as well as between the input attributes and the labels was presented in @cite_14 . The proposed algorithm, called LEAD, starts by building binary relevance models and continues by learning a Bayesian network on the residuals of these models. Then another set of binary models is learned, one for each label, but this time incorporating the parents of each label according to the constructed Bayesian network as additional features. For prediction, these models are queried top-down according to the constructed Bayesian network. LEAD implicitly discovers probabilistic relationships among labels by learning the Bayesian network structure directly from data, while our approach explicitly discovers deterministic positive entailment and mutual exclusion relationships among labels from the data, which are then used to define the network structure.
{ "cite_N": [ "@cite_14" ], "mid": [ "2129026672" ], "abstract": [ "In multi-label learning, each training example is associated with a set of labels and the task is to predict the proper label set for the unseen example. Due to the tremendous (exponential) number of possible label sets, the task of learning from multi-label examples is rather challenging. Therefore, the key to successful multi-label learning is how to effectively exploit correlations between different labels to facilitate the learning process. In this paper, we propose to use a Bayesian network structure to efficiently encode the conditional dependencies of the labels as well as the feature set, with the feature set as the common parent of all labels. To make it practical, we give an approximate yet efficient procedure to find such a network structure. With the help of this network, multi-label learning is decomposed into a series of single-label classification problems, where a classifier is constructed for each label by incorporating its parental labels as additional features. Label sets of unseen examples are predicted recursively according to the label ordering given by the network. Extensive experiments on a broad range of data sets validate the effectiveness of our approach against other well-established methods." ] }
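The top-down querying step described for LEAD can be sketched as below; the model/parent representation is a hypothetical simplification, and the per-label models here are plain callables rather than trained classifiers.

```python
def predict_with_label_network(x, models, parents, order):
    """Predict labels top-down along an (assumed acyclic) label network:
    each label's model sees the input features plus its parents' predictions,
    mirroring the chaining step described for LEAD. 'order' must be a
    topological ordering of the network's labels."""
    preds = {}
    for label in order:
        parent_vals = [preds[p] for p in parents[label]]  # already predicted
        preds[label] = models[label](x, parent_vals)
    return preds
```

A real implementation would plug in one trained binary classifier per label, with the parents' predicted labels appended to the feature vector.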
1404.4038
1652809728
This work presents a sound probabilistic method for enforcing adherence of the marginal probabilities of a multi-label model to automatically discovered deterministic relationships among labels. In particular we focus on discovering two kinds of relationships among the labels. The first one concerns pairwise positive entailment: pairs of labels, where the presence of one implies the presence of the other in all instances of a dataset. The second concerns exclusion: sets of labels that do not coexist in the same instances of the dataset. These relationships are represented with a Bayesian network. Marginal probabilities are entered as soft evidence in the network and adjusted through probabilistic inference. Our approach offers robust improvements in mean average precision compared to the standard binary relevance approach across all 12 datasets involved in our experiments. The discovery process helps interesting implicit knowledge to emerge, which could be useful in itself.
A method for uncovering deterministic causal structures is introduced in @cite_9 . Similarly to our work, it aims at constructing a Bayesian network out of automatically discovered deterministic relationships. An important difference is that it does not consider latent variables, such as those in our representation of exclusions and in our treatment of unaccounted causes of a label via leak nodes. It therefore requires relationships to be supported by the full dataset, which limits its practical usefulness, as such relationships rarely appear in real-world data.
{ "cite_N": [ "@cite_9" ], "mid": [ "2171533214" ], "abstract": [ "While standard procedures of causal reasoning as procedures analyzing causal Bayesian networks are custom-built for (non-deterministic) probabilistic structures, this paper introduces a Boolean procedure that uncovers deterministic causal structures. Contrary to existing Boolean methodologies, the procedure advanced here successfully analyzes structures of arbitrary complexity. It roughly involves three parts: first, deterministic dependencies are identified in the data; second, these dependencies are suitably minimalized in order to eliminate redundancies; and third, one or—in case of ambiguities—more than one causal structure is assigned to the minimalized deterministic dependencies." ] }
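The discovery of pairwise positive entailments and exclusions described in the abstract can be sketched as a scan over a binary label matrix; the function and variable names are illustrative, and the statistical support thresholds a real implementation would need are omitted.

```python
def discover_relations(rows, labels):
    """Find pairwise positive entailments (label i present => label j present
    in every instance) and exclusions (labels i and j each occur somewhere
    but never co-occur) in a binary label matrix."""
    entails, excludes = [], []
    for i in range(len(labels)):
        support = [r for r in rows if r[i]]  # instances where label i is on
        for j in range(len(labels)):
            if i == j:
                continue
            if support and all(r[j] for r in support):
                entails.append((labels[i], labels[j]))
            if (i < j and support and any(r[j] for r in rows)
                    and not any(r[i] and r[j] for r in rows)):
                excludes.append((labels[i], labels[j]))
    return entails, excludes
```

The discovered pairs could then define the structure of the Bayesian network into which marginals are entered as soft evidence.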
1404.3186
2144575244
We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result -- if there exists one -- into a source code patch. Nopol repairs object oriented code and allows the patches to contain nullness checks as well as specific method calls.
Test-suite based program repair generates patches and examines them against a given test suite. Le @cite_16 propose GenProg, a test-suite based program repair approach using genetic programming. In GenProg, a program is viewed as an Abstract Syntax Tree (AST), while a patch is a newly generated AST obtained by weighting statements in the program. Based on genetic programming, candidate patches are generated over multiple trials. Then, for each candidate patch, the given test suite is executed to identify a patch that makes all test cases pass. The role of genetic programming is to obtain new ASTs by copying and replacing nodes in the original AST. An evaluation on 16 C programs shows that GenProg can generate patches with an average success rate of 77 percent.
{ "cite_N": [ "@cite_16" ], "mid": [ "2145373440" ], "abstract": [ "This paper describes GenProg, an automated method for repairing defects in off-the-shelf, legacy programs without formal specifications, program annotations, or special coding practices. GenProg uses an extended form of genetic programming to evolve a program variant that retains required functionality but is not susceptible to a given defect, using existing test suites to encode both the defect and required functionality. Structural differencing algorithms and delta debugging reduce the difference between this variant and the original program to a minimal repair. We describe the algorithm and report experimental results of its success on 16 programs totaling 1.25 M lines of C code and 120K lines of module code, spanning eight classes of defects, in 357 seconds, on average. We analyze the generated repairs qualitatively and quantitatively to demonstrate that the process efficiently produces evolved programs that repair the defect, are not fragile input memorizations, and do not lead to serious degradation in functionality." ] }
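A toy version of GenProg's generate-and-validate loop might look like the sketch below; statements are modeled as unary functions and mutation is a single random replacement, which greatly simplifies GenProg's actual AST operators, statement weighting, and genetic-programming fitness function.

```python
import random

def run(stmts, x):
    """Execute a toy 'program': a list of unary statements applied in order."""
    for stmt in stmts:
        x = stmt(x)
    return x

def genprog_style_repair(stmts, tests, pool, seed=0, budget=500):
    """Toy GenProg-style search: mutate the statement list by swapping one
    statement for one drawn from a reuse pool, keeping the first variant
    that passes every test. Returns None if the budget is exhausted."""
    rng = random.Random(seed)
    for _ in range(budget):
        candidate = list(stmts)
        candidate[rng.randrange(len(candidate))] = rng.choice(pool)
        if all(test(candidate) for test in tests):
            return candidate
    return None
```

As in GenProg, repair ingredients are drawn from code that already exists (the pool), and the test suite is the only oracle of correctness.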
1404.3186
2144575244
We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result -- if there exists one -- into a source code patch. Nopol repairs object oriented code and allows the patches to contain nullness checks as well as specific method calls.
@cite_6 propose SemFix, a program repair approach via semantic analysis. In contrast to the genetic programming of GenProg, SemFix generates patches by combining symbolic execution, constraint solving, and program synthesis. As mentioned in Section , SemFix generates constraints by formulating passing test cases and then solves these constraints by traversing a search space of repair expressions. Compared with GenProg, SemFix reports a higher success rate on C programs at a lower time cost. In this paper, we also focus on program repair by leveraging constraint solving and program synthesis. The key difference compared to SemFix is that Nopol is able to repair missing preconditions, a kind of fault that is not handled by SemFix.
{ "cite_N": [ "@cite_6" ], "mid": [ "2016027000" ], "abstract": [ "Debugging consumes significant time and effort in any major software development project. Moreover, even after the root cause of a bug is identified, fixing the bug is non-trivial. Given this situation, automated program repair methods are of value. In this paper, we present an automated repair method based on symbolic execution, constraint solving and program synthesis. In our approach, the requirement on the repaired code to pass a given set of tests is formulated as a constraint. Such a constraint is then solved by iterating over a layered space of repair expressions, layered by the complexity of the repair code. We compare our method with recently proposed genetic programming based repair on SIR programs with seeded bugs, as well as fragments of GNU Coreutils with real bugs. On these subjects, our approach reports a higher success-rate than genetic programming based repair, and produces a repair faster." ] }
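SemFix's idea of turning recorded test executions into a constraint over candidate repair expressions can be illustrated with a brute-force search standing in for the SMT solver; the predicate space and the function signature below are invented for illustration only.

```python
import itertools

def synthesize_condition(variables, io_examples):
    """Return a predicate 'var OP const' that reproduces the expected branch
    outcome on every recorded execution (env -> expected bool), or None.
    A brute-force enumeration over a tiny expression space stands in for
    SemFix's symbolic execution plus SMT solving."""
    ops = {"<": lambda a, b: a < b, "<=": lambda a, b: a <= b,
           "==": lambda a, b: a == b, ">": lambda a, b: a > b}
    for var, (name, op), c in itertools.product(variables, ops.items(),
                                                range(-2, 3)):
        if all(op(env[var], c) == expected for env, expected in io_examples):
            return f"{var} {name} {c}"
    return None
```

Note that, like SemFix, this returns the first expression consistent with the observed executions, which need not be the expression the developer had in mind.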
1404.3186
2144575244
We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result -- if there exists one -- into a source code patch. Nopol repairs object oriented code and allows the patches to contain nullness checks as well as specific method calls.
@cite_3 propose Par, a repair approach using fix patterns that represent common ways of fixing common bugs. These fix patterns help avoid the nonsensical patches caused by the randomness of some mutation operators. Based on the fix patterns, 119 bugs are examined for patch generation. In this work, the evaluation of patches is conducted by 253 human subjects, including 89 students and 164 developers.
{ "cite_N": [ "@cite_3" ], "mid": [ "2076719273" ], "abstract": [ "Patch generation is an essential software maintenance task because most software systems inevitably have bugs that need to be fixed. Unfortunately, human resources are often insufficient to fix all reported and known bugs. To address this issue, several automated patch generation techniques have been proposed. In particular, a genetic-programming-based patch generation technique, GenProg, proposed by , has shown promising results. However, these techniques can generate nonsensical patches due to the randomness of their mutation operations. To address this limitation, we propose a novel patch generation approach, Pattern-based Automatic program Repair (Par), using fix patterns learned from existing human-written patches. We manually inspected more than 60,000 human-written patches and found there are several common fix patterns. Our approach leverages these fix patterns to generate program patches automatically. We experimentally evaluated Par on 119 real bugs. In addition, a user study involving 89 students and 164 developers confirmed that patches generated by our approach are more acceptable than those generated by GenProg. Par successfully generated patches for 27 out of 119 bugs, while GenProg was successful for only 16 bugs." ] }
1404.3186
2144575244
We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result -- if there exists one -- into a source code patch. Nopol repairs object oriented code and allows the patches to contain nullness checks as well as specific method calls.
Martinez and Monperrus @cite_10 mine historical repair actions to reason about future actions with a probabilistic model. Based on a fine granularity of abstract syntax trees, this work analyzes over 62 thousand versioning transactions in 14 repositories of open-source Java projects to collect probabilistic distributions of repair actions. Such distributions can be used as prior knowledge to guide program repair. Program synthesis is a topic related to program repair. Program synthesis aims to form a new program by combining existing program components. @cite_17 mines program oracles based on examples and employs SMT solvers to synthesize constraints. In this work, manual or formal specifications are replaced by input-output oracles. They evaluate this work on 25 benchmark examples in program deobfuscation. Their follow-up work @cite_4 addresses the same problem by encoding the synthesis constraint as a first-order logic formula. The maximum size of the constraint is quadratic in the number of given components.
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_17" ], "mid": [ "2168156367", "2162960800", "" ], "abstract": [ "This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.", "We consider the problem of synthesizing loop-free programs that implement a desired functionality using components from a given library. Specifications of the desired functionality and the library components are provided as logical relations between their respective input and output variables. The library components can be used at most once, and hence the library is required to contain a reasonable overapproximation of the multiset of the components required. We solve the above component-based synthesis problem using a constraint-based approach that involves first generating a synthesis constraint, and then solving the constraint. The synthesis constraint is a first-order ∃∀ logic formula whose size is quadratic in the number of components. We present a novel algorithm for solving such constraints. Our algorithm is based on counterexample guided iterative synthesis paradigm and uses off-the-shelf SMT solvers. We present experimental results that show that our tool Brahma can efficiently synthesize highly nontrivial 10-20 line loop-free bitvector programs. These programs represent a state space of approximately 2010 programs, and are beyond the reach of the other tools based on sketching and superoptimization.", "" ] }
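The component-based synthesis problem described for @cite_4 (single-use components assembled into a loop-free program matching input-output examples) can be illustrated by plain enumeration; Brahma instead encodes the whole search as one first-order constraint handed to an SMT solver.

```python
from itertools import permutations

def synthesize_from_components(components, io_pairs):
    """Try each ordering of the single-use components as a straight-line
    pipeline; return the first ordering matching all (input, output) pairs,
    or None. A didactic stand-in for Brahma's constraint-based search."""
    for perm in permutations(components):
        def run(x, fs=perm):  # default arg binds the current permutation
            for f in fs:
                x = f(x)
            return x
        if all(run(i) == o for i, o in io_pairs):
            return perm
    return None
```

The enumeration is factorial in the number of components, which is exactly the blow-up that the constraint encoding avoids.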
1404.3186
2144575244
We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result -- if there exists one -- into a source code patch. Nopol repairs object oriented code and allows the patches to contain nullness checks as well as specific method calls.
In our work, fault localization is used as a step to provide faulty statements. The goal of fault localization @cite_1 is to rank suspicious statements (or blocks, classes) in order to locate bugs. A general framework of fault localization is to collect the program spectrum (a matrix of testing results based on a given test suite) and to sort statements in the spectrum with specific metrics (e.g., Tarantula @cite_1 and Ochiai @cite_9 ). Among existing metrics in fault localization, Ochiai @cite_9 has been evaluated as one of the most effective. In Ochiai, each statement is assigned a suspiciousness value, which is the Ochiai index between the number of failed test cases and the number of covered test cases.
{ "cite_N": [ "@cite_9", "@cite_1" ], "mid": [ "2128049346", "1971137495" ], "abstract": [ "Automated diagnosis of software faults can improve the efficiency of the debugging process, and is therefore an important technique for the development of dependable software. In this paper we study different similarity coefficients that are applied in the context of a program spectral approach to software fault localization (single programming mistakes). The coefficients studied are taken from the systems diagnosis automated debugging tools Pinpoint, Tarantula, and AMPLE, and from the molecular biology domain (the Ochiai coefficient). We evaluate these coefficients on the Siemens Suite of benchmark faults, and assess their effectiveness in terms of the position of the actual fault in the probability ranking of fault candidates produced by the diagnosis technique. Our experiments indicate that the Ochiai coefficient consistently outperforms the coefficients currently used by the tools mentioned. In terms of the amount of code that needs to be inspected, this coefficient improves 5% on average over the next best technique, and up to 30% in specific cases", "Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure.
The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has had and can be expected to have." ] }
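The Ochiai ranking described above (suspiciousness of a statement as the Ochiai index of its failing-test coverage) can be computed directly from a program spectrum; the data layout below is an illustrative assumption.

```python
import math

def ochiai_ranking(coverage, outcomes):
    """Rank statements by Ochiai suspiciousness.
    coverage[t][s] is truthy iff test t executes statement s;
    outcomes[t] is True iff test t fails.
    Returns (score, statement_index) pairs, most suspicious first."""
    total_failed = sum(outcomes)
    scores = []
    for s in range(len(coverage[0])):
        ef = sum(1 for t, row in enumerate(coverage) if row[s] and outcomes[t])
        ep = sum(1 for t, row in enumerate(coverage) if row[s] and not outcomes[t])
        denom = math.sqrt(total_failed * (ef + ep))  # Ochiai denominator
        scores.append((ef / denom if denom else 0.0, s))
    return sorted(scores, reverse=True)
```

A statement covered by every failing test and no passing test gets the maximal score 1.0, which is why Ochiai tends to rank the faulty statement near the top.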
1404.3461
1837026580
In this paper, we propose a 2D-based partition method for solving the problem of Ranking under Team Context (RTC) on datasets without a priori knowledge. We first map the data into 2D space using each tuple's minimum and maximum value among all dimensions. Then we construct window queries with consideration of the current team context. Besides, during the query mapping procedure, we can pre-prune some tuples which are not top-ranked ones. This pre-classification step defers processing of those tuples and saves cost while providing solutions for the problem. Experiments show that our algorithm performs well, especially on large datasets, while guaranteeing correctness.
LSH (Locality-Sensitive Hashing), proposed in @cite_13, was tailored to the problem of similarity search in high-dimensional space. Its hash function maximizes the probability of collision of two similar objects. To reduce the space cost, Satuluri et al. proposed a tunable LSH method called BayesLSH @cite_12. Moreover, PLSH extended LSH to a parallel computing environment, achieving higher performance on large-scale stream data. Indexing high-dimensional data can enhance the performance of NN searching, and much previous work has discussed this. Related partition strategies fall into two categories: one partitions data based on clustering techniques, while the other exploits statistical information of the data.
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "1502916507", "2156855109" ], "abstract": [ "The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the \"curse of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).", "Given a collection of objects and an associated similarity measure, the all-pairs similarity search problem asks us to find all pairs of objects with similarity greater than a certain user-specified threshold. Locality-sensitive hashing (LSH) based methods have become a very popular approach for this problem. However, most such methods only use LSH for the first phase of similarity search - i.e. efficient indexing for candidate generation. In this paper, we present BayesLSH, a principled Bayesian algorithm for the subsequent phase of similarity search - performing candidate pruning and similarity estimation using LSH. A simpler variant, BayesLSH-Lite, which calculates similarities exactly, is also presented. Our algorithms are able to quickly prune away a large majority of the false positive candidate pairs, leading to significant speedups over baseline approaches. For BayesLSH, we also provide probabilistic guarantees on the quality of the output, both in terms of accuracy and recall. Finally, the quality of BayesLSH's output can be easily tuned and does not require any manual setting of the number of hashes to use for similarity estimation, unlike standard approaches. For two state-of-the-art candidate generation algorithms, AllPairs and LSH, BayesLSH enables significant speedups, typically in the range 2x-20x for a wide variety of datasets." ] }
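The core LSH idea from @cite_13 — hash points so that nearby objects collide with much higher probability than distant ones — can be sketched with random-hyperplane signatures for angular similarity; this particular hash family is one common instantiation chosen for illustration, not necessarily the family used in the cited papers.

```python
import random

def make_planes(n_planes, dim, seed=0):
    """Draw random Gaussian hyperplanes (normal vectors) for the hash family."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def hyperplane_hashes(vec, planes):
    """Signature of a vector: one sign bit per hyperplane. Vectors with a
    small angle between them agree on most bits, so they land in the same
    bucket with high probability."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                 for plane in planes)
```

Buckets keyed by these signatures give the candidate-generation phase; a second phase (e.g., BayesLSH-style pruning) would then estimate similarities within each bucket.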
1404.3461
1837026580
In this paper, we propose a 2D-based partition method for solving the problem of Ranking under Team Context (RTC) on datasets without a priori knowledge. We first map the data into 2D space using each tuple's minimum and maximum value among all dimensions. Then we construct window queries with consideration of the current team context. Besides, during the query mapping procedure, we can pre-prune some tuples which are not top-ranked ones. This pre-classification step defers processing of those tuples and saves cost while providing solutions for the problem. Experiments show that our algorithm performs well, especially on large datasets, while guaranteeing correctness.
Recent research conducted in @cite_9 integrates features of tree-like index structures and partition strategies. The PL-tree @cite_9 recursively partitions the data into labeled hypercubes and constructs the tree dynamically. However, its main strength lies in providing efficient range queries.
{ "cite_N": [ "@cite_9" ], "mid": [ "160836526" ], "abstract": [ "The quest for processing data in high-dimensional space has resulted in a number of innovative indexing mechanisms. Choosing an appropriate indexing method for a given set of data requires careful consideration of data properties, data construction methods, and query types. We present a new indexing method to support efficient point queries, range queries, and k-nearest neighbor queries. Our method indexes objects dynamically using algebraic techniques, and it can substantially reduce the negative impacts of the \"curse of dimensionality\". In particular, our method partitions the data space recursively into hypercubes of certain capacity and labels each hypercube using the Cantor pairing function, so that all objects in the same hypercube have the same label. The bijective property and the computational efficiency of the Cantor pairing function make it possible to efficiently map between high-dimensional vectors and scalar labels. The partitioning and labeling process splits a subspace if the data items contained in it exceed its capacity. From the data structure point of view, our method constructs a tree where each parent node contains a number of labels and child pointers, and we call it a PL-tree . We compare our method with popular indexing algorithms including R*-tree, X-tree, quad-tree, and iDistance. Our numerical results show that the dynamic PL-tree indexing significantly outperforms the existing indexing mechanisms." ] }
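The Cantor pairing function that the PL-tree uses to label hypercubes is simple to state and to invert; the d-dimensional extension by folding the pairing function is only hinted at here.

```python
def cantor_pair(k1, k2):
    """Bijectively map a pair of non-negative integers to a single label,
    as the PL-tree does when naming hypercubes. For d dimensions the
    pairing can be folded: pair(pair(k1, k2), k3), and so on."""
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def cantor_unpair(z):
    """Invert the pairing: recover the diagonal index w, then k2 and k1."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    k2 = z - w * (w + 1) // 2
    return w - k2, k2
```

Because the mapping is a bijection, every hypercube gets a unique scalar label, and labels can be decoded back to coordinates without a lookup table.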
1404.3461
1837026580
In this paper, we propose a 2D-based partition method for solving the problem of Ranking under Team Context (RTC) on datasets without a priori knowledge. We first map the data into 2D space using each tuple's minimum and maximum value among all dimensions. Then we construct window queries with consideration of the current team context. Besides, during the query mapping procedure, we can pre-prune some tuples which are not top-ranked ones. This pre-classification step defers processing of those tuples and saves cost while providing solutions for the problem. Experiments show that our algorithm performs well, especially on large datasets, while guaranteeing correctness.
Besides concentrating on the accuracy of the result set, the research in @cite_5 @cite_15 customizes NN queries for different contexts to provide higher-quality results. In this paper, we plan to design a method for solving the NN problem under the team context defined in @cite_6 .
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_6" ], "mid": [ "", "2033576682", "1925368782" ], "abstract": [ "", "In the last years, recommender systems have achieved a great popularity. Many different techniques have been developed and applied to this field. However, in many cases the algorithms do not obtain the expected results. In particular, when the applied model does not fit the real data the results are especially bad. This happens because many times models are directly applied to a domain without a previous analysis of the data. In this work we study the most popular datasets in the movie recommendation domain, in order to understand how the users behave in this particular context. We have found some remarkable facts that question the utility of the similarity measures traditionally used in k-Nearest Neighbors (kNN) algorithms. These findings can be useful in order to develop new algorithms. In particular, we modify traditional kNN algorithms by introducing a new similarity measure specially suited for sparse contexts, where users have rated very few items. Our experiments show slight improvements in prediction accuracy, which proves the importance of a thorough dataset analysis as a previous step to any algorithm development.", "Context-aware database has drawn increasing attention from both industry and academia recently by taking users' current situation and environment into consideration. However, most of the literature focus on individual context, overlooking the team users. In this paper, we investigate how to integrate team context into database query process to help the users' get top-ranked database tuples and make the team more competitive. We introduce naive and optimized query algorithm to select the suitable records and show that they output the same results while the latter is more computational efficient. Extensive empirical studies are conducted to evaluate the query approaches and demonstrate their effectiveness and efficiency." ] }
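The 2D mapping sketched in the abstract (each tuple projected to its minimum and maximum over all dimensions) might look like the following; the window predicate is an illustrative assumption, since the paper's exact query construction depends on the current team context.

```python
def to_2d(tuples):
    """Project each d-dimensional tuple to the 2D point (min, max) over its
    dimensions, as in the paper's mapping step."""
    return [(min(t), max(t)) for t in tuples]

def window_query(points, lo, hi):
    """Toy window predicate: keep indices of tuples whose projected point
    lies in the window, i.e., min >= lo and max <= hi."""
    return [i for i, (mn, mx) in enumerate(points) if mn >= lo and mx <= hi]
```

Tuples falling outside every window constructed for the team context can be pre-pruned, which is the cost saving the abstract refers to.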
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Advances have been made in languages for formally specifying information-flow properties in hardware, such as Caisson @cite_1 . Tiwari et al. developed and verified an information-flow secure processor and microkernel, but that was not in the context of a mobile-phone SoC and involved radical modifications to the processor compared to those required by NoC-based security mechanisms @cite_2 . Volpano proposed dividing memory accesses in time to limit covert channels @cite_67 . Information-flow techniques could be generally applicable to help verify the security of the trusted components identified in .
{ "cite_N": [ "@cite_67", "@cite_1", "@cite_2" ], "mid": [ "", "2150619336", "2151071112" ], "abstract": [ "", "Information flow is an important security property that must be incorporated from the ground up, including at hardware design time, to provide a formal basis for a system's root of trust. We incorporate insights and techniques from designing information-flow secure programming languages to provide a new perspective on designing secure hardware. We describe a new hardware description language, Caisson, that combines domain-specific abstractions common to hardware design with insights from type-based techniques used in secure programming languages. The proper combination of these elements allows for an expressive, provably-secure HDL that operates at a familiar level of abstraction to the target audience of the language, hardware architects. We have implemented a compiler for Caisson that translates designs into Verilog and then synthesizes the designs using existing tools. As an example of Caisson's usefulness we have addressed an open problem in secure hardware by creating the first-ever provably information-flow secure processor with micro-architectural features including pipelining and cache. We synthesize the secure processor and empirically compare it in terms of chip area, power consumption, and clock frequency with both a standard (insecure) commercial processor and also a processor augmented at the gate level to dynamically track information flow. Our processor is competitive with the insecure processor and significantly better than dynamic tracking.", "High assurance systems used in avionics, medical implants, and cryptographic devices often rely on a small trusted base of hardware and software to manage the rest of the system. Crafting the core of such a system in a way that achieves flexibility, security, and performance requires a careful balancing act. 
Simple static primitives with hard partitions of space and time are easier to analyze formally, but strict approaches to the problem at the hardware level have been extremely restrictive, failing to allow even the simplest of dynamic behaviors to be expressed. Our approach to this problem is to construct a minimal but configurable architectural skeleton. This skeleton couples a critical slice of the low level hardware implementation with a microkernel in a way that allows information flow properties of the entire construction to be statically verified all the way down to its gate-level implementation. This strict structure is then made usable by a runtime system that delivers more traditional services (e.g. communication interfaces and long-living contexts) in a way that is decoupled from the information flow properties of the skeleton. To test the viability of this approach we design, test, and statically verify the information-flow security of a hardware software system complete with support for unbounded operation, inter-process communication, pipelined operation, and I O with traditional devices. The resulting system is provably sound even when adversaries are allowed to execute arbitrary code on the machine, yet is flexible enough to allow caching, pipelining, and other common case optimizations." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Other techniques are complementary to these lines of advancement in that they offer approaches for satisfying the assumptions of our threat model. ``Moats and Drawbridges'' is the name of a technique for physically isolating components of an FPGA and connecting them through constrained interfaces so that they can be analyzed independently @cite_10 @cite_68 .
{ "cite_N": [ "@cite_68", "@cite_10" ], "mid": [ "164289057", "2122711125" ], "abstract": [ "The purpose of Handbook of FPGA Design Security is to provide a practical approach to managing security in FPGA designs for researchers and practitioners in the electronic design automation (EDA) and FPGA communities, including corporations, industrial and government research labs, and academics. Handbook of FPGA Design Security combines theoretical underpinnings with a practical design approach and worked examples for combating real world threats. To address the spectrum of lifecycle and operational threats against FPGA systems, a holistic view of FPGA security is presented, from formal top level specification to low level policy enforcement mechanisms. This perspective integrates recent advances in the fields of computer security theory, languages, compilers, and hardware. The net effect is a diverse set of static and runtime techniques that, working in cooperation, facilitate the composition of robust, dependable, and trustworthy systems using commodity components.", "Blurring the line between software and hardware, reconfigurable devices strike a balance between the raw high speed of custom silicon and the post-fabrication flexibility of general-purpose processors. While this flexibility is a boon for embedded system developers, who can now rapidly prototype and deploy solutions with performance approaching custom designs, this results in a system development methodology where functionality is stitched together from a variety of \"soft IP cores,\" often provided by multiple vendors with different levels of trust. Unlike traditional software where resources are managed by an operating system, soft IP cores necessarily have very fine grain control over the underlying hardware. To address this problem, the embedded systems community requires novel security primitives which address the realities of modern reconfigurable hardware. 
We propose an isolation primitive, moats and drawbridges, that are built around four design properties: logical isolation, interconnect traceability, secure reconfigurable broadcast, and configuration scrubbing. Each of these is a fundamental operation with easily understood formal properties, yet maps cleanly and efficiently to a wide variety of reconfigurable devices. We carefully quantify the required overheads on real FPGAs and demonstrate the utility of our methods by applying them to the practical problem of memory protection." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
SurfNoC schedules multiple protection domains onto NoC resources in such a way that non-interference between the domains can be verified at the gate level @cite_26 . This could complement by preventing unauthorized communications channels between domains from being constructed in the NoC fabric.
{ "cite_N": [ "@cite_26" ], "mid": [ "2141273270" ], "abstract": [ "As multicore processors find increasing adoption in domains such as aerospace and medical devices where failures have the potential to be catastrophic, strong performance isolation and security become first-class design constraints. When cores are used to run separate pieces of the system, strong time and space partitioning can help provide such guarantees. However, as the number of partitions or the asymmetry in partition bandwidth allocations grows, the additional latency incurred by time multiplexing the network can significantly impact performance. In this paper, we introduce SurfNoC, an on-chip network that significantly reduces the latency incurred by temporal partitioning. By carefully scheduling the network into waves that flow across the interconnect, data from different domains carried by these waves are strictly non-interfering while avoiding the significant overheads associated with cycle-by-cycle time multiplexing. We describe the scheduling policy and router microarchitecture changes required, and evaluate the information-flow security of a synthesizable implementation through gate-level information flow analysis. When comparing our approach for varying numbers of domains and network sizes, we find that in many cases SurfNoC can reduce the latency overhead of implementing cycle-level non-interference by up to 85%." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Richards and Lester defined a shallow, monadic embedding of a subset of Bluespec into PVS and performed demonstrative proofs using the PVS theorem prover on a 50-line Bluespec design @cite_62 . Their techniques may be complementary to our model checking approach for proving properties that are amenable to theorem proving. Katelman defined a deep embedding of BTRS into Maude @cite_3 . BTRS is an intermediate language used by the Bluespec compiler. Our shallow embedding has the potential for higher performance, since we translate Bluespec rules into native Maude rules. Bluespec compilers could potentially output multiple BTRS representations for a single design, complicating verification. Finally, our embedding corresponds more closely to Bluespec code, which could make it easier to understand and respond to output from the verification tools.
{ "cite_N": [ "@cite_62", "@cite_3" ], "mid": [ "2087814790", "2242425235" ], "abstract": [ "We embed a non-trivial subset of Bluespec SystemVerilog (BSV) in the higher order logic of the PVS theorem prover. Owing to the clean semantics of BSV, application of monadic techniques leads to a surprisingly elegant embedding, in which hardware designs are translated into logic almost verbatim, preserving types and language constructs. The resulting specifications are compatible with the built-in model checker of PVS, which can automatically prove an important class of temporal logic theorems, and can also be used in conjunction with the powerful proof strategies of PVS, including automatic predicate abstraction, to verify a broader class of properties than can be achieved with model checking alone. Bluespec SystemVerilog is a hardware description language based on the guarded action model of concurrency. It has an elegant semantics, which has previously been shown to support design verification by hand proof: to date, however, little work has been conducted on the application of automated reasoning to BSV designs.", "This dissertation perceives a similarity between two activities: that of coordinating the search for simulation traces toward reaching verification closure, and that of coordinating the search for a proof within a theorem prover. The programmatic coordination of simulation is difficult with existing tools for digital circuit verification because stimuli generation, simulation execution, and analysis of simulation results are all decoupled. A new programming language to address this problem, analogous to the mechanism for orchestrating proof search tactics within a theorem prover, is defined wherein device simulation is made a first-class notion. 
This meta-language for functional verification is first formalized in a parametric way over hardware description languages using rewriting logic, and subsequently a more richly featured software tool for Verilog designs, implemented as an embedded domain-specific language in Haskell, is described and used to demonstrate the novelty of the programming language and to conduct two case studies. Additionally, three hardware description languages are given formal semantics using rewriting logic and we demonstrate the use of executable rewriting logic tools to formally analyze devices implemented in those languages." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
The line of work about NoC access control started with the Security Enhanced Communication Architecture (SECA) @cite_69 , a mechanism for monitoring and controlling bus data transfers using a variety of address- and value-based stateful and stateless policies. Those policies provide a high level of expressiveness at the cost of increased complexity compared to , which would complicate formal analysis. SECA is validated using a multi-core mobile phone SoC containing a cryptoprocessor that shares a RAM with the other cores and requires protection for its portion of the RAM.
{ "cite_N": [ "@cite_69" ], "mid": [ "2025718175" ], "abstract": [ "In this work, we propose and investigate the idea of enhancing a System-on-Chip (SoC) communication architecture (the fabric that integrates system components and carries the communication traffic between them) to facilitate higher security. We observe that a wide range of common security attacks are manifested as abnormalities in the system-level communication traffic. Therefore, the communication architecture, with its global system-level visibility, can be used to detect them. The communication architecture can also effectively react to security attacks by disallowing the offending communication transactions, or by notifying appropriate components of a security violation. We describe the general principles involved in a security-enhanced communication architecture (SECA) and show how several security objectives can be encoded in terms of policies that govern the inter-component communication traffic. We detail the implementation of SECA in the context of a popular commercial on-chip bus architecture (the AMBA architecture from ARM) through a combination of a centralized security enforcement module, and enhancements to the bus interfaces of system components. We illustrate how SECA can be used to enhance embedded system security in several application scenarios. A simple instance of SECA has been implemented in a commercial application processor SoC for mobile phones. We provide results of experiments performed to validate the proposed concepts through system-level simulation, and evaluate their overheads through hardware implementation using a commercial design flow." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Cotret et al. extended distributed address-based controls with support for encrypting and monitoring the integrity of data sent to memory outside the SoC @cite_27 @cite_56 . These mechanisms can address more powerful threats, but introduce additional area overhead and latency.
{ "cite_N": [ "@cite_27", "@cite_56" ], "mid": [ "2149040906", "2121030089" ], "abstract": [ "The need for security in embedded systems has strongly increased since several years. Nowadays, it is possible to integrate several processors in a single chip. The design of such multiprocessor systems-on-chip (MPSoC) must be done with a lot of care as the execution of applications may lead to potential vulnerabilities such as revelation of critical data and private information. Thus it becomes mandatory to deal with security issues all along the design cycle of the MPSoC in order to guarantee a global protection. Among the critical points, the protection of the communications is very sensible as most of the data are exchanged through the communication architecture of the system. This paper targets this point and proposes a solution with distributed enhancements to secure data exchanges and to monitor communications within a MPSoC. In order to validate our contribution, a case study based on a generic multiprocessor architecture is considered.", "Security in MPSoC is gaining an increasing attention since several years. Digital convergence is one of the numerous reasons explaining such a focus on embedded systems as much sensitive and secret data are now stored, manipulated and exchanged in these systems. Most solutions are currently built at the software level, we believe hardware enhancements also play a major role in system protection. One strategic point is the communication layer as all data goes through it. Monitoring and controlling communications enable to fend off attacks before system corruption. In this work, we propose an efficient solution with several hardware enhancements to secure data exchanges in a bus-based MPSoC. Our approach relies on low complexity distributed firewalls connected to all critical IPs of the system. 
Designers can deploy different security policies (access right, data format, authentication, confidentiality) in order to protect the system in a flexible way. To illustrate the benefit of such a solution, implementations are discussed for different MPSoCs implemented on Xilinx Virtex-6 FPGAs. Results demonstrate a reduction of up to 33% in terms of latency overhead compared to existing efforts." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Some NoC-based approaches use the main NoC to carry policy configuration traffic, which is efficient and flexible @cite_32 . One design uses dedicated virtual channels on a shared NoC @cite_14 . dedicates physically-separate interconnects between the integrity core and interposers, making it simpler to determine that only an authorized entity, the integrity core, can specify the policy.
{ "cite_N": [ "@cite_14", "@cite_32" ], "mid": [ "2136613319", "2091178531" ], "abstract": [ "This paper presents a first solution for NoC-based communication security. Our proposal is based on simple network interfaces implementing distributed security rule checking and a separation between security and application channels. We detail a four- step security policy and show how, with usual NOC techniques, a designer can protect a reconfigurable SOC against attacks that result in abnormal communication behaviors. We introduce a new kind of relative and self-complemented street-sign routing adapted to path-based IP identification and reconfigurable architectures needs. Our approach is illustrated with a synthetic set-top box, we also show how to transform a real-life bus-based security solution to match our NOC-based architecture", "Security is gaining increasing relevance in the development of embedded devices. Towards a secure system at each level of design, this paper addresses security aspects related to network-on-chip (NoC) architectures, foreseen as the communication infrastructure of next-generation embedded devices. In the context of NoC-based multiprocessor systems, we focus on the topic, not yet thoroughly faced, of data protection. In this paper, we present a secure NoC architecture composed of a set of data protection units (DPUs) implemented within the network interfaces. The run-time configuration of the programmable part of the DPUs is managed by a central unit, the network security manager (NSM). The DPU, similar to a firewall, can check and limit the access rights (none, read, write, or both) of processors accessing data and instructions in a shared memory - in particular distinguishing between the operating roles (supervisor user and secure unsecure) of the processing elements. 
We explore different alternative implementations for the DPU and demonstrate how this unit does not affect the network latency if the memory request has the appropriate rights. We also focus on the dynamic updating of the DPUs to support their utilization in dynamic environments, and on the utilization of authentication techniques to increase the level of security." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
NoC-MPU involves an MPU controlling each master's access to the NoC @cite_22 . Each MPU uses policies stored in NoC-accessible memory as page tables and cached in permission lookaside buffers. The policies are parameterized on memory addresses and compartment identifiers. Compartment identifiers can distinguish software on a single physical master device. The authors envision a ``global trusted agent'' running on a dedicated processor to configure page tables, which is analogous to the integrity kernel. Supporting in-memory page tables increases the complexity of each MPU compared to interposers.
{ "cite_N": [ "@cite_22" ], "mid": [ "2133628626" ], "abstract": [ "For many embedded systems, data protection is becoming a major issue. On those systems, processors are often heterogeneous and prevent from deploying a common, trusted hypervisor on all of them. Multiple native software stacks are thus bound to share the resources without protection between them. NoC-MPU is a Memory Protection Unit allowing to support the secure and flexible co-hosting of multiple native software stacks running in multiple protection domains, on any shared memory MP-SoC using a NoC. This paper presents a complete hardware architecture of this NoC-MPU mechanism, along with a software trusted model organization." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Kim and Villasenor focused on the threat of Trojan IP and implemented measures to prevent illicit transfers that rely on the broadcast nature of a particular bus type @cite_70 . A contribution of their paper that is complementary to is their approach for dealing with availability attacks launched by malicious IP. They also briefly mention the implications of protocol violations by Trojan IP, but do not analyze those in depth.
{ "cite_N": [ "@cite_70" ], "mid": [ "2149752542" ], "abstract": [ "While the issue of Trojan ICs has been receiving increasing amounts of attention, the overwhelming majority of anti-Trojan measures aim to address the problem during verification. While such methods are an important part of an overall anti-Trojan strategy, it is statistically inevitable that some Trojans will escape verification-stage detection, in particular in light of the increasing size and complexity of system-on-chip (SoC) solutions and the increasing use of third-party designs. In contrast with much of the previous work in this area, we specifically focus on run-time methods to identify the attacks of a Trojan and to adapt the system and respond accordingly. We describe a solution including a bus architecture in which the arbitration, address decoding, multiplexing, wrapping, and other components protect against malicious use of the bus." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Other works have proposed encryption and integrity monitoring for sensitive code and data in main memory @cite_60 @cite_66 . These confidentiality and integrity enforcement techniques do not address access control for peripherals and any unencrypted data, so it is still important to implement such access control in a verifiable manner.
{ "cite_N": [ "@cite_66", "@cite_60" ], "mid": [ "1978703818", "1810006187" ], "abstract": [ "We present Bastion, a new hardware-software architecture for protecting security-critical software modules in an untrusted software stack. Our architecture is composed of enhanced microprocessor hardware and enhanced hypervisor software. Each trusted software module is provided with a secure, fine-grained memory compartment and its own secure persistent storage area. Bastion is the first architecture to provide direct hardware protection of the hypervisor from both software and physical attacks, before employing the hypervisor to provide the same protection to security-critical OS and application modules. Our implementation demonstrates the feasibility of bypassing an untrusted commodity OS to provide application security and shows better security with higher performance when compared to the Trusted Platform Module (TPM), the current industry state-of-the-art security chip. We provide a proof-of-concept implementation on the OpenSPARC platform.", "Vulnerabilities in complex software are a major threat to the security of today's computer systems, with the alarming prevalence of malware and rootkits making it difficult to guarantee security in a networked environment. Due to the widespread application of information technology to all aspects of society, these vulnerabilities threaten virtually all aspects of modern life. To protect software and data against these threats, we describe simple extensions to the Power Architecture for running Secure Executables. By using a combination of cryptographic techniques and context labeling in the CPU, these Secure Executables are protected on disk, in memory, and through all stages of execution against malicious or compromised software, and other hardware. Moreover, we show that this can be done efficiently, without significant performance penalty. Secure Executables can run simultaneously with unprotected executables; existing applications can be transformed directly into Secure Executables without changes to the source code." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Some NoC protection mechanisms have been developed commercially, but the details of their designs and the analysis processes that have been applied to them are not publicly available. Texas Instruments has applied for a patent on hardware technology that includes the ability to restrict bus accesses and execute operating systems in isolation on virtual cores @cite_16 .
{ "cite_N": [ "@cite_16" ], "mid": [ "1662735931" ], "abstract": [ "An electronic system (1400) includes a processor (1422, 2610) having a pipeline, a bus (2655) coupled to the pipeline, a storage (1435, 1440, 2650) coupled to the bus (2655), the storage (1435, 2650) having a real time operating system (RTOS) and a real-time application, a non-real-time operating system (HLOS), a secure environment kernel (SE), and a software monitor (2310); and protective circuitry (2460) coupled to the processor and operable to establish a first signal (VP1_Active) and a second signal (NS) each having states and together having combinations of the states representing a first category (2430) for the real-time operating system and the real-time application, a second category (2420) for the non-real-time operating system, and a third category (2450) for the secure environment kernel." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Huffmire et al. proposed a mechanism to convert memory access policies into Verilog that could be synthesized into a reference monitor @cite_41 . They targeted reconfigurable systems in which the reference monitor could be replaced at runtime by reconfiguring a portion of the FPGA. Such systems are not yet commonplace.
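To make the idea concrete, a legal-sharing policy of the kind compiled by such a reference monitor can be sketched in software; the module names and address ranges below are purely illustrative and are not taken from the cited work:

```python
# Hypothetical sketch of a memory reference monitor: a policy table maps
# each module to the address ranges and access modes it may use.
# Module names and ranges are illustrative only.

POLICY = {
    # module_id: list of (start, end_exclusive, allowed_modes)
    "cpu0": [(0x0000_0000, 0x4000_0000, {"r", "w"})],
    "dma0": [(0x2000_0000, 0x2100_0000, {"r"})],
}

def access_allowed(module, addr, mode):
    """Return True iff `module` may perform `mode` ('r' or 'w') at `addr`."""
    for start, end, modes in POLICY.get(module, []):
        if start <= addr < end and mode in modes:
            return True
    return False
```

A hardware reference monitor evaluates the same kind of predicate on every bus transaction; the point of compiling the policy to logic is that the check is enforced even when the software stack is compromised.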
{ "cite_N": [ "@cite_41" ], "mid": [ "2118599102" ], "abstract": [ "While processor based systems often enforce memory protection to prevent the unintended sharing of data between processes, current systems built around reconfigurable hardware typically offer no such protection. Several reconfigurable cores are often integrated onto a single chip where they share external resources such as memory. While this enables small form factor and low cost designs, it opens up the opportunity for modules to intercept or even interfere with the operation of one another. We investigate the design and synthesis of a memory protection mechanism capable of enforcing policies expressed as a formal language. Our approach includes a specialized compiler that translates a policy of legal sharing to reconfigurable logic blocks which can be directly transferred to an FPGA. The efficiency of our access language design flow is evaluated in terms of area and cycle time across a variety of security scenarios." ] }
1404.3465
2140483360
Mobile devices are in roles where the integrity and confidentiality of their apps and data are of paramount importance. They usually contain a System-on-Chip (SoC), which integrates microprocessors and peripheral Intellectual Property (IP) connected by a Network-on-Chip (NoC). Malicious IP or software could compromise critical data. Some types of attacks can be blocked by controlling data transfers on the NoC using Memory Management Units (MMUs) and other access control mechanisms. However, commodity processors do not provide strong assurances regarding the correctness of such mechanisms, and it is challenging to verify that all access control mechanisms in the system are correctly configured. We propose a NoC Firewall (NoCF) that provides a single locus of control and is amenable to formal analysis. We demonstrate an initial analysis of its ability to resist malformed NoC commands, which we believe is the first effort to detect vulnerabilities that arise from NoC protocol violations perpetrated by erroneous or malicious IP.
Individual cores or groups of cores on a Tilera Tile processor are capable of running independent instances of Linux, and cores can be partitioned using hardware access control mechanisms that block individual links between cores @cite_0 . However, those mechanisms are not applied to memory controller and cache links, which are regulated using TLBs. NoCF could potentially be used to regulate such links.
{ "cite_N": [ "@cite_0" ], "mid": [ "2095314640" ], "abstract": [ "iMesh, the Tile Processor architecture's on-chip interconnection network, connects the multicore processor's tiles with five 2D mesh networks, each specialized for a different use. Taking advantage of the five networks, the C-based iLib interconnection library efficiently maps program communication across the on-chip interconnect. The Tile Processor's first implementation, the TILE64, contains 64 cores and can execute 192 billion 32-bit operations per second at 1 GHz." ] }
1404.3537
1536839926
In this report, we present work towards a framework for modeling and checking behavior of spatially distributed component systems. Design goals of our framework are the ability to model spatial behavior in a component oriented, simple and intuitive way, the possibility to automatically analyse and verify systems and integration possibilities with other modeling and verification tools. We present examples and the verification steps necessary to prove properties such as range coverage or the absence of collisions between components and technical details.
Related to our work is the work on path planning for robots (e.g., @cite_25 @cite_22 ). In our work, however, we concentrate on checking properties of existing systems rather than on optimization or the discovery of new possible paths. Collision detection for robots in combination with motion planning has been studied for a long time; see, e.g., @cite_29 and @cite_7 . Strongly related to motion planning is the task of efficient geometric reasoning. On this geometric interpretation level, techniques have been investigated to structure the task of detecting possible interference between geometric objects (e.g., @cite_3 and @cite_27 ) for efficient analysis.
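As a concrete instance of the enclosing-shape interference tests mentioned above, here is a minimal axis-aligned bounding box overlap check (an illustrative sketch, not code from any of the cited systems):

```python
def aabb_overlap(a, b):
    """Interference test for axis-aligned bounding boxes.
    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z)); two boxes
    overlap iff their extents intersect on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))
```

In a hierarchical scheme, this cheap conservative test is applied to enclosing boxes first, and exact polyhedral interference tests are run only where the boxes overlap.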
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_29", "@cite_3", "@cite_27", "@cite_25" ], "mid": [ "", "1516027685", "10899980", "1999250663", "1993850994", "2090170700" ], "abstract": [ "", "The Complexity of Robot Motion Planning makes original contributions both to robotics and to the analysis of algorithms. In this groundbreaking monograph John Canny resolves long-standing problems concerning the complexity of motion planning and, for the central problem of finding a collision free path for a jointed robot in the presence of obstacles, obtains exponential speedups over existing algorithms by applying high-powered new mathematical techniques. Canny's new algorithm for this \"generalized movers' problem,\" the most-studied and basic robot motion planning problem, has a single exponential running time, and is polynomial for any given robot. The algorithm has an optimal running time exponent and is based on the notion of roadmaps - one-dimensional subsets of the robot's configuration space. In deriving the single exponential bound, Canny introduces and reveals the power of two tools that have not been previously used in geometric algorithms: the generalized (multivariable) resultant for a system of polynomials and Whitney's notion of stratified sets. He has also developed a novel representation of object orientation based on unnormalized quaternions which reduces the complexity of the algorithms and enhances their practical applicability. After dealing with the movers' problem, the book next attacks and derives several lower bounds on extensions of the problem: finding the shortest path among polyhedral obstacles, planning with velocity limits, and compliant motion planning with uncertainty. It introduces a clever technique, \"path encoding,\" that allows a proof of NP-hardness for the first two problems and then shows that the general form of compliant motion planning, a problem that is the focus of a great deal of recent work in robotics, is non-deterministic exponential time hard. Canny proves this result using a highly original construction. John Canny received his doctorate from MIT and is an assistant professor in the Computer Science Division at the University of California, Berkeley. The Complexity of Robot Motion Planning is the winner of the 1987 ACM Doctoral Dissertation Award.", "Collision detection is a basic tool whose performance is of capital importance in order to achieve efficiency in many robotics and computer graphics applications, such as motion planning, obstacle avoidance, virtual prototyping, computer animation, physical-based modeling, dynamic simulation, and, in general, all those tasks involving the simulated motion of solids which cannot penetrate one another. In these applications, collision detection appears as a module or procedure which exchanges information with other parts of the system concerning motion, kinematic and dynamic behaviour, etc. It is a widespread opinion to consider collision detection as the main bottleneck in these kinds of applications. In fact, static interference detection, collision detection and the generation of configuration-space obstacles can be viewed as instances of the same problem, where objects are tested for interference at a particular position, along a trajectory and throughout the whole workspace, respectively. The structure of this chapter reflects this fact. Thus, the main guidelines in static interference detection are presented in Section 2. It is shown how hierarchical representations allow to focus on relevant regions where interference is most likely to occur, speeding up the whole interference test procedure. Some interference tests reduce to detecting intersections between simple enclosing shapes, such as spheres or boxes aligned with the coordinate axes. However, in some situations, this approximate approach does not suffice, and exact basic interference tests (for polyhedral environments) are required. The most widely used such test is that involving a segment (standing for an edge) and a polygon in 3D space (standing for a face of a polyhedron). In this context, it has recently been proved that interference detection between non-convex polyhedra can be reduced, like many other problems in Computational Geometry, to checking some signs of vertex determinants, without computing new geometric entities. Interference tests lie at the base of most collision detection algorithms, which are the subject of Section 3. These algorithms can be grouped into four approaches: multiple interference detection, swept volume interference, space-", "We present a data structure and an algorithm for efficient and exact interference detection amongst complex models undergoing rigid motion. The algorithm is applicable to all general polygonal models. It pre-computes a hierarchical representation of models using tight-fitting oriented bounding box trees (OBBTrees). At runtime, the algorithm traverses two such trees and tests for overlaps between oriented bounding boxes based on a separating axis theorem, which takes less than 200 operations in practice. It has been implemented and we compare its performance with other hierarchical data structures. In particular, it can robustly and accurately detect all the contacts between large complex geometries composed of hundreds of thousands of polygons at interactive rates.", "We present efficient algorithms for interference detection between geometric models described by linear or curved boundaries and undergoing rigid motion. The set of models include surfaces described by rational spline patches or piecewise algebraic functions. In contrast to previous approaches, we first describe an efficient algorithm for interference detection between convex polytopes using coherence and local features. Then an extension using hierarchical representation to concave polytopes is presented. We apply these algorithms along with properties of input models, local and global algebraic methods for solving polynomial equations, and the geometric formulation of the problem to devise efficient algorithms for convex and nonconvex curved objects. Finally, a scheduling scheme to reduce the frequency of interference detection in large environments is described. These algorithms have been successfully implemented and we discuss their performance in various environments.", "The problem of automatic collision-free path planning is central to mobile robot applications. An approach to automatic path planning based on a quadtree representation is presented. Hierarchical path-searching methods are introduced, which make use of this multiresolution representation, to speed up the path planning process considerably. The applicability of this approach to mobile robot path planning is discussed." ] }
1404.3291
1878518397
Similarity comparisons of the form "Is object a more similar to b than to c?" are useful for computer vision and machine learning applications. Unfortunately, an embedding of @math points is specified by @math triplets, making collecting every triplet an expensive task. In noticing this difficulty, other researchers have investigated more intelligent triplet sampling techniques, but they do not study their effectiveness or their potential drawbacks. Although it is important to reduce the number of collected triplets, it is also important to understand how best to display a triplet collection task to a user. In this work we explore an alternative display for collecting triplets and analyze the monetary cost and speed of the display. We propose best practices for creating cost effective human intelligence tasks for collecting triplets. We show that rather than changing the sampling algorithm, simple changes to the crowdsourcing UI can lead to much higher quality embeddings. We also provide a dataset as well as the labels collected from crowd workers.
Perceptual similarity embeddings are useful for many tasks in computer vision and machine learning, such as metric learning @cite_2 , image search exploration @cite_13 , learning semantic clusters @cite_0 , and finding similar musical genres and artists @cite_9 @cite_4 . Our work is useful to authors who wish to collect data to create such embeddings. The common thread in all of this work is that the embeddings are built from triplet comparisons.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_0", "@cite_2", "@cite_13" ], "mid": [ "1579725375", "2088247287", "", "2129156852", "2165065544" ], "abstract": [ "The rise of digital music distribution has provided users with unprecedented access to vast song catalogs. In order to help users cope with large collections, music information retrieval systems have been developed to automatically analyze, index, and recommend music based on a user's preferences or search criteria. This dissertation proposes machine learning approaches to content-based, query-by-example search, and investigates applications in music information retrieval. The proposed methods automatically infer and optimize content-based similarity, fuse heterogeneous feature modalities, efficiently index and search under the optimized distance metric, and finally, generate sequential playlists for a specified context or style. Robust evaluation procedures are proposed to counteract issues of subjectivity and lack of explicit ground truth in music similarity and playlist generation.", "This paper considers the problem of learning an embedding of data based on similarity triplets of the form “A is more similar to B than to C”. This learning setting is of relevance to scenarios in which we wish to model human judgements on the similarity of objects. We argue that in order to obtain a truthful embedding of the underlying data, it is insufficient for the embedding to satisfy the constraints encoded by the similarity triplets. In particular, we introduce a new technique called t-Distributed Stochastic Triplet Embedding (t-STE) that collapses similar points and repels dissimilar points in the embedding — even when all triplet constraints are satisfied. Our experimental evaluation on three data sets shows that as a result, t-STE is much better than existing techniques at revealing the underlying data structure.", "", "We address the problem of visual category recognition by learning an image-to-image distance function that attempts to satisfy the following property: the distance between images from the same category should be less than the distance between images from different categories. We use patch-based feature vectors common in object recognition work as a basis for our image-to-image distance functions. Our large-margin formulation for learning the distance functions is similar to formulations used in the machine learning literature on distance metric learning, however we differ in that we learn local distance functions — a different parameterized function for every image of our training set — whereas typically a single global distance function is learned. This was a novel approach first introduced in Frome, Singer, & Malik, NIPS 2006. In that work we learned the local distance functions independently, and the outputs of these functions could not be compared at test time without the use of additional heuristics or training. Here we introduce a different approach that has the advantage that it learns distance functions that are globally consistent in that they can be directly compared for purposes of retrieval and classification. The output of the learning algorithm are weights assigned to the image features, which is intuitively appealing in the computer vision setting: some features are more salient than others, and which are more salient depends on the category, or image, being considered. We train and test using the Caltech 101 object recognition benchmark.", "Starting from a member of an image database designated the \"query image,\" traditional image retrieval techniques, for example, search by visual similarity, allow one to locate additional instances of a target category residing in the database. However, in many cases, the query image or, more generally, the target category, resides only in the mind of the user as a set of subjective visual patterns, psychological impressions, or \"mental pictures.\" Consequently, since image databases available today are often unstructured and lack reliable semantic annotations, it is often not obvious how to initiate a search session; this is the \"page zero problem.\" We propose a new statistical framework based on relevance feedback to locate an instance of a semantic category in an unstructured image database with no semantic annotation. A search session is initiated from a random sample of images. At each retrieval round, the user is asked to select one image from among a set of displayed images-the one that is closest in his opinion to the target class. The matching is then \"mental.\" Performance is measured by the number of iterations necessary to display an image which satisfies the user, at which point standard techniques can be employed to display other instances. Our core contribution is a Bayesian formulation which scales to large databases. The two key components are a response model which accounts for the user's subjective perception of similarity and a display algorithm which seeks to maximize the flow of information. Experiments with real users and two databases of 20,000 and 60,000 images demonstrate the efficiency of the search process." ] }
1404.3291
1878518397
Similarity comparisons of the form "Is object a more similar to b than to c?" are useful for computer vision and machine learning applications. Unfortunately, an embedding of @math points is specified by @math triplets, making collecting every triplet an expensive task. In noticing this difficulty, other researchers have investigated more intelligent triplet sampling techniques, but they do not study their effectiveness or their potential drawbacks. Although it is important to reduce the number of collected triplets, it is also important to understand how best to display a triplet collection task to a user. In this work we explore an alternative display for collecting triplets and analyze the monetary cost and speed of the display. We propose best practices for creating cost effective human intelligence tasks for collecting triplets. We show that rather than changing the sampling algorithm, simple changes to the crowdsourcing UI can lead to much higher quality embeddings. We also provide a dataset as well as the labels collected from crowd workers.
Our work bears much similarity to Crowd Kernel Learning @cite_12 and Active MDS @cite_1 . These algorithms collect triplets one at a time, adaptively selecting which triplet to ask next. The idea behind these systems is that the bulk of the information in the embedding can be captured by a very small number of triplets, since most triplets convey redundant information. For instance, Crowd Kernel Learning @cite_12 considers each triplet individually, modeling the information gain from that triplet as a probability distribution over the embedding space. Active MDS @cite_1 considers a set of triplets as a partial ranking with respect to each object in the embedding, placing geometric constraints on the locations where each point may lie. In our work we focus instead on altering the UI design to improve the speed and quality of triplet collection.
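For illustration, triplet answers of the form "a is closer to b than to c" can be turned into a Euclidean embedding by gradient descent on a hinge loss over squared distances. This simplified sketch is not the t-STE or crowd-kernel objective itself, and all parameters (learning rate, margin, epochs) are illustrative:

```python
import random

def embed_from_triplets(n, triplets, dim=2, lr=0.05, margin=1.0, epochs=300, seed=0):
    """Each triplet (a, b, c) asserts that object a is closer to b than to c.
    Minimizes the hinge loss max(0, margin + ||x_a - x_b||^2 - ||x_a - x_c||^2)
    by stochastic gradient descent on randomly initialized points."""
    rng = random.Random(seed)
    X = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(n)]

    def sqdist(u, v):
        return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

    for _ in range(epochs):
        for a, b, c in triplets:
            if margin + sqdist(X[a], X[b]) - sqdist(X[a], X[c]) > 0:  # violated
                for i in range(dim):
                    d_ab = X[a][i] - X[b][i]
                    d_ac = X[a][i] - X[c][i]
                    # gradients of the squared-distance hinge loss
                    X[a][i] -= lr * 2 * (d_ab - d_ac)
                    X[b][i] += lr * 2 * d_ab   # pull b toward a
                    X[c][i] -= lr * 2 * d_ac   # push c away from a
    return X
```

Since each answered triplet only removes a region of the embedding space, adaptive schemes like those above try to ask the triplet whose answer removes the most.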
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "1968265719", "2951342632" ], "abstract": [ "Low-dimensional embedding based on non-metric data (e.g., non-metric multidimensional scaling) is a problem that arises in many applications, especially those involving human subjects. This paper investigates the problem of learning an embedding of n objects into d-dimensional Euclidean space that is consistent with pairwise comparisons of the type “object a is closer to object b than c.” While there are O(n^3) such comparisons, experimental studies suggest that relatively few are necessary to uniquely determine the embedding up to the constraints imposed by all possible pairwise comparisons (i.e., the problem is typically over-constrained). This paper is concerned with quantifying the minimum number of pairwise comparisons necessary to uniquely determine an embedding up to all possible comparisons. The comparison constraints stipulate that, with respect to each object, the other objects are ranked relative to their proximity. We prove that at least Ω(dn log n) pairwise comparisons are needed to determine the embedding of all n objects. The lower bounds cannot be achieved by using randomly chosen pairwise comparisons. We propose an algorithm that exploits the low-dimensional geometry in order to accurately embed objects based on relatively small number of sequentially selected pairwise comparisons and demonstrate its performance with experiments.", "We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form \"is object 'a' more similar to 'b' or to 'c'?\" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the \"crowd kernel.\" SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains, such as \"is striped\" among neckties and \"vowel vs. consonant\" among letters." ] }
1404.3637
1574523753
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the delay of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility functions are designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Nash bargaining solution. Three games are developed: the first aims to reduce the completion time, the second to reduce the maximum decoding delay and the third the sum decoding delay. We improve these formulations to include punishment policy upon collision occurrence and achieve the Nash bargaining solution. Through extensive simulations, our framework is tested against the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process in numerous cases: first we simulate the problem with complete information. We, then, simulate with incomplete information and finally we test it in lossy feedback scenario. Numerical results show that our formulation with complete information largely outperforms the conventional PMP scheme in most situations and achieves a lower delay. They also show that the completion time formulation with incomplete information also outperforms the conventional PMP. Index Terms—Cooperative data exchange, instantly decodable network coding, non-cooperative games, potential game, Nash equilibrium.
In all the aforementioned works, the base station of a point-to-multipoint network (such as cellular, Wi-Fi, WiMAX, and roadside-to-vehicle networks) was assumed to be responsible for the recovery of erased packets. This can pose a threat to the resources of such base stations and their ability to deliver the high data rates required by future wireless standards. The problem becomes more severe in roadside-to-vehicle networks, since vehicles usually pass the roadside senders quickly and thus cannot rely on them for packet recovery, but must instead recover the missing packets among themselves. One alternative is the notion of cooperative data exchange (CDE) introduced in @cite_44 . In this configuration, clients can cooperate to exchange data by sending IDNC recovery packets to each other over short-range and more reliable communication channels, thus allowing the base station to serve other clients. This CDE model is also important for fast and reliable data communications over ad-hoc networks, such as vehicular and sensor networks. Consequently, it is very important to study the minimization of delays and the number of transmissions in such IDNC-based CDE systems.
{ "cite_N": [ "@cite_44" ], "mid": [ "2144549564" ], "abstract": [ "The advantages of coded cooperative data exchange has been studied in the literature. In this problem, a group of wireless clients are interested in the same set of packets (a multicast scenario). Each client initially holds a subset of packets and wills to obtain its missing packets in a cooperative setting by exchanging packets with its peers. Cooperation via short range transmission links among the clients (which are faster, cheaper and more reliable) is an alternative for retransmissions by the base station. In this paper, we extend the problem of cooperative data exchange to the case of multiple unicasts to a set of n clients, where each client c i is interested in a specific message x i and the clients cooperate with each others to compensate the errors occurred over the downlink. Moreover, our proposed method maintains the secrecy of individuals' messages at the price of a substantially small overhead." ] }
1404.3637
1574523753
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the delay of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility functions are designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Nash bargaining solution. Three games are developed: the first aims to reduce the completion time, the second to reduce the maximum decoding delay and the third the sum decoding delay. We improve these formulations to include punishment policy upon collision occurrence and achieve the Nash bargaining solution. Through extensive simulations, our framework is tested against the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process in numerous cases: first we simulate the problem with complete information. We, then, simulate with incomplete information and finally we test it in lossy feedback scenario. Numerical results show that our formulation with complete information largely outperforms the conventional PMP scheme in most situations and achieves a lower delay. They also show that the completion time formulation with incomplete information also outperforms the conventional PMP. Index Terms—Cooperative data exchange, instantly decodable network coding, non-cooperative games, potential game, Nash equilibrium.
Unlike the conventional point-to-multipoint scenario, IDNC-based CDE systems require decisions not only on the packet combinations but also on which client should transmit in each transmission in order to achieve a certain quality for one of the network metrics. Recently, Aboutorab et al. @cite_24 considered the problem of minimizing the sum decoding delay for CDE in a centralized fashion. By centralized, we mean that a central unit (such as the base station in the cellular example) makes the decisions on which client transmits and which packet combination is sent in each transmission.
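To illustrate the packet-combination half of these decisions, here is a toy sketch of choosing an XOR combination that is instantly decodable for as many clients as possible. It uses exhaustive search, which is feasible only for tiny instances (consistent with the hardness of the general problem), and it is not the algorithm of the cited work:

```python
from itertools import combinations

def select_coded_packet(wants):
    """wants maps a client to the set of packet ids it is still missing.
    A coded packet is the XOR of a set of source packets; it is instantly
    decodable for a client iff the client misses exactly one packet of the
    set (the client XORs out the packets it already holds).
    Returns (packet_set, number_of_clients_served)."""
    packets = sorted(set().union(*wants.values()))
    best, best_served = set(), -1
    # Exhaustive search over all non-empty packet subsets, for illustration.
    for r in range(1, len(packets) + 1):
        for combo in combinations(packets, r):
            served = sum(1 for w in wants.values() if len(w & set(combo)) == 1)
            if served > best_served:
                best, best_served = set(combo), served
    return best, best_served
```

In the decentralized CDE setting of this paper, each client would evaluate such a choice over the packets it actually holds; the search is shown centrally here for clarity.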
{ "cite_N": [ "@cite_24" ], "mid": [ "2030775967" ], "abstract": [ "This paper investigates the use of instantly decodable network coding (IDNC) for minimizing the mean decoding delay in multicast cooperative data exchange systems, where the clients cooperate with each other to obtain their missing packets. Here, IDNC is used to reduce the decoding delay of each transmission across all clients. We first introduce a new framework to find the optimum client and coded packet that result in the minimum mean decoding delay. However, since finding the optimum solution of the proposed framework is NP-hard, we further propose a heuristic algorithm that aims to minimize the lower bound on the expected decoding delay in each transmission. The effectiveness of the proposed algorithm is assessed through simulations." ] }
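The per-transmission selection problem described above — pick a sender and a coded packet so that as many clients as possible instantly decode a missing packet — can be caricatured with a brute-force search over small XOR combinations. This is an illustrative toy, not the algorithm of @cite_24: the `greedy_select` and `instantly_decodes` names and the small `has` matrix are invented for the example, and practical IDNC solvers operate on a coding graph rather than enumerating combinations.

```python
import numpy as np
from itertools import combinations

def instantly_decodes(has_row, combo):
    """A client instantly decodes the XOR of `combo` iff it is missing
    exactly one of the packets in the combination (it can cancel the rest)."""
    return sum(1 for p in combo if not has_row[p]) == 1

def greedy_select(has, max_size=2):
    """Score every XOR combination of up to `max_size` packets from every
    candidate sender; return (served, sender, combo) maximizing the number
    of other clients that decode, i.e. minimizing the per-slot decoding
    delay increase."""
    n_clients, _ = has.shape
    best = (0, None, None)
    for sender in range(n_clients):
        held = np.flatnonzero(has[sender])  # a sender can only encode packets it holds
        for size in range(1, max_size + 1):
            for combo in combinations(held, size):
                served = sum(instantly_decodes(has[i], combo)
                             for i in range(n_clients) if i != sender)
                if served > best[0]:
                    best = (served, sender, combo)
    return best

# toy reception ("has") matrix: rows = clients, cols = packets, True = received
has = np.array([[1, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 1, 1]], dtype=bool)
served, sender, combo = greedy_select(has)
```

Here client 0 transmitting the XOR of packets 0 and 1 serves both other clients in one slot, which no single uncoded packet from any sender can do.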
1404.3637
1574523753
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the delay of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility functions are designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Nash bargaining solution. Three games are developed: the first aims to reduce the completion time, the second to reduce the maximum decoding delay and the third the sum decoding delay. We improve these formulations to include a punishment policy upon collision occurrence and to achieve the Nash bargaining solution. Through extensive simulations, our framework is tested against the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process in numerous cases: first we simulate the problem with complete information; we then simulate with incomplete information; and finally we test it in a lossy feedback scenario. Numerical results show that our formulation with complete information largely outperforms the conventional PMP scheme in most situations and achieves a lower delay. They also show that the completion time formulation with incomplete information also outperforms the conventional PMP scheme. Index Terms—Cooperative data exchange, instantly decodable network coding, non-cooperative games, potential game, Nash equilibrium.
The aforementioned work considered perfect and prompt feedback from all the players. This assumption is too idealistic given the impairments on the feedback links @cite_18 of wireless networks, due to shadowing, high interference and fading. In this situation, players need to transmit several subsequent packets without having any information (or with only partial information) about their reception status at the other players. Moreover, this scenario introduces a new dimension to the problem since the information at the different players is no longer common knowledge. When IDNC-based CDE is used under such uncertainty and asymmetric information, the players are no longer certain about the outcome of their actions, and thus are not certain about the resulting completion time and decoding delays.
{ "cite_N": [ "@cite_18" ], "mid": [ "1485005937" ], "abstract": [ "In this paper, we consider the problem of minimizing the multicast decoding delay of generalized instantly decodable network coding (G-IDNC) over persistent forward and feedback erasure channels with feedback intermittence. In such an environment, the sender does not always receive acknowledgement from the receivers after each transmission. Moreover, both the forward and feedback channels are subject to persistent erasures, which can be modelled by a two state (good and bad states) Markov chain known as Gilbert-Elliott channel (GEC). Due to such feedback imperfections, the sender is unable to determine subsequent instantly decodable packets combination for all receivers. Given this harsh channel and feedback model, we first derive expressions for the probability distributions of decoding delay increments and then employ these expressions in formulating the minimum decoding problem in such an environment as a maximum weight clique problem in the G-IDNC graph. We also show that the problem formulations in simpler channel and feedback models are special cases of our generalized formulation. Since this problem is NP-hard, we design a greedy algorithm to solve it and compare it to blind approaches proposed in literature. Through extensive simulations, our adaptive algorithm is shown to outperform the blind approaches in all situations and to achieve significant improvement in the decoding delay, especially when the channel is highly persistent. Index Terms—Multicast Channels, Persistent Erasure Channels, G-IDNC, Decoding Delay, Lossy Intermittent Feedback, Maximum Weight Clique Problem." ] }
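The persistent (Gilbert-Elliott) erasure channel assumed in @cite_18 is a two-state Markov chain with a low-erasure "good" state and a high-erasure "bad" state. A minimal simulation sketch; the parameter values (`p_gb`, `p_bg`, `e_good`, `e_bad`) are illustrative defaults, not taken from the paper.

```python
import random

def gilbert_elliott_erasures(n, p_gb=0.1, p_bg=0.3, e_good=0.05, e_bad=0.8, seed=1):
    """Simulate n transmissions over a Gilbert-Elliott channel.
    p_gb / p_bg are the good->bad and bad->good transition probabilities;
    e_good / e_bad are the per-state erasure probabilities.
    Returns a list of booleans, True meaning the packet was erased."""
    rng = random.Random(seed)
    state_bad = False
    erased = []
    for _ in range(n):
        erased.append(rng.random() < (e_bad if state_bad else e_good))
        # Markov state transition (this is what makes erasures persistent)
        if rng.random() < (p_bg if state_bad else p_gb):
            state_bad = not state_bad
    return erased

erased = gilbert_elliott_erasures(20000)
rate = sum(erased) / len(erased)
```

With these defaults the chain spends a fraction p_gb / (p_gb + p_bg) = 0.25 of the time in the bad state, so the long-run erasure rate is about 0.75 * 0.05 + 0.25 * 0.8 ≈ 0.24.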
1404.2999
2094493060
A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays a basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy, which processes gist and abstract information of the input, to the bottom level, which processes local details. Inspired by the theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, and is defined as unpredictability from this coarse-to-fine reconstruction. Finally, saliency on each layer of the pyramid is fused into stochastic fixations through a probabilistic model, where attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method achieves results competitive with state-of-the-art models.
The majority of computational attention modeling studies follow the Feature Integration Theory (FIT) @cite_8 . In particular, the pioneering work by Itti et al. @cite_17 @cite_37 first explored the computational aspect of FIT by searching for center-surround patterns across multiple feature channels and image scales. This method was further extended through the integration of color contrast @cite_0 , symmetry @cite_23 , etc. Random Center Surround Saliency @cite_35 adopted a similar center-surround heuristic but with the center size and region randomly sampled. Harel et al. @cite_50 introduced a graph-based model that treated feature maps as fully connected nodes, where the nodes communicated according to their dissimilarity and distance in a Markovian way, and saliency emerged as the equilibrium distribution.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_8", "@cite_0", "@cite_23", "@cite_50", "@cite_17" ], "mid": [ "2052245719", "", "2030863706", "2162216530", "2121600399", "2135957164", "2128272608" ], "abstract": [ "In this article we propose a novel approach to compute an image saliency map based on computing local saliencies over random rectangular regions of interest. Unlike many of the existing methods, the proposed approach does not require any training bases, operates on the image at the original scale and has only a single parameter which requires tuning. It has been tested on the two distinct tasks of salient region detection (using MSRA dataset) and eye gaze prediction (using York University and MIT datasets). The proposed method achieves state-of-the-art performance on the eye gaze prediction task as compared with nine other state-of-the-art methods.", "", "Object perception may involve seeing, recognition, preparation of actions, and emotional responses — functions that human brain imaging and neuropsychology suggest are localized separately. Perhaps because of this specialization, object perception is remarkably rapid and efficient. Representations of componential structure and interpolation from view-dependent images both play a part in object recognition. Unattended objects may be implicitly registered, but recent experiments suggest that attention is required to bind features, to represent three-dimensional structure, and to mediate awareness.", "Visual attention is a mechanism which filters out redundant visual information and detects the most relevant parts of our visual field. Automatic determination of the most visually relevant areas would be useful in many applications such as image and video coding, watermarking, video browsing, and quality assessment. Many research groups are currently investigating computational modeling of the visual attention system. The first published computational models have been based on some basic and well-understood human visual system (HVS) properties. These models feature a single perceptual layer that simulates only one aspect of the visual system. More recent models integrate complex features of the HVS and simulate hierarchical perceptual representation of the visual input. The bottom-up mechanism is the most occurring feature found in modern models. This mechanism refers to involuntary attention (i.e., salient spatial visual features that effortlessly or involuntarily attract our attention). This paper presents a coherent computational approach to the modeling of the bottom-up visual attention. This model is mainly based on the current understanding of the HVS behavior. Contrast sensitivity functions, perceptual decomposition, visual masking, and center-surround interactions are some of the features implemented in this model. The performances of this algorithm are assessed by using natural images and experimental measurements from an eye-tracking system. Two adequate well-known metrics (correlation coefficient and Kullback-Leibler divergence) are used to validate this model. A further metric is also defined. The results from this model are finally compared to those from a reference bottom-up model.", "Humans are very sensitive to symmetry in visual patterns. Symmetry is detected and recognized very rapidly. While viewing symmetrical patterns eye fixations are concentrated along the axis of symmetry or the symmetrical center of the patterns. This suggests that symmetry is a highly salient feature. Existing computational models of saliency, however, have mainly focused on contrast as a measure of saliency. These models do not take symmetry into account. In this paper, we discuss local symmetry as a measure of saliency. We developed a number of symmetry models and performed an eye tracking study with human participants viewing photographic images to test the models. The performance of our symmetry models is compared with the contrast saliency model of [1]. The results show that the symmetry models better match the human data than the contrast model. This indicates that symmetry is a salient structural feature for humans, a finding which can be exploited in computer vision.", "A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain feature channels, and then normalizing them in a way which highlights conspicuity and admits combination with other maps. The model is simple, and biologically plausible insofar as it is naturally parallelized. This model powerfully predicts human fixations on 749 variations of 108 natural images, achieving 98% of the ROC area of a human-based control, whereas the classical algorithms of Itti & Koch ([2], [3], [4]) achieve only 84%.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail." ] }
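The center-surround heuristic underlying the Itti-style models above can be illustrated with a difference-of-Gaussians response: a fine-scale ("center") blur minus a coarse-scale ("surround") blur highlights regions that differ from their neighborhood. A minimal NumPy sketch, not any of the referenced implementations; the Gaussian scales and the min-max normalization are illustrative choices.

```python
import numpy as np

def _gauss1d(sigma):
    # 1-D Gaussian kernel truncated at 3 sigma, normalized to sum to 1
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with edge padding (rows, then columns)."""
    k = _gauss1d(sigma)
    pad = len(k) // 2
    def conv1d(v):
        return np.convolve(np.pad(v, pad, mode='edge'), k, mode='valid')
    rows = np.apply_along_axis(conv1d, 1, img)
    return np.apply_along_axis(conv1d, 0, rows)

def center_surround_saliency(img, sigma_c=1.0, sigma_s=4.0):
    """|center - surround| difference-of-Gaussians response in [0, 1]."""
    d = np.abs(blur(img, sigma_c) - blur(img, sigma_s))
    return (d - d.min()) / (d.max() - d.min() + 1e-12)

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0  # bright square on a dark background
sal = center_surround_saliency(img)
```

The bright square stands out against its surround, while a uniform background patch scores near zero.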
1404.2999
2094493060
A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays a basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy, which processes gist and abstract information of the input, to the bottom level, which processes local details. Inspired by the theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, and is defined as unpredictability from this coarse-to-fine reconstruction. Finally, saliency on each layer of the pyramid is fused into stochastic fixations through a probabilistic model, where attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method achieves results competitive with state-of-the-art models.
Several saliency models adopted a probabilistic approach and modeled the statistics of image features. Itti and Baldi @cite_12 defined saliency as surprise arising from the divergence between prior and posterior beliefs. SUN @cite_24 was a Bayesian framework using natural statistics, in which bottom-up saliency was defined as self-information. Bruce and Tsotsos @cite_31 proposed an attention model based on information maximization of image patches. Garcia et al. @cite_13 defined saliency by computing the Hotelling's T-squared statistic of each multi-scale feature channel. Gao et al. @cite_21 considered saliency in a discriminative setting by defining the KL-divergence between features and class labels.
{ "cite_N": [ "@cite_21", "@cite_24", "@cite_31", "@cite_13", "@cite_12" ], "mid": [ "2130637418", "2133589685", "2139047169", "2017245158", "2039546655" ], "abstract": [ "The classical hypothesis, that bottom-up saliency is a center-surround process, is combined with a more recent hypothesis that all saliency decisions are optimal in a decision-theoretic sense. The combined hypothesis is denoted as discriminant center-surround saliency, and the corresponding optimal saliency architecture is derived. This architecture equates the saliency of each image location to the discriminant power of a set of features with respect to the classification problem that opposes stimuli at center and surround, at that location. It is shown that the resulting saliency detector makes accurate quantitative predictions for various aspects of the psychophysics of human saliency, including non-linear properties beyond the reach of previous saliency models. Furthermore, it is shown that discriminant center-surround saliency can be easily generalized to various stimulus modalities (such as color, orientation and motion), and provides optimal solutions for many other saliency problems of interest for computer vision. Optimal solutions, under this hypothesis, are derived for a number of the former (including static natural images, dense motion fields, and even dynamic textures), and applied to a number of the latter (the prediction of human eye fixations, motion-based saliency in the presence of ego-motion, and motion-based saliency in the presence of highly dynamic backgrounds). In result, discriminant saliency is shown to predict eye fixations better than previous models, and produces background subtraction algorithms that outperform the state-of-the-art in computer vision.", "We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model’s bottom-up saliency maps perform as well as or better than existing algorithms in predicting people’s fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters.", "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.", "This paper presents a novel approach to visual saliency that relies on a contextually adapted representation produced through adaptive whitening of color and scale features. Unlike previous models, the proposal is grounded on the specific adaptation of the basis of low level features to the statistical structure of the image. Adaptation is achieved through decorrelation and contrast normalization in several steps in a hierarchical approach, in compliance with coarse features described in biological visual systems. Saliency is simply computed as the square of the vector norm in the resulting representation. The performance of the model is compared with several state-of-the-art approaches, in predicting human fixations using three different eye-tracking datasets. Referring this measure to the performance of human priority maps, the model proves to be the only one able to keep the same behavior through different datasets, showing free of biases. Moreover, it is able to predict a wide set of relevant psychophysical observations, to our knowledge, not reproduced together by any other model before.", "We propose a formal Bayesian definition of surprise to capture subjective aspects of sensory information. Surprise measures how data affects an observer, in terms of differences between posterior and prior beliefs about the world. Only data observations which substantially affect the observer’s beliefs yield surprise, irrespectively of how rare or informative in Shannon’s sense these observations are. We test the framework by quantifying the extent to which humans may orient attention and gaze towards surprising events or items while watching television. To this end, we implement a simple computational model where a low-level, sensory form of surprise is computed by simple simulated early visual neurons. Bayesian surprise is a strong attractor of human attention, with 72% of all gaze shifts directed towards locations more surprising than the average, a figure rising to 84% when focusing the analysis onto regions simultaneously selected by all observers. The proposed theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction." ] }
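The self-information definition of saliency used by SUN and by Bruce and Tsotsos can be sketched in a few lines: rare feature values receive high -log p scores. For a self-contained example, the probability is estimated from the test image's own intensity histogram, which is closer in spirit to @cite_31 — SUN learns its statistics from a collection of natural images in advance; the function name and bin count are illustrative.

```python
import numpy as np

def self_information_saliency(img, n_bins=16):
    """Saliency as -log p(feature): rare intensity values are salient.
    p is estimated from the image's own histogram (an approximation)."""
    bins = np.linspace(img.min(), img.max() + 1e-9, n_bins + 1)
    idx = np.digitize(img, bins) - 1          # per-pixel bin index
    counts = np.bincount(idx.ravel(), minlength=n_bins)
    p = counts / counts.sum()                 # empirical feature distribution
    return -np.log(p[idx] + 1e-12)            # pointwise self-information

img = np.zeros((16, 16))
img[8, 8] = 1.0  # one rare bright pixel in a flat field
sal = self_information_saliency(img)
```

The single bright pixel falls in a bin of probability 1/256 and gets by far the largest score; the common background pixels score near zero.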
1404.2999
2094493060
A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays a basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy, which processes gist and abstract information of the input, to the bottom level, which processes local details. Inspired by the theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, and is defined as unpredictability from this coarse-to-fine reconstruction. Finally, saliency on each layer of the pyramid is fused into stochastic fixations through a probabilistic model, where attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method achieves results competitive with state-of-the-art models.
A special class of saliency detection schemes comprises frequency-domain methods. Hou and Zhang @cite_3 proposed a spectral residual method, which defined saliency as irregularities in the amplitude information. Guo @cite_39 explored the phase information in the frequency domain with a Quaternion Fourier Transform. Recently, Hou et al. @cite_47 introduced a simple image descriptor, based on which a competitive fast saliency detection algorithm was devised.
{ "cite_N": [ "@cite_47", "@cite_3", "@cite_39" ], "mid": [ "2037328649", "2146103513", "2170869852" ], "abstract": [ "We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise, or GIST [2] descriptor methods.", "The ability of the human visual system to detect visual saliency is extraordinarily fast and reliable. However, computational modeling of this basic intelligent behavior still remains a challenge. This paper presents a simple method for the visual saliency detection. Our model is independent of features, categories, or other forms of prior knowledge of the objects. By analyzing the log-spectrum of an input image, we extract the spectral residual of an image in spectral domain, and propose a fast method to construct the corresponding saliency map in spatial domain. We test this model on both natural pictures and artificial images such as psychological patterns. The results indicate fast and robust saliency detection of our method.", "Salient areas in natural scenes are generally regarded as the candidates of attention focus in human eyes, which is the key stage in object detection. In computer vision, many models have been proposed to simulate the behavior of eyes such as SaliencyToolBox (STB), neuromorphic vision toolkit (NVT), etc., but they demand high computational cost and their remarkable results mostly rely on the choice of parameters. Recently a simple and fast approach based on Fourier transform called spectral residual (SR) was proposed, which used SR of the amplitude spectrum to obtain the saliency map. The results are good, but the reason is questionable." ] }
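Hou and Zhang's spectral residual method described above is simple enough to sketch directly: subtract a locally averaged log-amplitude spectrum from the original log-amplitude, keep the phase unchanged, and transform back. A hedged NumPy sketch of the idea only — the original operates on a fixed small input resolution and uses particular filter sizes; the `box_filter` helper and the 3x3 window here are illustrative.

```python
import numpy as np

def box_filter(a, size=3):
    """Mean filter implemented as a sum of shifted copies (edge padding)."""
    pad = size // 2
    p = np.pad(a, pad, mode='edge')
    out = np.zeros_like(a)
    for di in range(size):
        for dj in range(size):
            out += p[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out / (size * size)

def spectral_residual(img):
    """Saliency from the residual of the log-amplitude spectrum,
    keeping the phase spectrum intact (after Hou & Zhang's idea)."""
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-12)
    residual = log_amp - box_filter(log_amp)          # irregularities only
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(F)))) ** 2
    sal = box_filter(sal)                              # light smoothing of the map
    return sal / sal.max()

rng = np.random.default_rng(0)
img = rng.random((32, 32))
sal = spectral_residual(img)
```

Smoothing the log-amplitude spectrum removes the statistically regular part, so what survives the inverse transform is the "unexpected" content of the image.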
1404.2999
2094493060
A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays a basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy, which processes gist and abstract information of the input, to the bottom level, which processes local details. Inspired by the theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, and is defined as unpredictability from this coarse-to-fine reconstruction. Finally, saliency on each layer of the pyramid is fused into stochastic fixations through a probabilistic model, where attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method achieves results competitive with state-of-the-art models.
Different from our proposal, the conventional practice in fusing saliency at different image scales and feature channels is linear combination. Borji @cite_10 proposed a model that combined a global saliency model, AIM @cite_31 , and a local model @cite_17 @cite_37 through linear addition of normalized maps. Some models learn the linear combination weights for the feature channels. Judd et al. @cite_32 trained a linear SVM on human eye fixation data to optimally combine the activations of several low-, mid- and high-level features. With a similar idea, Zhao and Koch @cite_44 adopted a regression-based approach.
{ "cite_N": [ "@cite_37", "@cite_32", "@cite_44", "@cite_31", "@cite_10", "@cite_17" ], "mid": [ "", "1510835000", "2151900481", "2139047169", "2169632643", "2128272608" ], "abstract": [ "", "For many applications in graphics, design, and human computer interaction, it is essential to understand where humans look in a scene. Where eye tracking devices are not a viable option, models of saliency can be used to predict fixation locations. Most saliency approaches are based on bottom-up computation that does not consider top-down image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data of 15 viewers on 1003 images and use this database as training and testing examples to learn a model of saliency based on low, middle and high-level image features. This large database of eye tracking data is publicly available with this paper.", "Inspired by the primate visual system, computational saliency models decompose visual input into a set of feature maps across spatial scales in a number of pre-specified channels. The outputs of these feature maps are summed to yield the final saliency map. Here we use a least square technique to learn the weights associated with these maps from subjects freely fixating natural scenes drawn from four recent eye-tracking data sets. Depending on the data set, the weights can be quite different, with the face and orientation channels usually more important than color and intensity channels. Inter-subject differences are negligible. We also model a bias toward fixating at the center of images and consider both time-varying and constant factors that contribute to this bias. To compensate for the inadequacy of the standard method to judge performance (area under the ROC curve), we use two other metrics to comprehensively assess performance. Although our model retains the basic structure of the standard saliency model, it outperforms several state-of-the-art saliency algorithms. Furthermore, the simple structure makes the results applicable to numerous studies in psychophysics and physiology and leads to an extremely easy implementation for real-world applications.", "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.", "We introduce a saliency model based on two key ideas. The first one is considering local and global image patch rarities as two complementary processes. The second one is based on our observation that for different images, one of the RGB and Lab color spaces outperforms the other in saliency detection. We propose a framework that measures patch rarities in each color space and combines them in a final map. For each color channel, first, the input image is partitioned into non-overlapping patches and then each patch is represented by a vector of coefficients that linearly reconstruct it from a learned dictionary of patches from natural scenes. Next, two measures of saliency (Local and Global) are calculated and fused to indicate saliency of each patch. Local saliency is distinctiveness of a patch from its surrounding patches. Global saliency is the inverse of a patch's probability of happening over the entire image. The final saliency map is built by normalizing and fusing local and global saliency maps of all channels from both color systems. Extensive evaluation over four benchmark eye-tracking datasets shows the significant advantage of our approach over 10 state-of-the-art saliency models.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail." ] }
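The learned linear combination of feature channels (as in Judd et al. and Zhao and Koch) can be illustrated with ordinary least squares standing in for their SVM/regression pipelines — a deliberate simplification. The function name and the synthetic target below are invented for the sketch; on this target the recovered weights match the mixing weights exactly.

```python
import numpy as np

def learn_combination_weights(feature_maps, fixation_map):
    """Least-squares weights for linearly combining z-score-normalized
    feature maps so that the sum approximates a target fixation map."""
    cols = [((m - m.mean()) / (m.std() + 1e-12)).ravel() for m in feature_maps]
    X = np.stack(cols, axis=1)                    # one column per channel
    w, *_ = np.linalg.lstsq(X, fixation_map.ravel(), rcond=None)
    return w

rng = np.random.default_rng(0)
maps = [rng.random((8, 8)) for _ in range(3)]
# synthetic "ground truth": 0.9 * channel 0 + 0.1 * channel 2 (normalized)
norm = lambda m: (m - m.mean()) / m.std()
target = 0.9 * norm(maps[0]) + 0.1 * norm(maps[2])
w = learn_combination_weights(maps, target)
```

Normalizing each map before fitting matters: otherwise channels with larger dynamic range would dominate regardless of how informative they are.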
1404.2999
2094493060
A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays a basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy, which processes gist and abstract information of the input, to the bottom level, which processes local details. Inspired by the theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, and is defined as unpredictability from this coarse-to-fine reconstruction. Finally, saliency on each layer of the pyramid is fused into stochastic fixations through a probabilistic model, where attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method achieves results competitive with state-of-the-art models.
Our model is characterized by a top-down flow of information. However, it differs from most existing saliency detection models that incorporate top-down components, such as @cite_34 @cite_19 @cite_24 @cite_41 , in two aspects. First, a biased prior (e.g., context clues, object features, task-related factors) is often needed in those models, serving as the goal of top-down modulation, which is not necessary in our model. Second, the hierarchical structure of the visual cortex is not considered in those models, but plays a significant role in our model.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_34", "@cite_41" ], "mid": [ "2133589685", "2106848651", "2030031014", "1996326832" ], "abstract": [ "We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model’s bottom-up saliency maps perform as well as or better than existing algorithms in predicting people’s fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters.", "Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach of attentional guidance by global scene context. 
The model comprises 2 parallel pathways; one pathway computes local features (saliency) and the other computes global (scene-centered) features. The contextual guidance model of attention combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.", "As you drive into the centre of town, cars and trucks approach from several directions, and pedestrians swarm into the intersection. The wind blows a newspaper into the gutter and a pigeon does something unexpected on your windshield. This would be a demanding and stressful situation, but you would probably make it to the other side of town without mishap. Why is this situation taxing, and how do you cope?", "In this paper, we study the salient object detection problem for images. We formulate this problem as a binary labeling task where we separate the salient object from the background. We propose a set of novel features, including multiscale contrast, center-surround histogram, and color spatial distribution, to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. Further, we extend the proposed approach to detect a salient object from sequential images by introducing the dynamic salient features. We collected a large image database containing tens of thousands of carefully labeled images by multiple users and a video segment database, and conducted a set of experiments over them to demonstrate the effectiveness of the proposed approach." ] }
1404.2999
2094493060
A number of psychological and physiological findings suggest that early visual attention works in a coarse-to-fine way, which lays the basis for the reverse hierarchy theory (RHT). This theory states that attention propagates from the top level of the visual hierarchy, which processes the gist and abstract information of the input, to the bottom level, which processes local details. Inspired by this theory, we develop a computational model for saliency detection in images. First, the original image is downsampled to different scales to constitute a pyramid. Then, saliency on each layer is obtained by image super-resolution reconstruction from the layer above, and is defined as the unpredictability of this coarse-to-fine reconstruction. Finally, saliency on all layers of the pyramid is fused into stochastic fixations through a probabilistic model, in which attention initiates from the top layer and propagates downward through the pyramid. Extensive experiments on two standard eye-tracking datasets show that the proposed method achieves results competitive with state-of-the-art models.
Nevertheless, a few preliminary studies have tried to make use of hierarchical structure for saliency detection and attention modeling. One such model was the Selective Tuning Model @cite_20 , a biologically plausible neural network that modeled visual attention as a top-down hierarchy of winner-take-all processes among units in each visual layer. A recent study @cite_42 used a hierarchical structure to combine multi-scale saliency, with a hierarchical inference procedure that enforces the saliency of a region to be consistent across different layers.
{ "cite_N": [ "@cite_42", "@cite_20" ], "mid": [ "2002781701", "2089597841" ], "abstract": [ "When dealing with objects with complex structures, saliency detection confronts a critical problem - namely that detection accuracy could be adversely affected if salient foreground or background in an image contains small-scale high-contrast patterns. This issue is common in natural images and forms a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. The final saliency map is produced in a hierarchical model. Different from varying patch sizes or downsizing images, our scale-based region handling is by finding saliency values optimally in a tree model. Our approach improves saliency detection on many images that cannot be handled well traditionally. A new dataset is also constructed.", "A model for aspects of visual attention based on the concept of selective tuning is presented. It provides for a solution to the problems of selection in an image, information routing through the visual processing hierarchy and task-specific attentional bias. The central thesis is that attention acts to optimize the search procedure inherent in a solution to vision. It does so by selectively tuning the visual processing network which is accomplished by a top-down hierarchy of winner-take-all processes embedded within the visual processing pyramid. Comparisons to other major computational models of attention and to the relevant neurobiology are included in detail throughout the paper. The model has been implemented; several examples of its performance are shown. This model is a hypothesis for primate visual attention, but it also outperforms existing computational solutions for attention in machine vision and is highly appropriate to solving the problem in a robot vision system." ] }
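The coarse-to-fine idea recurring in these saliency records — score each pyramid layer by how poorly it is predicted from the layer above, then propagate the residual downward — can be sketched in a few lines. This is a hedged illustration only: block averaging and nearest-neighbour upsampling stand in for the paper's super-resolution reconstruction, and `coarse_to_fine_saliency` is an illustrative name, not the authors' code.

```python
import numpy as np

def downsample(img):
    # Halve resolution by 2x2 block averaging (trims odd borders).
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling back to double resolution.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def coarse_to_fine_saliency(img, levels=3):
    # Build a pyramid, then score each pixel by how poorly it is
    # predicted from the layer above (reconstruction residual),
    # and accumulate the residuals at full resolution.
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    saliency = np.zeros_like(img, dtype=float)
    for lvl in range(levels - 1):
        fine, coarse = pyramid[lvl], pyramid[lvl + 1]
        recon = upsample(coarse)
        residual = np.abs(fine[:recon.shape[0], :recon.shape[1]] - recon)
        for _ in range(lvl):  # bring the residual back to full resolution
            residual = upsample(residual)
        saliency[:residual.shape[0], :residual.shape[1]] += residual
    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency
```

A lone bright pixel in a flat image is maximally "unpredictable" from the coarse layer and so receives the highest score, while a uniform image yields zero saliency everywhere.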
1404.2677
2949487840
We study the problem of designing a data structure that reports the positions of the distinct @math -majorities within any range of an array @math , without storing @math . A @math -majority in a range @math , for @math , is an element that occurs more than @math times in @math . We show that @math bits are necessary for any data structure able just to count the number of distinct @math -majorities in any range. Then, we design a structure using @math bits that returns one position of each @math -majority of @math in @math time, on a RAM machine with word size @math (it can output any further position where each @math -majority occurs in @math additional time). Finally, we show how to remove a @math factor from the time by adding @math bits of space to the structure.
Range @math -majority queries were introduced by Karpinski and Nekrich @cite_19 , who presented an @math -word structure with @math query time. @cite_0 improved the space (in words) and query time to @math and @math , respectively. @cite_3 presented another trade-off, where the space is @math bits and the query time is @math . Here @math denotes the empirical entropy of the distribution of elements in @math (we use @math to denote the base-2 logarithm). The best current result in general is by @cite_9 , where the space is @math words and the query time is @math . All these results assume that @math is fixed at construction time.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_9", "@cite_3" ], "mid": [ "2156839950", "2951951354", "2949155877", "6548167" ], "abstract": [ "Given an array A of size n, we consider the problem of answering range majority queries: given a query range [i..j] where 1=", "We study a new variant of colored orthogonal range searching problem: given a query rectangle @math all colors @math , such that at least a fraction @math of all points in @math are of color @math , must be reported. We describe several data structures for that problem that use pseudo-linear space and answer queries in poly-logarithmic time.", "Karpinski and Nekrich (2008) introduced the problem of parameterized range majority, which asks to preprocess a string of length @math such that, given the endpoints of a range, one can quickly find all the distinct elements whose relative frequencies in that range are more than a threshold @math . Subsequent authors have reduced their time and space bounds such that, when @math is given at preprocessing time, we need either @math space and optimal @math query time or linear space and @math query time, where @math is the alphabet size. In this paper we give the first linear-space solution with optimal @math query time. For the case when @math is given at query time, we significantly improve previous bounds, achieving either @math space and optimal @math query time or compressed space and @math query time. Along the way, we consider the complementary problem of parameterized range minority that was recently introduced by (2012), who achieved linear space and @math query time even for variable @math . We improve their solution to use either nearly optimally compressed space with no slowdown, or optimally compressed space with nearly no slowdown. 
Some of our intermediate results, such as density-sensitive query time for one-dimensional range counting, may be of independent interest.", "We show how to store a compressed two-dimensional array such that, if we are asked for the elements with high relative frequency in a range, we can quickly return a short list of candidates that includes them. More specifically, given an m × n array A and a fraction α > 0, we can store A in O(mn(H + 1)log2(1 α) bits, where H is the entropy of the elements' distribution in A, such that later, given a rectangular range in A and a fraction β ≥ α, in O(1 β) time we can return a list of O(1 β) distinct array elements that includes all the elements that have relative frequency at least β in that range. We do not verify that the elements in the list have relative frequency at least β, so the list may contain false positives. In the case when m = 1, i.e., A is a string, we improve this space bound by a factor of log(1 α), and explore a spacetime trade off for verifying the frequency of the elements in the list. This leads to an O(n min(log(1 α), H +1) log n) bit data structure for strings that, in O(1 β) time, can return the O(1 β) elements that have relative frequency at least β in a given range, without false positives, for β ≥ α." ] }
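As a concrete reference for the problem these structures solve, here is a brute-force @math -majority query. It is only a definitional baseline (a linear scan per query, no preprocessing), assuming 0-based inclusive ranges — the opposite trade-off to the succinct structures discussed in the surrounding records.

```python
from collections import Counter

def range_tau_majorities(A, i, j, tau):
    """Return, in sorted order, every distinct element occurring more
    than tau * (j - i + 1) times in A[i..j] (0-based, inclusive).

    Brute force: O(j - i) time per query with no preprocessing."""
    window = A[i:j + 1]
    threshold = tau * len(window)
    return sorted(x for x, c in Counter(window).items() if c > threshold)
```

For example, in `[1, 1, 2, 1, 3, 1, 2, 2]` the full range has two 1/3-majorities (1 and 2) but no 1/2-majority, since no element occurs more than 4 times.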
1404.2677
2949487840
We study the problem of designing a data structure that reports the positions of the distinct @math -majorities within any range of an array @math , without storing @math . A @math -majority in a range @math , for @math , is an element that occurs more than @math times in @math . We show that @math bits are necessary for any data structure able just to count the number of distinct @math -majorities in any range. Then, we design a structure using @math bits that returns one position of each @math -majority of @math in @math time, on a RAM machine with word size @math (it can output any further position where each @math -majority occurs in @math additional time). Finally, we show how to remove a @math factor from the time by adding @math bits of space to the structure.
For the case where @math is also part of the query input, data structures using @math and @math words of space were proposed by @cite_3 and @cite_24 , respectively. Very recently, @cite_9 brought the space occupancy down to @math words, where @math is the number of distinct elements in @math . The query time is @math in all cases. @cite_9 also presented a compressed solution using @math bits, with slightly higher query time. All these solutions include a (sometimes compressed) representation of @math , so they are not encodings. As far as we know, ours is the first encoding for this problem. For further reading, we recommend the recent survey by Skala @cite_2 .
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_3", "@cite_2" ], "mid": [ "2155992938", "2949155877", "6548167", "" ], "abstract": [ "We consider range queries that search for low-frequency elements (least frequent elements and @math ?-minorities) in arrays. An @math ?-minority of a query range has multiplicity no greater than an @math ? fraction of the elements in the range. Our data structure for the least frequent element range query problem requires @math O(n) space, @math O(n3 2) preprocessing time, and @math O(n) query time. A reduction from boolean matrix multiplication to this problem shows the hardness of simultaneous improvements in both preprocessing time and query time. Our data structure for the @math ?-minority range query problem requires @math O(n) space, supports queries in @math O(1 ?) time, and allows @math ? to be specified at query time.", "Karpinski and Nekrich (2008) introduced the problem of parameterized range majority, which asks to preprocess a string of length @math such that, given the endpoints of a range, one can quickly find all the distinct elements whose relative frequencies in that range are more than a threshold @math . Subsequent authors have reduced their time and space bounds such that, when @math is given at preprocessing time, we need either @math space and optimal @math query time or linear space and @math query time, where @math is the alphabet size. In this paper we give the first linear-space solution with optimal @math query time. For the case when @math is given at query time, we significantly improve previous bounds, achieving either @math space and optimal @math query time or compressed space and @math query time. Along the way, we consider the complementary problem of parameterized range minority that was recently introduced by (2012), who achieved linear space and @math query time even for variable @math . 
We improve their solution to use either nearly optimally compressed space with no slowdown, or optimally compressed space with nearly no slowdown. Some of our intermediate results, such as density-sensitive query time for one-dimensional range counting, may be of independent interest.", "We show how to store a compressed two-dimensional array such that, if we are asked for the elements with high relative frequency in a range, we can quickly return a short list of candidates that includes them. More specifically, given an m × n array A and a fraction α > 0, we can store A in O(mn(H + 1)log2(1 α) bits, where H is the entropy of the elements' distribution in A, such that later, given a rectangular range in A and a fraction β ≥ α, in O(1 β) time we can return a list of O(1 β) distinct array elements that includes all the elements that have relative frequency at least β in that range. We do not verify that the elements in the list have relative frequency at least β, so the list may contain false positives. In the case when m = 1, i.e., A is a string, we improve this space bound by a factor of log(1 α), and explore a spacetime trade off for verifying the frequency of the elements in the list. This leads to an O(n min(log(1 α), H +1) log n) bit data structure for strings that, in O(1 β) time, can return the O(1 β) elements that have relative frequency at least β in a given range, without false positives, for β ≥ α.", "" ] }
1404.2677
2949487840
We study the problem of designing a data structure that reports the positions of the distinct @math -majorities within any range of an array @math , without storing @math . A @math -majority in a range @math , for @math , is an element that occurs more than @math times in @math . We show that @math bits are necessary for any data structure able just to count the number of distinct @math -majorities in any range. Then, we design a structure using @math bits that returns one position of each @math -majority of @math in @math time, on a RAM machine with word size @math (it can output any further position where each @math -majority occurs in @math additional time). Finally, we show how to remove a @math factor from the time by adding @math bits of space to the structure.
The time can be improved to @math on a RAM machine with @math -bit words by sampling, for each increasing interval of @math of length more than @math , one value out of every @math . Predecessor data structures are built on the samples of each interval, taking at most @math bits in total. Then we first run a predecessor query on the samples, which takes @math time @cite_21 , and finish with an @math -time binary search between the resulting consecutive samples.
{ "cite_N": [ "@cite_21" ], "mid": [ "2949195484" ], "abstract": [ "We develop a new technique for proving cell-probe lower bounds for static data structures. Previous lower bounds used a reduction to communication games, which was known not to be tight by counting arguments. We give the first lower bound for an explicit problem which breaks this communication complexity barrier. In addition, our bounds give the first separation between polynomial and near linear space. Such a separation is inherently impossible by communication complexity. Using our lower bound technique and new upper bound constructions, we obtain tight bounds for searching predecessors among a static set of integers. Given a set Y of n integers of l bits each, the goal is to efficiently find predecessor(x) = max y in Y | y <= x , by representing Y on a RAM using space S. In external memory, it follows that the optimal strategy is to use either standard B-trees, or a RAM algorithm ignoring the larger block size. In the important case of l = c*lg n, for c>1 (i.e. polynomial universes), and near linear space (such as S = n*poly(lg n)), the optimal search time is Theta(lg l). Thus, our lower bound implies the surprising conclusion that van Emde Boas' classic data structure from [FOCS'75] is optimal in this case. Note that for space n^ 1+eps , a running time of O(lg l lglg l) was given by Beame and Fich [STOC'99]." ] }
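The sampling trick in the last record — keep one value out of every @math in a sorted run, answer a predecessor query on the samples, then binary-search between two consecutive samples — can be sketched as follows. This is an illustrative simplification: `k` stands in for the @math sampling rate, and the predecessor query on the samples is done with a plain binary search rather than the optimal structure of @cite_21 .

```python
from bisect import bisect_right

class SampledPredecessor:
    """Predecessor search on a sorted list via sampling.

    Keeps every k-th element as a sample. A query first locates the
    right bucket with a predecessor search on the samples, then
    binary-searches the at most k elements of that bucket."""

    def __init__(self, sorted_vals, k=64):
        self.vals = sorted_vals
        self.k = k
        self.samples = sorted_vals[::k]  # one value out of every k

    def predecessor(self, x):
        # Returns max v in vals with v <= x, or None if none exists.
        b = bisect_right(self.samples, x) - 1  # predecessor among samples
        if b < 0:
            return None  # x is smaller than every stored value
        lo = b * self.k
        hi = min(lo + self.k, len(self.vals))
        # The answer lies in the bucket between samples[b] and
        # samples[b+1]; binary-search inside that bucket only.
        return self.vals[bisect_right(self.vals, x, lo, hi) - 1]
```

Because `samples[b] = vals[b*k] <= x < samples[b+1]` whenever bucket `b` is chosen, the inner search never leaves the bucket, so each query costs one search over the samples plus one over at most `k` values.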