| aid (string, 9–15 chars) | mid (string, 7–10 chars) | abstract (string, 78–2.56k chars) | related_work (string, 92–1.77k chars) | ref_abstract (dict) |
|---|---|---|---|---|
1202.1567
|
2117613335
|
To save time and money, businesses and individuals have begun outsourcing their data and computations to cloud computing services. These entities would, however, like to ensure that the queries they request from the cloud services are being computed correctly. In this paper, we use the principles of economics and competition to vastly reduce the complexity of query verification on outsourced data. We consider two cases: First, we consider the scenario where multiple non-colluding data outsourcing services exist, and then we consider the case where only a single outsourcing service exists. Using a game theoretic model, we show that given the proper incentive structure, we can effectively deter dishonest behavior on the part of the data outsourcing services with very few computational and monetary resources. We prove that the incentive for an outsourcing service to cheat can be reduced to zero. Finally, we show that a simple verification method can achieve this reduction through extensive experimental evaluation.
|
Many of these schemes require complex cryptographic protocols. Some encrypt the data itself, relying on homomorphic schemes to allow the cloud provider to perform the computation @cite_14 @cite_7. A homomorphic operation is always less efficient than the same operation on plaintext, making the overhead of these protocols greater by orders of magnitude. Others, such as @cite_19, rely on relatively simple cryptographic primitives, such as secure hash functions. To maintain integrity, our scheme also uses hash functions. Our verification framework is, however, simpler than these, and can be used to improve the expected runtime of any of these verification schemes.
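The sketch below illustrates the kind of hash-based integrity check alluded to above. It is a generic illustration under assumed inputs, not the verification protocol of this paper or of the cited schemes; the function and variable names are hypothetical.

```python
import hashlib
import json

def digest(result_rows):
    """Canonical SHA-256 digest of a query result (illustrative only)."""
    canonical = json.dumps(sorted(result_rows), separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

# The client recomputes (or spot-checks) the query on a trusted copy of the data
# and compares digests; a mismatch exposes an incorrect or incomplete answer.
trusted_result = [("alice", 42), ("bob", 17)]
provider_result = [("alice", 42), ("bob", 17)]

assert digest(provider_result) == digest(trusted_result), "verification failed"
```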
|
{
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_7"
],
"mid": [
"1566967335",
"1557386445",
"2942812477"
],
"abstract": [
"In this paper we propose and analyze a method for proofs of actual query execution in an outsourced database framework, in which a client outsources its data management needs to a specialized provider. The solution is not limited to simple selection predicate queries but handles arbitrary query types. While this work focuses mainly on read-only, compute-intensive (e.g. data-mining) queries, it also provides preliminary mechanisms for handling data updates (at additional costs). We introduce query execution proofs; for each executed batch of queries the database service provider is required to provide a strong cryptographic proof that provides assurance that the queries were actually executed correctly over their entire target data set. We implement a proof of concept and present experimental results in a real-world data mining application, proving the deployment feasibility of our solution. We analyze the solution and show that its overheads are reasonable and are far outweighed by the added security benefits. For example an assurance level of over 95 can be achieved with less than 25 execution time overhead.",
"We introduce and formalize the notion of Verifiable Computation, which enables a computationally weak client to \"outsource\" the computation of a function F on various dynamically-chosen inputs x1, ...,xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The primary constraint is that the verification of the proof should require substantially less computational effort than computing F(i) from scratch. We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(mċpoly(λ)) time, where m is the bit-length of the output of F, and λ is a security parameter. The protocol requires a one-time pre-processing stage by the client which takes O(|C|ċpoly(λ)) time, where C is the smallest known Boolean circuit computing F. Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values.",
"We are interested in the integrity of the query results from an outsourced database service provider. Alice passes a set D of d-dimensional points, together with some authentication tag T, to an untrusted service provider Bob. Later, Alice issues some query over D to Bob, and Bob should produce the query result and a proof based on D and T. Alice wants to verify the integrity of the query result with the help of the proof, using only the private key. In this paper, we consider aggregate query conditional on multidimensional range selection. In its basic form, a query asks for the total number of data points within a d-dimensional range. We are concerned about the number of communication bits required and the size of the tag T. We give a scheme that requires O(d log N) communication bits to authenticate an aggregate count query conditional on d-dimensional range selection, where N is the number of points in the dataset. The security of our scheme relies on Generalized Knowledge of Exponent Assumption proposed by Wu and Stinson [1]. The low communication bandwidth is achieved due to a new functional encryption scheme, which exploits a special property of BBG HIBE scheme [2]. Besides counting, our scheme can be extended to support summing, finding of the minimum and usual (nonaggregate) range selection with similar complexity, and the proposed approach potentially can be applied to other queries by using suitable functional encryption schemes."
]
}
|
1202.1567
|
2117613335
|
To save time and money, businesses and individuals have begun outsourcing their data and computations to cloud computing services. These entities would, however, like to ensure that the queries they request from the cloud services are being computed correctly. In this paper, we use the principles of economics and competition to vastly reduce the complexity of query verification on outsourced data. We consider two cases: First, we consider the scenario where multiple non-colluding data outsourcing services exist, and then we consider the case where only a single outsourcing service exists. Using a game theoretic model, we show that given the proper incentive structure, we can effectively deter dishonest behavior on the part of the data outsourcing services with very few computational and monetary resources. We prove that the incentive for an outsourcing service to cheat can be reduced to zero. Finally, we show that a simple verification method can achieve this reduction through extensive experimental evaluation.
|
The work of Canetti, Riva, and Rothblum @cite_6 also makes use of multiple outsourcing services for query verification. However, they use all of the services all of the time, and require a logarithmic number of rounds to ensure verifiability of the computation. In addition, they assume that at least one of the cloud providers is in fact honest. We, in contrast, do not assume that any provider is honest, merely that each is rational (meaning that the provider wishes to maximize its profits), and we only use additional providers a fraction of the time. Moreover, we require only one round of computation.
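As an illustrative formalization of the incentive argument (not the paper's exact game-theoretic model; g, p, and F are assumed symbols for the cheating gain, audit probability, and penalty):

```latex
% Expected gain from cheating when each query is cross-checked against an
% independent provider with probability p, an undetected cheat yields an extra
% profit g, and a detected cheat forfeits a penalty F:
\mathbb{E}[\text{gain from cheating}] \;=\; g - pF \;\le\; 0
\quad\Longleftrightarrow\quad
p \;\ge\; \frac{g}{F}.
% A small audit probability therefore suffices whenever the penalty F is large
% relative to the one-shot gain g, driving the incentive to cheat to zero.
```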
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2138736975"
],
"abstract": [
"The current move to Cloud Computing raises the need for verifiable delegation of computations, where a weak client delegates his computation to a powerful server, while maintaining the ability to verify that the result is correct. Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: A protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. The protocol is set in terms of Turing Machines but can be adapted to other computation models. An adaptation of the protocol for the X86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with nowadays clouds, and is efficient both for the servers and for the client."
]
}
|
1202.1367
|
2171634015
|
The broad adoption of the web as a communication medium has made it possible to study social behavior at a new scale. With social media networks such as Twitter, we can collect large data sets of online discourse. Social science researchers and journalists, however, may not have tools available to make sense of large amounts of data or of the structure of large social networks. In this paper, we describe our recent extensions to Truthy, a system for collecting and analyzing political discourse on Twitter. We introduce several new analytical perspectives on online discourse with the goal of facilitating collaboration between individuals in the computational and social sciences. The design decisions described in this article are motivated by real-world use cases developed in collaboration with colleagues at the Indiana University School of Journalism.
|
TwitInfo (http://www.twitinfo.csail.mit.edu) is a website presenting research on network analysis and visualizations of Twitter data. Its content is collected in automatically identified "bursts" of tweets @cite_4. TwitInfo also calculates the top tweeted URLs in each burst, and plots each tweet on a map, colored according to sentiment. TwitInfo focuses on specific memes, identified by the researchers, and is thus somewhat limited for users who might wish to investigate arbitrary topics.
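TwitInfo's streaming peak detection is described only at a high level in the cited abstract; the snippet below is a simple illustrative threshold detector over per-minute tweet counts, not the actual TwitInfo algorithm, and the series is synthetic.

```python
from statistics import mean, stdev

def detect_bursts(counts, window=30, k=3.0):
    """Flag minutes whose tweet count exceeds the recent mean by k standard
    deviations -- a simple stand-in for streaming peak detection."""
    bursts = []
    for i in range(window, len(counts)):
        recent = counts[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if counts[i] > mu + k * sigma:
            bursts.append(i)
    return bursts

# Noisy baseline of per-minute counts with one spike near the end.
series = [10, 12, 9, 11, 10, 13, 8, 11, 10, 12] * 4 + [11, 9, 10, 12, 10, 90, 10, 11]
print(detect_bursts(series))  # expected to flag only the spike at index 45
```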
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2128721751"
],
"abstract": [
"Microblogs are a tremendous repository of user-generated content about world events. However, for people trying to understand events by querying services like Twitter, a chronological log of posts makes it very difficult to get a detailed understanding of an event. In this paper, we present TwitInfo, a system for visualizing and summarizing events on Twitter. TwitInfo allows users to browse a large collection of tweets using a timeline-based display that highlights peaks of high tweet activity. A novel streaming algorithm automatically discovers these peaks and labels them meaningfully using text from the tweets. Users can drill down to subevents, and explore further via geolocation, sentiment, and popular URLs. We contribute a recall-normalized aggregate sentiment visualization to produce more honest sentiment overviews. An evaluation of the system revealed that users were able to reconstruct meaningful summaries of events in a small amount of time. An interview with a Pulitzer Prize-winning journalist suggested that the system would be especially useful for understanding a long-running event and for identifying eyewitnesses. Quantitatively, our system can identify 80-100 of manually labeled peaks, facilitating a relatively complete view of each event studied."
]
}
|
1202.0457
|
2952933963
|
We study the exact and optimal repair of multiple failures in codes for distributed storage. More particularly, we examine the use of interference alignment to build exact scalar minimum storage coordinated regenerating codes (MSCR). We show that it is possible to build codes for the case of k = 2 and d > k by aligning interferences independently but that this technique cannot be applied as soon as k > 2 and d > k. Our results also apply to adaptive regenerating codes.
|
Among all possible regenerating codes, most of the studies have focused on the minimum storage point. For MSR codes that are able to repair single failures (@math), studies have heavily relied on interference alignment, first applied to @math in @cite_18. The best known scalar codes either use interference alignment @cite_6 to allow @math, or use the product-matrix framework @cite_11 to allow @math. However, scalar codes cannot be used to achieve @math, as shown in @cite_14.
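For orientation, the classical single-failure storage/repair-bandwidth trade-off from the regenerating-codes literature is recalled below (a file of size \(\mathcal{M}\) stored on \(n\) nodes, decodable from any \(k\), with a failed node repaired from any \(d\) helpers). This is standard background rather than a result of the paper above; the coordinated (MSCR/MBCR) setting discussed here adds a coordination parameter for the number of simultaneous failures that is not shown.

```latex
% Minimum Storage Regenerating (MSR) point:
\alpha_{\mathrm{MSR}} = \frac{\mathcal{M}}{k},
\qquad
\gamma_{\mathrm{MSR}} = \frac{\mathcal{M}\,d}{k\,(d-k+1)}.

% Minimum Bandwidth Regenerating (MBR) point:
\alpha_{\mathrm{MBR}} = \gamma_{\mathrm{MBR}} = \frac{2\,\mathcal{M}\,d}{k\,(2d-k+1)}.
```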
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_6",
"@cite_11"
],
"mid": [
"",
"2056826630",
"2126295689",
"2150777202"
],
"abstract": [
"",
"Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary k of n nodes. However regenerating codes possess in addition, the ability to repair a failed node by connecting to any arbitrary d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed as exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n - 1 ≥ 2k - 1 termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact repair of systematic nodes, (b) proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d <; 2k - 3 in the absence of symbol extension, and (d) the construction, also explicit, of high-rate MSR codes for d = k + 1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA and IA is also a crucial component to the nonexistence proof for d <; 2k - 3. To the best of our knowledge, the constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound.",
"The high repair cost of (n, k) Maximum Distance Separable (MDS) erasure codes has recently motivated a new class of MDS codes, called Repair MDS codes, that can significantly reduce repair bandwidth over conventional MDS codes. In this paper, we describe (n, k, d) Exact-Repair MDS codes, which allow for any failed node to be repaired exactly with access to d survivor nodes, where k ≤ d ≤ n-1. We construct Exact-Repair MDS codes that are optimal in repair bandwidth for the cases of: (α) k n ≤ 1 2 and d ≥ 2k - 11; (b) k ≤ 3. Our codes are deterministic and require a finite-field size of at most 2(n - k). Our constructive codes are based on interference alignment techniques.",
"Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n=d+1 . In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k-2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network, to be chosen independent of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n=d+1, k, d ≥ 2k-1]."
]
}
|
1202.0457
|
2952933963
|
We study the exact and optimal repair of multiple failures in codes for distributed storage. More particularly, we examine the use of interference alignment to build exact scalar minimum storage coordinated regenerating codes (MSCR). We show that it is possible to build codes for the case of k = 2 and d > k by aligning interferences independently but that this technique cannot be applied as soon as k > 2 and d > k. Our results also apply to adaptive regenerating codes.
|
For the case of multiple failures @math, only scalar MSCR codes (@math) have been considered. Previous work @cite_19 considered only the degenerate case of @math, where the costs of coordinated (cooperative) regenerating codes are equivalent to the costs of erasure correcting codes with lazy repairs. In that setting, where @math, the repair boils down to repairing @math independent erasure correcting codes in parallel (i.e., no network coding is needed). The work we present in this paper is the first to consider a non-degenerate case @math and to apply interference alignment when multiple failures are repaired simultaneously, leading to the codes we define in , which are restricted to @math. Furthermore, in , we show that independent interference alignment with scalar codes is not sufficient for building exact MSCR codes when @math. With respect to the MBR point, the best known constructions @cite_11 are scalar codes based on the product-matrix framework and allow repair for any value of @math. Some interesting alternative codes @cite_1 @cite_15 allow repair by transfer (i.e., without performing any linear operation) and rely on fractional repetition codes.
|
{
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_1",
"@cite_11"
],
"mid": [
"2111915261",
"",
"2953110495",
"2150777202"
],
"abstract": [
"When there are multiple storage node failures in distributed storage system, regenerating them individually is suboptimal as far as repair bandwidth minimization is concerned. The tradeoff between storage and repair bandwidth is derived in the case where data exchange among the newcomers is enabled. The tradeoff curve with cooperation is strictly better than the one without cooperation. An explicit construction of cooperative regenerating code is given.",
"",
"Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. Also of interest is minimization of the bandwidth required to repair the system following a node failure. In a recent paper, characterize the tradeoff between the repair bandwidth and the amount of data stored per node. They also prove the existence of regenerating codes that achieve this tradeoff. In this paper, we introduce Exact Regenerating Codes, which are regenerating codes possessing the additional property of being able to duplicate the data stored at a failed node. Such codes require low processing and communication overheads, making the system practical and easy to maintain. Explicit construction of exact regenerating codes is provided for the minimum bandwidth point on the storage-repair bandwidth tradeoff, relevant to distributed-mail-server applications. A subspace based approach is provided and shown to yield necessary and sufficient conditions on a linear code to possess the exact regeneration property as well as prove the uniqueness of our construction. Also included in the paper, is an explicit construction of regenerating codes for the minimum storage point for parameters relevant to storage in peer-to-peer systems. This construction supports a variable number of nodes and can handle multiple, simultaneous node failures. All constructions given in the paper are of low complexity, requiring low field size in particular.",
"Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n=d+1 . In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k-2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network, to be chosen independent of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n=d+1, k, d ≥ 2k-1]."
]
}
|
1202.0457
|
2952933963
|
We study the exact and optimal repair of multiple failures in codes for distributed storage. More particularly, we examine the use of interference alignment to build exact scalar minimum storage coordinated regenerating codes (MSCR). We show that it is possible to build codes for the case of k = 2 and d > k by aligning interferences independently but that this technique cannot be applied as soon as k > 2 and d > k. Our results also apply to adaptive regenerating codes.
|
When multiple failures are repaired simultaneously, the only known MBCR codes again consider the case of @math and map to repairing @math independent erasure correcting codes @cite_3. The existence of MBCR codes when @math remains an open question.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2166253834"
],
"abstract": [
"In order to provide high data reliability, distributed storage systems disperse data with redundancy to multiple storage nodes. Regenerating codes is a new class of erasure codes to introduce redundancy for the purpose of improving the data repair performance in distributed storage. Most of the studies on regenerating codes focus on the single-failure recovery, but it is not uncommon to see two or more node failures at the same time in large storage networks. To exploit the opportunity of repairing multiple failed nodes simultaneously, a cooperative repair mechanism, in the sense that the nodes to be repaired can exchange data among themselves, is investigated. A lower bound on the repair-bandwidth for cooperative repair is derived and a construction of a family of exact cooperative regenerating codes matching this lower bound is presented."
]
}
|
1202.0031
|
2951918546
|
Online social media provide multiple ways to find interesting content. One important method is highlighting content recommended by a user's friends. We examine this process on one such site, the news aggregator Digg. With a stochastic model of user behavior, we distinguish the effects of content visibility and interestingness to users. We find a wide range of interest and distinguish stories primarily of interest to a user's friends from those of interest to the entire user community. We show how this model predicts a story's eventual popularity from users' early reactions to it, and estimate the prediction reliability. This modeling framework can help evaluate alternative design choices for displaying content on the site.
|
Models of social dynamics can help explain and predict the popularity of online content. The broad distributions of popularity and user activity on many social media sites can arise from simple macroscopic dynamical rules @cite_7. A phenomenological model of collective attention on Digg describes the distribution of final votes for promoted stories through a decay of interest in news articles @cite_8. Stochastic models @cite_14 @cite_23 offer an alternative explanation for the vote distribution. Rather than novelty decay, they explain the vote distribution by the combination of variation in the stories' inherent interest to users and effects of the user interface, specifically the decay in visibility as a story moves to subsequent pages. Crane and Sornette @cite_20 found that collective dynamics was linked to the inherent quality of videos on YouTube. From the number of votes received by videos over time, they could separate high-quality videos from junk videos. This study is similar in spirit to our own in exploiting the link between observed popularity and content quality. However, while these studies aggregated data from tens of thousands of individuals, our method focuses instead on the dynamics, modeling how individual behavior contributes to content popularity.
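The Digg voting models cited above combine a story's inherent interestingness with interface-driven visibility decay. The snippet below is a minimal illustrative simulation in that spirit; the rate constants and the exponential visibility decay are assumptions, not the cited models' calibrated form.

```python
import random

def simulate_votes(interestingness, steps=500, browsing_users_per_step=100,
                   visibility_decay=0.99):
    """Accumulate votes when a browsing user (i) sees the story, which becomes
    less likely as it slips down the page, and (ii) finds it interesting."""
    visibility, votes = 1.0, 0
    history = []
    for _ in range(steps):
        for _ in range(browsing_users_per_step):
            if random.random() < visibility and random.random() < interestingness:
                votes += 1
        visibility *= visibility_decay   # story drifts to later pages
        history.append(votes)
    return history

random.seed(0)
print(simulate_votes(0.02)[-1], simulate_votes(0.002)[-1])  # popular vs. niche story
```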
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_23",
"@cite_20"
],
"mid": [
"2130726349",
"2007590415",
"2058465497",
"2963057722",
"2406592771"
],
"abstract": [
"Social media sites underscore the Web's transformation to a participatory medium in which users collaboratively create, evaluate, and distribute information. Innovations in social media have led to social information processing, a new paradigm for interacting with data. The social news aggregator Digg exploits social information processing for document recommendation and rating. Additionally, via mathematical modeling, it's possible to describe how collaborative document rating emerges from the independent decisions users make. Using such a model, the author reproduces observed ratings that actual stories on Digg have received.",
"Online peer production systems have enabled people to coactively create, share, classify, and rate content on an unprecedented scale. This paper describes strong macroscopic regularities in how people contribute to peer production systems, and shows how these regularities arise from simple dynamical rules. First, it is demonstrated that the probability a person stops contributing varies inversely with the number of contributions he has made. This rule leads to a power law distribution for the number of contributions per person in which a small number of very active users make most of the contributions. The rule also implies that the power law exponent is proportional to the effort required to contribute, as justified by the data. Second, the level of activity per topic is shown to follow a lognormal distribution generated by a stochastic reinforcement mechanism. A small number of very popular topics thus accumulate the vast majority of contributions. These trends are demonstrated to hold across hundreds of millions of contributions to four disparate peer production systems of differing scope, interface style, and purpose.",
"The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among 1 million users of an interactive web site, digg.com, devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades.",
"We describe a general stochastic processes-based approach to modeling user-contributory web sites, where users create, rate and share content. These models describe aggregate measures of activity and how they arise from simple models of individual users. This approach provides a tractable method to understand user activity on the web site and how this activity depends on web site design choices, especially the choice of what information about other users’ behaviors is shown to each user. We illustrate this modeling approach in the context of user-created content on the news rating site Digg.",
"With the rise of web 2.0 there is an ever-expanding source of interesting media because of the proliferation of usergenerated content. However, mixed in with this is a large amount of noise that creates a proverbial “needle in the haystack” when searching for relevant content. Although there is hope that the rich network of interwoven metadata may contain enough structure to eventually help sift through this noise, currently many sites serve up only the “most popular” things. Identifying only the most popular items can be useful, but doing so fails to take into account the famous “long tail” behavior of the web—the notion that the collective effect of small, niche interests can outweigh the market share of the few blockbuster (i.e. most-popular) items—thus providing only content that has mass appeal and masking the interests of the idiosyncratic many. YouTube, for example, hosts over 40 million videos— enough content to keep one occupied for more than 200 years. Are there intelligent tools to search through this information-rich environment and identify interesting and relevant content? Is there a way to identify emerging trends or “hot topics” in addition to indexing the long tail for content that has real value?"
]
}
|
1202.0031
|
2951918546
|
Online social media provide multiple ways to find interesting content. One important method is highlighting content recommended by a user's friends. We examine this process on one such site, the news aggregator Digg. With a stochastic model of user behavior, we distinguish the effects of content visibility and interestingness to users. We find a wide range of interest and distinguish stories primarily of interest to a user's friends from those of interest to the entire user community. We show how this model predicts a story's eventual popularity from users' early reactions to it, and estimate the prediction reliability. This modeling framework can help evaluate alternative design choices for displaying content on the site.
|
Statistically significant correlation between early and late popularity of content has been found on Slashdot @cite_4, as well as on Digg and YouTube @cite_16. Specifically, similar to our study, Szabo & Huberman @cite_16 predicted the long-term popularity of stories on Digg. Through a large-scale statistical study of stories promoted to the front page, they were able to predict a story's popularity after 30 days based on its correlation with popularity one hour after promotion. Similarly, Lerman & Hogg @cite_25 predicted the popularity of stories based on their pre-promotion votes. We also quantitatively predict stories' future popularity, but unlike earlier works, we also estimate confidence intervals of these predictions for each story.
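The Szabo & Huberman prediction rests on a strong log-scale correlation between early and final popularity. The sketch below fits that kind of log-linear relation with numpy on synthetic counts; the data and the single-slope form are illustrative assumptions, not their fitted Digg model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (early, final) vote counts with a roughly constant log-scale ratio.
early = rng.integers(5, 200, size=300)
final = early * rng.lognormal(mean=np.log(20), sigma=0.3, size=300)

# Fit log(final) = a * log(early) + b by least squares.
a, b = np.polyfit(np.log(early), np.log(final), deg=1)

def predict_final(early_votes):
    """Extrapolate long-term popularity from votes observed shortly after promotion."""
    return float(np.exp(a * np.log(early_votes) + b))

print(round(a, 2), round(predict_final(50)))
```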
|
{
"cite_N": [
"@cite_16",
"@cite_4",
"@cite_25"
],
"mid": [
"2070366435",
"2150412428",
"2096135266"
],
"abstract": [
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.",
"We perform a statistical analysis of user's reaction time to a new discussion thread in online debates on the popular news site Slashdot. First, we show with Kolmogorov-Smirnov tests that a mixture of two log-normal distributions combined with the circadian rhythm of the community is able to explain with surprising accuracy the reaction time of comments within a discussion thread. Second, this characterization allows to predict intermediate and long-term user behavior with acceptable precision. The prediction method is based on activity-prototypes, which consist of a mixture of two log-normal distributions, and represent the average activity in a particular region of the circadian cycle.",
"Popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both companies that host social media sites and their users. Accurate and timely prediction would enable the companies to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the ever-growing amount of content. Predicting popularity of content in social media, however, is challenging due to the complex interactions among content quality, how the social media site chooses to highlight content, and influence among users. While these factors make it difficult to predict popularity a priori, we show that stochastic models of user behavior on these sites allows predicting popularity based on early user reactions to new content. By incorporating aspects of the web site design, such models improve on predictions based on simply extrapolating from the early votes. We validate this claim on the social news portal Digg using a previously-developed model of social voting based on the Digg user interface."
]
}
|
1202.0031
|
2951918546
|
Online social media provide multiple ways to find interesting content. One important method is highlighting content recommended by a user's friends. We examine this process on one such site, the news aggregator Digg. With a stochastic model of user behavior, we distinguish the effects of content visibility and interestingness to users. We find a wide range of interest and distinguish stories primarily of interest to a user's friends from those of interest to the entire user community. We show how this model predicts a story's eventual popularity from users' early reactions to it, and estimate the prediction reliability. This modeling framework can help evaluate alternative design choices for displaying content on the site.
|
Previous works found social networks to be an important component of information diffusion. Niche-interest content tends to spread mainly along social links in Second Life @cite_15, in blogspace @cite_12, as well as on Digg @cite_3, and does not end up becoming very popular with the general audience. @cite_9 found that social links between like-minded people, rather than causal influence, explained much of the information diffusion observed on a network. Our modeling approach allows us to systematically distinguish users who are linked from those who are not, and to study diffusion separately for each group.
|
{
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_3",
"@cite_12"
],
"mid": [
"2149910108",
"2107260313",
"2042408065",
"2179615269"
],
"abstract": [
"Node characteristics and behaviors are often correlated with the structure of social networks over time. While evidence of this type of assortative mixing and temporal clustering of behaviors among linked nodes is used to support claims of peer influence and social contagion in networks, homophily may also explain such evidence. Here we develop a dynamic matched sample estimation framework to distinguish influence and homophily effects in dynamic networks, and we apply this framework to a global instant messaging network of 27.4 million users, using data on the day-by-day adoption of a mobile service application and users' longitudinal behavioral, demographic, and geographic data. We find that previous methods overestimate peer influence in product adoption decisions in this network by 300–700 , and that homophily explains >50 of the perceived behavioral contagion. These findings and methods are essential to both our understanding of the mechanisms that drive contagions in networks and our knowledge of how to propagate or combat them in domains as diverse as epidemiology, marketing, development economics, and public health.",
"Social influence determines to a large extent what we adopt and when we adopt it. This is just as true in the digital domain as it is in real life, and has become of increasing importance due to the deluge of user-created content on the Internet. In this paper, we present an empirical study of user-to-user content transfer occurring in the context of a time-evolving social network in Second Life, a massively multiplayer virtual world. We identify and model social influence based on the change in adoption rate following the actions of one's friends and find that the social network plays a significant role in the adoption of content. Adoption rates quicken as the number of friends adopting increases and this effect varies with the connectivity of a particular user. We further find that sharing among friends occurs more rapidly than sharing among strangers, but that content that diffuses primarily through social influence tends to have a more limited audience. Finally, we examine the role of individuals, finding that some play a more active role in distributing content than others, but that these influencers are distinct from the early adopters.",
"The social Web is transforming the way information is created and distributed. Blog authoring tools enable users to publish content, while sites such as Digg and Del.icio.us are used to distribute content to a wider audience. With content fast becoming a commodity, interest in using social networks to promote and find content has grown, both on the side of content producers (viral marketing) and consumers (recommendation). Here we study the role of social networks in promoting content on Digg, a social news aggregator that allows users to submit links to and vote on news stories. Digg's goal is to feature the most interesting stories on its front page, and it aggregates opinions of its many users to identify them. Like other social networking sites, Digg allows users to designate other users as \"friends\" and see what stories they found interesting. We studied the spread of interest in news stories submitted to Digg in June 2006. Our results suggest that pattern of the spread of interest in a story on the network is indicative of how popular the story will become. Stories that spread mainly outside of the submitter's neighborhood go on to be very popular, while stories that spread mainly through submitter's social neighborhood prove not to be very popular. This effect is visible already in the early stages of voting, and one can make a prediction about the potential audience of a story simply by analyzing where the initial votes come from.",
"There is considerable interest in developing predictive capabilities for social diffusion processes, for instance to permit early identification of emerging contentious situations, rapid detection of disease outbreaks, or accurate forecasting of the ultimate reach of potentially “viral” ideas or behaviors. This paper proposes a new approach to this predictive analytics problem, in which analysis of meso-scale network dynamics is leveraged to generate useful predictions for complex social phenomena. We begin by deriving a stochastic hybrid dynamical systems (S-HDS) model for diffusion processes taking place over social networks with realistic topologies; this modeling approach is inspired by recent work in biology demonstrating that S-HDS offer a useful mathematical formalism with which to represent complex, multi-scale biological network dynamics. We then perform formal stochastic reachability analysis with this S-HDS model and conclude that the outcomes of social diffusion processes may depend crucially upon the way the early dynamics of the process interacts with the underlying network’s community structure and core-periphery structure. This theoretical finding provides the foundations for developing a machine learning algorithm that enables accurate early warning analysis for social diffusion events. The utility of the warning algorithm, and the power of network-based predictive metrics, are demonstrated through an empirical investigation of the propagation of political “memes” over social media networks. Additionally, we illustrate the potential of the approach for security informatics applications through case studies involving early warning analysis of large-scale protests events and politically-motivated cyber attacks."
]
}
|
1202.0332
|
2097343308
|
News articles are extremely time sensitive by nature. There is also intense competition among news items to propagate as widely as possible. Hence, the task of predicting the popularity of news items on the social web is both interesting and challenging. Prior research has dealt with predicting eventual online popularity based on early popularity. It is most desirable, however, to predict the popularity of items prior to their release, fostering the possibility of appropriate decision making to modify an article and the manner of its publication. In this paper, we construct a multi-dimensional feature space derived from properties of an article and evaluate the efficacy of these features to serve as predictors of online popularity. We examine both regression and classification algorithms and demonstrate that despite randomness in human behavior, it is possible to predict ranges of popularity on Twitter with an overall 84% accuracy. Our study also serves to illustrate the differences between traditionally prominent sources and those immensely popular on the social web.
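The abstract above describes classifying articles into popularity ranges from pre-publication features. The following is a minimal sketch of that kind of pipeline using scikit-learn; the feature names, the three-range labels, and the classifier choice are assumptions for illustration, not the paper's actual feature space or model, and the synthetic data carries no signal.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical pre-publication features: source prominence, news category,
# headline subjectivity, and named-entity count.
X = np.column_stack([
    rng.normal(size=n),
    rng.integers(0, 8, size=n),
    rng.random(size=n),
    rng.integers(0, 15, size=n),
])

# Hypothetical popularity ranges (0 = low, 1 = medium, 2 = high retweet volume).
y = rng.integers(0, 3, size=n)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level here; real features needed
```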
|
Stochastic models of information diffusion, as well as deterministic epidemic models, have been studied extensively in an array of papers, reaffirming theories developed in sociology such as the diffusion of innovations @cite_5. Among these are models of viral marketing @cite_4, models of attention on the web @cite_20, cascading behavior in the propagation of information @cite_14 @cite_9, and models that describe heavy tails in human dynamics @cite_26. While some studies incorporate content factors into their models @cite_12, they capture these only in general terms and do not include detailed consideration of content features.
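As a concrete example of the class of stochastic diffusion models mentioned above, here is a minimal independent-cascade simulation on a random graph (using networkx, with an assumed uniform transmission probability); it illustrates the modeling style, not any specific cited model.

```python
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.05):
    """Each newly activated node gets one chance to activate each neighbor
    with probability p; returns the final set of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

random.seed(42)
G = nx.erdos_renyi_graph(2000, 0.003)
print(len(independent_cascade(G, seeds=[0])))  # size of one simulated cascade
```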
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_9",
"@cite_5",
"@cite_12",
"@cite_20"
],
"mid": [
"2113889316",
"1994473607",
"2073689275",
"1489677531",
"",
"",
"2058465497"
],
"abstract": [
"We study the dynamics of information propagation in environments of low-overhead personal publishing, using a large collection of WebLogs over time as our example domain. We characterize and model this collection at two levels. First, we present a macroscopic characterization of topic propagation through our corpus, formalizing the notion of long-running \"chatter\" topics consisting recursively of \"spike\" topics generated by outside world events, or more rarely, by resonances within the community. Second, we present a microscopic characterization of propagation from individual to individual, drawing on the theory of infectious diseases to model the flow. We propose, validate, and employ an algorithm to induce the underlying propagation network from a sequence of posts, and report on the results.",
"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective.",
"terized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, most tasks being rapidly executed, while a few experiencing very long waiting times. In contrast, priority blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can hadle at any time, predicting that the waiting time of the individual tasks follow a heavy tailed distribution Pw w with =3 2. The second model imposes limitations on the queue length, resulting in a heavy tailed waiting time distribution characterized by = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display = 1, the surface mail based communication belongs to the =3 2 universality class. Finally, we discuss possible extension of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.",
"How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.",
"",
"",
"The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among 1 million users of an interactive web site, digg.com, devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades."
]
}
|
1202.0332
|
2097343308
|
News articles are extremely time sensitive by nature. There is also intense competition among news items to propagate as widely as possible. Hence, the task of predicting the popularity of news items on the social web is both interesting and challenging. Prior research has dealt with predicting eventual online popularity based on early popularity. It is most desirable, however, to predict the popularity of items prior to their release, fostering the possibility of appropriate decision making to modify an article and the manner of its publication. In this paper, we construct a multi-dimensional feature space derived from properties of an article and evaluate the efficacy of these features to serve as predictors of online popularity. We examine both regression and classification algorithms and demonstrate that despite randomness in human behavior, it is possible to predict ranges of popularity on Twitter with an overall 84% accuracy. Our study also serves to illustrate the differences between traditionally prominent sources and those immensely popular on the social web.
|
On the subject of news dissemination, @cite_15 and @cite_16 study temporal aspects of the spread of news memes online, with @cite_7 empirically studying the spread of news on the social networks of Digg and Twitter, and @cite_8 studying Facebook news feeds.
|
{
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_7",
"@cite_8"
],
"mid": [
"2127492100",
"2112056172",
"1752870744",
"2124532437"
],
"abstract": [
"Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.",
"Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention.",
"Social networks have emerged as a critical factor in information dissemination, search, marketing, expertise and influence discovery, and potentially an important tool for mobilizing people. Social media has made social networks ubiquitous, and also given researchers access to massive quantities of data for empirical analysis. These data sets offer a rich source of evidence for studying dynamics of individual and group behavior, the structure of networks and global patterns of the flow of information on them. However, in most previous studies, the structure of the underlying networks was not directly visible but had to be inferred from the flow of information from one individual to another. As a result, we do not yet understand dynamics of information spread on networks or how the structure of the network affects it. We address this gap by analyzing data from two popular social news sites. Specifically, we extract social networks of active users on Digg and Twitter, and track how interest in news stories spreads among them. We show that social networks play a crucial role in the spread of information on these sites, and that network structure affects dynamics of information flow.",
"Whether they are modeling bookmarking behavior in Flickr or cascades of failure in large networks, models of diffusion often start with the assumption that a few nodes start long chain reactions, resulting in large-scale cascades. While reasonable under some conditions, this assumption may not hold for social media networks, where user engagement is high and information may enter a system from multiple disconnected sources. Using a dataset of 262,985 Facebook Pages and their associated fans, this paper provides an empirical investigation of diffusion through a large social media network. Although Facebook diffusion chains are often extremely long (chains of up to 82 levels have been observed), they are not usually the result of a single chain-reaction event. Rather, these diffusion chains are typically started by a substantial number of users. Large clusters emerge when hundreds or even thousands of short diffusion chains merge together. This paper presents an analysis of these diffusion chains using zero-inflated negative binomial regressions. We show that after controlling for distribution effects, there is no meaningful evidence that a start node’s maximum diffusion chain length can be predicted with the user's demographics or Facebook usage characteristics (including the user's number of Facebook friends). This may provide insight into future research on public opinion formation."
]
}
|
1202.0332
|
2097343308
|
News articles are extremely time sensitive by nature. There is also intense competition among news items to propagate as widely as possible. Hence, the task of predicting the popularity of news items on the social web is both interesting and challenging. Prior research has dealt with predicting eventual online popularity based on early popularity. It is most desirable, however, to predict the popularity of items prior to their release, fostering the possibility of appropriate decision making to modify an article and the manner of its publication. In this paper, we construct a multi-dimensional feature space derived from properties of an article and evaluate the efficacy of these features to serve as predictors of online popularity. We examine both regression and classification algorithms and demonstrate that despite randomness in human behavior, it is possible to predict ranges of popularity on Twitter with an overall 84% accuracy. Our study also serves to illustrate the differences between traditionally prominent sources and those immensely popular on the social web.
|
A growing number of recent studies predict the spread of information based on early measurements (using early votes on Digg, likes on Facebook, click-throughs, and comments on forums and sites). @cite_11 found that the eventual popularity of items posted on YouTube and Digg has a strong correlation with their early popularity; @cite_19 and @cite_22 predict the popularity of a discussion thread using features based on early measurements of user comments. @cite_6 propose the notion of a virtual temperature of weblogs using early measurements. @cite_1 predict Digg vote counts using stochastic models that combine design elements of the site (which in turn shape collective user behavior) with information from early votes.
|
{
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_11"
],
"mid": [
"2045989637",
"2096135266",
"",
"2000139610",
"2070366435"
],
"abstract": [
"Understanding user participation is fundamental in anticipating the popularity of online content. In this paper, we explore how the number of users' comments during a short observation period after publication can be used to predict the expected popularity of articles published by a countrywide online newspaper. We evaluate a simple linear prediction model on a real dataset of hundreds of thousands of articles and several millions of comments collected over a period of four years. Analyzing the accuracy of our proposed model for different values of its basic parameters we provide valuable insights on the potentials and limitations for predicting content popularity based on early user activity.",
"Popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both companies that host social media sites and their users. Accurate and timely prediction would enable the companies to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the ever-growing amount of content. Predicting popularity of content in social media, however, is challenging due to the complex interactions among content quality, how the social media site chooses to highlight content, and influence among users. While these factors make it difficult to predict popularity a priori, we show that stochastic models of user behavior on these sites allows predicting popularity based on early user reactions to new content. By incorporating aspects of the web site design, such models improve on predictions based on simply extrapolating from the early votes. We validate this claim on the social news portal Digg using a previously-developed model of social voting based on the Digg user interface.",
"",
"In this paper, we propose a methodology to predict the popularity of online contents. More precisely, rather than trying to infer the popularity of a content itself, we infer the likelihood that a content will be popular. Our approach is rooted in survival analysis where predicting the precise lifetime of an individual is very hard and almost impossible but predicting the likelihood of one's survival longer than a threshold or another individual is possible. We position ourselves in the standpoint of an external observer who has to infer the popularity of a content only using publicly observable metrics, such as the lifetime of a thread, the number of comments, and the number of views. Our goal is to infer these observable metrics, using a set of explanatory factors, such as the number of comments and the number of links in the first hours after the content publication, which are observable by the external observer. We use a Cox proportional hazard regression model that divides the distribution function of the observable popularity metric into two components: a) one that can be explained by the given set of explanatory factors (called risk factors) and b) a baseline distribution function that integrates all the factors not taken into account. To validate our proposed approach, we use data sets from two different online discussion forums: dpreview.com, one of the largest online discussion groups providing news and discussion forums about all kinds of digital cameras, and myspace.com, one of the representative online social networking services. On these two data sets we model two different popularity metrics, the lifetime of threads and the number of comments, and show that our approach can predict the lifetime of threads from Dpreview (Myspace) by observing a thread during the first 5 6 days (24 hours, respectively) and the number of comments of Dpreview threads by observing a thread during first 2 3 days.",
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors."
]
}
|
1202.0332
|
2097343308
|
News articles are extremely time-sensitive by nature. There is also intense competition among news items to propagate as widely as possible. Hence, the task of predicting the popularity of news items on the social web is both interesting and challenging. Prior research has dealt with predicting eventual online popularity based on early popularity. It is most desirable, however, to predict the popularity of items prior to their release, fostering the possibility of appropriate decision making to modify an article and the manner of its publication. In this paper, we construct a multi-dimensional feature space derived from properties of an article and evaluate the efficacy of these features to serve as predictors of online popularity. We examine both regression and classification algorithms and demonstrate that despite randomness in human behavior, it is possible to predict ranges of popularity on Twitter with an overall 84% accuracy. Our study also serves to illustrate the differences between traditionally prominent sources and those immensely popular on the social web.
|
Finally, recent work on variation in the spread of content has been carried out by @cite_2 with a focus on categories of Twitter hashtags (similar to keywords). This work is aligned with ours in its attention to the importance of content in variations in popularity; however, they consider categories only, with news being one of the hashtag categories. @cite_0 conduct similar work on social marketing messages.
|
{
"cite_N": [
"@cite_0",
"@cite_2"
],
"mid": [
"2132910341",
"2145446394"
],
"abstract": [
"Popularity of social marketing messages indicates the effectiveness of the corresponding marketing strategies. This research aims to discover the characteristics of social marketing messages that contribute to different level of popularity. Using messages posted by a sample of restaurants on Facebook as a case study, we measured the message popularity by the number of \"likes\" voted by fans, and examined the relationship between the message popularity and two properties of the messages: (1) content, and (2) media type. Combining a number of text mining and statistics methods, we have discovered some interesting patterns correlated to \"more popular\" and \"less popular\" social marketing messages. This work lays foundation for building computational models to predict the popularity of social marketing messages in the future.",
"There is a widespread intuitive sense that different kinds of information spread differently on-line, but it has been difficult to evaluate this question quantitatively since it requires a setting where many different kinds of information spread in a shared environment. Here we study this issue on Twitter, analyzing the ways in which tokens known as hashtags spread on a network defined by the interactions among Twitter users. We find significant variation in the ways that widely-used hashtags on different topics spread. Our results show that this variation is not attributable simply to differences in \"stickiness,\" the probability of adoption based on one or more exposures, but also to a quantity that could be viewed as a kind of \"persistence\" - the relative extent to which repeated exposures to a hashtag continue to have significant marginal effects. We find that hashtags on politically controversial topics are particularly persistent, with repeated exposures continuing to have unusually large marginal effects on adoption; this provides, to our knowledge, the first large-scale validation of the \"complex contagion\" principle from sociology, which posits that repeated exposures to an idea are particularly crucial when the idea is in some way controversial or contentious. Among other findings, we discover that hashtags representing the natural analogues of Twitter idioms and neologisms are particularly non-persistent, with the effect of multiple exposures decaying rapidly relative to the first exposure. We also study the subgraph structure of the initial adopters for different widely-adopted hashtags, again finding structural differences across topics. We develop simulation-based and generative models to analyze how the adoption dynamics interact with the network structure of the early adopters on which a hashtag spreads."
]
}
|
1201.5346
|
2953154723
|
We develop a conceptually clear, intuitive, and feasible decision procedure for testing satisfiability in the full multi-agent epistemic logic CMAEL(CD) with operators for common and distributed knowledge for all coalitions of agents mentioned in the language. To that end, we introduce Hintikka structures for CMAEL(CD) and prove that satisfiability in such structures is equivalent to satisfiability in standard models. Using that result, we design an incremental tableau-building procedure that eventually constructs a satisfying Hintikka structure for every satisfiable input set of formulae of CMAEL(CD) and closes for every unsatisfiable input set of formulae.
|
Several tableau-based methods for satisfiability checking for modal logics with fixpoint-definable operators have been developed and published over the past 30 years, all going back to the tableau-based decision methods developed for the Propositional Dynamic Logic in @cite_21 , for the branching-time temporal logics in @cite_43 , in [Section 5] of EmHal85, and in @cite_18 . In terms of handling eventualities arising from the fixed-point operators, our tableau method follows more closely the incremental tableaux for the linear time temporal logic in @cite_27 and those in [Section 7] of EmHal85.
|
{
"cite_N": [
"@cite_43",
"@cite_18",
"@cite_21",
"@cite_27"
],
"mid": [
"2000138546",
"1612453857",
"1997716585",
"1539868891"
],
"abstract": [
"A temporal logic is defined which contains both linear and branching operators. The underlying model is the tree of all possible computations. The following metatheoretical results are proven: 1) an exponential decision procedure for satisfiability; 2) a finite model property; 3) the completeness of an axiomatization.",
"Publisher Summary This chapter discusses temporal and modal logic. The chapter describes a multiaxis classification of systems of temporal logic. The chapter describes the framework of linear temporal logic. In both its propositional and first-order forms, linear temporal logic has been widely employed in the specification and verification of programs. The chapter describes the competing framework of branching temporal logic, which has seen wide use. It also explains how temporal logic structures can be used to model concurrent programs using non-determinism and fairness. The chapter also discusses other modal and temporal logics in computer science. The chapter describes the formal syntax and semantics of Propositional Linear Temporal Logic (PLTL). The chapter also describes the formal syntax and semantics for two representative systems of propositional branching-time temporal logics.",
"Abstract We give an algorithm for “before-after” reasoning about action. The algorithm decides satisfiability and validity of formulas of propositional dynamic logic, a recently developed logic of change of state that subsumes the zero-order component of most other action-oriented logics. The algorithm requires time at most proportional to an exponentially growing function of the length (number of occurrences of variables and connectives) of the input. Fischer and Ladner have shown that that every algorithm for this problem must take exponential time, making this algorithm optimal to within a polynomial. Application areas include program verification, program synthesis, and discourse analysis. The algorithm is based on the method of semantic tableaux, appropriately generalized to dynamic logic. A formal treatment of the generalization, called Hintikka structures , is developed.",
""
]
}
|
1201.5346
|
2953154723
|
We develop a conceptually clear, intuitive, and feasible decision procedure for testing satisfiability in the full multi-agent epistemic logic CMAEL(CD) with operators for common and distributed knowledge for all coalitions of agents mentioned in the language. To that end, we introduce Hintikka structures for CMAEL(CD) and prove that satisfiability in such structures is equivalent to satisfiability in standard models. Using that result, we design an incremental tableau-building procedure that eventually constructs a satisfying Hintikka structure for every satisfiable input set of formulae of CMAEL(CD) and closes for every unsatisfiable input set of formulae.
|
We note that there is a natural tradeoff between conceptual clarity and simplicity of (tableau-based) decision procedures on the one hand, and their technical sophistication and optimality on the other. We emphasize that the main objective of developing the tableau procedure presented here is conceptual clarity, intuitiveness, and ease of implementation, rather than practical optimality. While being optimal in terms of worst-case time complexity and incorporating some new and non-trivial optimizing features (such as restricted applications of cut rules), this procedure is amenable to various improvements and further optimizations. The most important known optimizations of this kind are techniques for the elimination of bad states and tableau methods developed for related logics in @cite_23 and @cite_39 , versions of tableaux as in @cite_39 , in @cite_30 for PDL with converse operators, and in @cite_41 for the description logic SHI, as well as sequent calculi, in @cite_42 and in @cite_26 for LTL and CTL. We briefly discuss possible modifications of our procedure implementing such optimizing techniques in Section .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_41",
"@cite_42",
"@cite_39",
"@cite_23"
],
"mid": [
"2949377023",
"2047456744",
"2515200",
"",
"",
"1527982431"
],
"abstract": [
"We give an optimal (EXPTIME), sound and complete tableau-based algorithm for deciding satisfiability for propositional dynamic logic with converse (CPDL) which does not require the use of analytic cut. Our main contribution is a sound methodto combine our previous optimal method for tracking least fix-points in PDL with our previous optimal method for handling converse in the description logic ALCI. The extension is non-trivial as the two methods cannot be combined naively. We give sufficient details to enable an implementation by others. Our OCaml implementation seems to be the first theorem prover for CPDL.",
"Abstract Currently known sequent systems for temporal logics such as linear time temporal logic and computation tree logic either rely on a cut rule, an invariant rule, or an infinitary rule. The first and second violate the subformula property and the third has infinitely many premises. We present finitary cut-free invariant-free weakening-free and contraction-free sequent systems for both logics mentioned. In the case of linear time all rules are invertible. The systems are based on annotating fixpoint formulas with a history, an approach which has also been used in game-theoretic characterisations of these logics.",
"We give the first cut-free ExpTime (optimal) tableau decision procedure for checking satisfiability of a knowledge base in the description logic SHI, which extends the description logic ALC with transitive roles, inverse roles and role hierarchies.",
"",
"",
"The paper presents a one-pass tableau calculus PLTLT for the propositional linear time logic PLTL. The calculus is correct and complete and unlike in previous decision methods, there is no second phase that checks for the fulfillment of the so-called eventuality formulae. This second phase is performed locally and is incorporated into the rules of the calculus. Derivations in PLTLT are cyclic trees rather than cyclic graphs. When used as a basis for a decision procedure, it has the advantage that only one branch needs to be kept in memory at any one time. It may thus be a suitable starting point for the development of a parallel decision method for PLTL."
]
}
|
1201.5426
|
2949860269
|
This paper draws on diverse areas of computer science to develop a unified view of computation: (1) Optimization in operations research, where a numerical objective function is maximized under constraints, is generalized from the numerical total order to a non-numerical partial order that can be interpreted in terms of information. (2) Relations are generalized so that there are relations of which the constituent tuples have numerical indexes, whereas in other relations these indexes are variables. The distinction is essential in our definition of constraint satisfaction problems. (3) Constraint satisfaction problems are formulated in terms of semantics of conjunctions of atomic formulas of predicate logic. (4) Approximation structures, which are available for several important domains, are applied to solutions of constraint satisfaction problems. As an application, we treat constraint satisfaction problems over the reals. These cover a large part of numerical analysis, most significantly nonlinear equations and inequalities. The chaotic algorithm analyzed in the paper combines the efficiency of floating-point computation with the correctness guarantees arising from our logico-mathematical model of constraint-satisfaction problems.
|
Following Mackworth's AC-3 algorithm @cite_4 , many other papers have been concerned with converging fair iterations @cite_3 @cite_1 @cite_0 @cite_6 @cite_8 .
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0"
],
"mid": [
"2135432705",
"2463035841",
"1990426442",
"2067273780",
"2054700680",
"1525837752"
],
"abstract": [
"Artificial intelligence tasks which can be formulated as constraint satisfaction problems, with which this paper is for the most part concerned, are usually by solved backtracking the examining the thrashing behavior that nearly always accompanies backtracking, identifying three of its causes and proposing remedies for them we are led to a class of algorithms whoch can profitably be used to eliminate local (node, arc and path) inconsistencies before any attempt is made to construct a complete solution. A more general paradigm for attacking these tasks is the altenation of constraint manipulation and case analysis producing an OR problem graph which may be searched in any of the usual ways. Many authors, particularly Montanari and Waltz, have contributed to the development of these ideas; a secondary aim of this paper is to trace that history. The primary aim is to provide an accessible, unified framework, within which to present the algorithms including a new path consistency algorithm, to discuss their relationships and the may applications, both realized and potential of network consistency algorithms.",
"",
"Abstract We present in this paper a unified processing for real, integer, and Boolean constraints based on a general narrowing algorithm which applies to any n-ary relation on R. The basic idea is to define, for every such relation ρ, a narrowing function ρ based on the approximation of ρ by a Cartesian product of intervals whose bounds are floating-point numbers. We then focus on nonconvex relations and establish several properties. The more important of these properties is applied to justify the computation of usual relations defined in terms of intersections of simpler relations. We extend the scope of the narrowing algorithm used in the language BNR-Prolog to integer and disequality constraints, to Boolean constraints, and to relations mixing numerical and Boolean values. As a result, we propose a new Constraint Logic Programming language called CLP(BNR), where BNR stands for Booleans, Naturals, and Reals. In this language, constraints are expressed in a unique structure, allowing the mixing of real numbers, integers, and Booleans. We end with the presentation of several examples showing the advantages of such an approach from the point of view of the expressiveness, and give some preliminary computational results from a prototype.",
"This paper describes the design, implementation, and applications of the constraint logic language cc(FD). cc(FD) is a declarative nondeterministic constraint logic language over finite domains based on the cc framework [33], an extension of the Constraint Logic Programming (CLP) scheme [21]. Its constraint solver includes (nonlinear) arithmetic constraints over natural numbers which are approximated using domain and interval consistency. The main novelty of cc (FD) is the inclusion of a number of general-purpose combinators, in particular cardinality, constructive disjunction, and blocking implication, in conjunction with new constraint operations such as constraint entailment and generalization. These combinators significantly improve the operational expressiveness, extensibility, and flexibility of CLP languages and allow issues such as the definition of nonprimitive constraints and disjunctions to be tackled at the language level. The implementation of cc (FD) (about 40,000 lines of C) includes a WAM-based engine [44], optimal are-consistency algorithms based on AC-5 [40], and incremental implementation of the combinators. Results on numerous problems, including scheduling, resource allocation, sequencing, packing, and hamiltonian paths are reported and indicate that cc(FD) comes close to procedural languages on a number of combinatorial problems. In addition, a small cc(FD) program was able to find the optimal solution and prove optimality to a famous 10 10 disjunctive scheduling problem [29], which was left open for more than 20 years and finally solved in 1986. (C) 1998 Elsevier Science Inc. All rights reserved.",
"We show that several constraint propagation algorithms (also called (local) consistency, consistency enforcing, Waltz, filtering or narrowing algorithms) are instances of algorithms that deal with chaotic iteration. To this end we propose a simple abstract framework that allows us to classify and compare these algorithms and to establish in a uniform way their basic properties.",
"This paper is an introduction to Newton, a constraint programming language over nonlinear real constraints. Newton originates from an effort to reconcile the declarative nature of constraint logic programming languages over intervals with advanced interval techniques developed in numerical analysis, such as the interval Newton method. Its key conceptual idea is to introduce the notion of box-consistency, which approximates arc-consistency, a notion well-known in artificial intelligence. Box-consistency achieves an effective pruning at a reasonable computation cost and generalizes some traditional interval operators. Newton has been applied to numerous applications in science and engineering, including nonlinear equation-solving, unconstrained optimization, and constrained optimization. It is competitive with continuation methods on their equation-solving benchmarks and outperforms the interval-based methods we are aware of on optimization problems."
]
}
|
1201.4754
|
2951791068
|
In this paper, we examine in which each player's preferences over partitions of players depend only on the members of his coalition. We present three main results in which restrictions on the preferences of the players guarantee the existence of stable partitions for various notions of stability. The preference restrictions pertain to and which model optimistic and pessimistic behavior of players respectively. The existence results apply to natural subclasses of and . It is also shown that our existence results cannot be strengthened to the case of stronger known stability concepts.
|
A natural preference restriction called top responsiveness has been proposed, based on the idea that players value other players according to how they could complement them in research teams. It was shown that there exists an algorithm, the Top Covering Algorithm, which finds a core stable partition for top responsive hedonic games. The Top Covering Algorithm can be seen as a generalization of @cite_0 . Later work simplified the Top Covering Algorithm and proved that top responsiveness implies non-emptiness of the strict core and that, if mutuality is additionally satisfied, a Nash stable partition exists.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2152907289"
],
"abstract": [
"Abstract An economic model of trading in commodities that are inherently indivisible, like houses, is investigated from a game-theoretic point of view. The concepts of balanced game and core are developed, and a general theorem of Scarf's is applied to prove that the market in question has a nonempty core, that is, at least one outcome that no subset of traders can improve upon. A number of examples are discussed, and the final section reviews a series of other models involving indivisible commodities, with references to the literature."
]
}
|
1201.4145
|
2950661867
|
Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these technologies on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed.
|
Online networks are focused on sharing information, and as such, have been studied extensively in the context of information diffusion. Diffusion and influence have been modeled in blogs @cite_37 @cite_23 @cite_32 , email @cite_26 , and sites such as Twitter, Digg, and Flickr @cite_8 @cite_2 @cite_5 . One particularly salient characteristic of diffusion behavior is the correlation between the number of friends engaging in a behavior and the probability of adopting the behavior. This relationship has been observed in many online contexts, from the joining of LiveJournal groups @cite_39 , to the bookmarking of photos @cite_18 , and the adoption of user-created content @cite_10 . However, as Anagnostopoulos, et al @cite_31 point out, individuals may be more likely to exhibit the same behavior as their friends because of homophily rather than as a result of peer influence. Statistical techniques such as permutation tests and matched sampling @cite_35 help control for confounds, but ultimately cannot resolve this fundamental problem @cite_1 .
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_26",
"@cite_18",
"@cite_8",
"@cite_1",
"@cite_32",
"@cite_39",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_10"
],
"mid": [
"",
"2107666336",
"2128893310",
"",
"",
"2149084727",
"",
"2432978112",
"",
"",
"1752870744",
"",
""
],
"abstract": [
"",
"Beyond serving as online diaries, weblogs have evolved into a complex social structure, one which is in many ways ideal for the study of the propagation of information. As weblog authors discover and republish information, we are able to use the existing link structure of blogspace to track its flow. Where the path by which it spreads is ambiguous, we utilize a novel inference scheme that takes advantage of data describing historical, repeating patterns of \"infection.\" Our paper describes this technique as well as a visualization system that allows for the graphical tracking of information flow.",
"Although information, news, and opinions continuously circulate in the worldwide social network, the actual mechanics of how any single piece of information spreads on a global scale have been a mystery. Here, we trace such information-spreading processes at a person-by-person level using methods to reconstruct the propagation of massively circulated Internet chain letters. We find that rather than fanning out widely, reaching many people in very few steps according to “small-world” principles, the progress of these chain letters proceeds in a narrow but very deep tree-like pattern, continuing for several hundred steps. This suggests a new and more complex picture for the spread of information through a social network. We describe a probabilistic model based on network clustering and asynchronous response times that produces trees with this characteristic structure on social-network data.",
"",
"",
"The authors consider processes on social networks that can potentially involve three factors: homophily, or the formation of social ties due to matching individual traits; social contagion, also known as social influence; and the causal effect of an individual’s covariates on his or her behavior or other measurable responses. The authors show that generically, all of these are confounded with each other. Distinguishing them from one another requires strong assumptions on the parametrization of the social process or on the adequacy of the covariates used (or both). In particular the authors demonstrate, with simple examples, that asymmetries in regression coefficients cannot identify causal effects and that very simple models of imitation (a form of social contagion) can produce substantial correlations between an individual’s enduring traits and his or her choices, even when there is no intrinsic affinity between them. The authors also suggest some possible constructive responses to these results.",
"",
"The processes by which communities come together, attract new members, and develop over time is a central research issue in the social sciences - political movements, professional organizations, and religious denominations all provide fundamental examples of such communities. In the digital domain, on-line groups are becoming increasingly prominent due to the growth of community and social networking sites such as MySpace and LiveJournal. However, the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities has left most basic questions about the evolution of such groups largely unresolved: what are the structural features that influence whether individuals will join communities, which communities will grow rapidly, and how do the overlaps among pairs of communities change over time.Here we address these questions using two large sources of data: friendship links and community membership on LiveJournal, and co-authorship and conference publications in DBLP. Both of these datasets provide explicit user-defined communities, where conferences serve as proxies for communities in DBLP. We study how the evolution of these communities relates to properties such as the structure of the underlying social networks. We find that the propensity of individuals to join communities, and of communities to grow rapidly, depends in subtle ways on the underlying network structure. For example, the tendency of an individual to join a community is influenced not just by the number of friends he or she has within the community, but also crucially by how those friends are connected to one another. We use decision-tree techniques to identify the most significant structural determinants of these properties. We also develop a novel methodology for measuring movement of individuals between communities, and show how such movements are closely aligned with changes in the topics of interest within the communities.",
"",
"",
"Social networks have emerged as a critical factor in information dissemination, search, marketing, expertise and influence discovery, and potentially an important tool for mobilizing people. Social media has made social networks ubiquitous, and also given researchers access to massive quantities of data for empirical analysis. These data sets offer a rich source of evidence for studying dynamics of individual and group behavior, the structure of networks and global patterns of the flow of information on them. However, in most previous studies, the structure of the underlying networks was not directly visible but had to be inferred from the flow of information from one individual to another. As a result, we do not yet understand dynamics of information spread on networks or how the structure of the network affects it. We address this gap by analyzing data from two popular social news sites. Specifically, we extract social networks of active users on Digg and Twitter, and track how interest in news stories spreads among them. We show that social networks play a crucial role in the spread of information on these sites, and that network structure affects dynamics of information flow.",
"",
""
]
}
|
1201.4138
|
2949511554
|
We study random lozenge tilings of a certain shape in the plane called the Novak half-hexagon, and compute the correlation functions for this process. This model was introduced by Nordenstam and Young (2011) and has many intriguing similarities with a more well-studied model, domino tilings of the Aztec diamond. The most difficult step in the present paper is to compute the inverse of the matrix whose (i,j) entry is the binomial coefficient C(A, B_j - i) for indeterminate variables A and B_1, ..., B_n.
|
Metcalfe @cite_1 has developed an alternative approach to problems of this type, based on a theory of the asymptotics of a sort of interlacing particle process. The theory currently covers a slightly different setting, in which the positions of the particles are continuous, but Metcalfe is in the process of extending his methods to the discrete setting.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1990552616"
],
"abstract": [
"A standard Gelfand–Tsetlin pattern of depth n is a configuration of particles in ( 1, ,n R ) . For each ( r 1, , n , r R ) is referred to as the rth level of the pattern. A standard Gelfand–Tsetlin pattern has exactly r particles on each level r, and particles on adjacent levels satisfy an interlacing constraint. Probability distributions on the set of Gelfand–Tsetlin patterns of depth n arise naturally as distributions of eigenvalue minor processes of random Hermitian matrices of size n. We consider such probability spaces when the distribution of the matrix is unitarily invariant, prove a determinantal structure for a broad subclass, and calculate the correlation kernel. In particular we consider the case where the eigenvalues of the random matrix are fixed. This corresponds to choosing uniformly from the set of Gelfand–Tsetlin patterns whose nth level is fixed at the eigenvalues of the matrix. Fixing ( q_n 1, ,n ) , and letting n → ∞ under the assumption that ( q_n n (0, 1) ) and the empirical distribution of the particles on the nth level converges weakly, the asymptotic behaviour of particles on level q n is relevant to free probability theory. Saddle point analysis is used to identify the set in which these particles behave asymptotically like a determinantal random point field with the Sine kernel."
]
}
|
1201.4138
|
2949511554
|
We study random lozenge tilings of a certain shape in the plane called the Novak half-hexagon, and compute the correlation functions for this process. This model was introduced by Nordenstam and Young (2011) and has many intriguing similarities with a more well-studied model, domino tilings of the Aztec diamond. The most difficult step in the present paper is to compute the inverse of the matrix whose (i,j) entry is the binomial coefficient C(A, B_j - i) for indeterminate variables A and B_1, ..., B_n.
|
In @cite_5 , there appears a slightly less general kernel, written in terms of the Hahn polynomials; this is used to prove some theorems on the fluctuations of the frozen boundary of lozenge tilings of a hexagon.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1587514819"
],
"abstract": [
"Nous montrons en utilisant des chemins qui ne s'intersectent pas qu'un pavage rhombique d'un hexagone, ou une partition planaire en boites, est decrit par un point processus ponctuel determinentiel, donne par un noyau de Hahn etendu."
]
}
|
1201.4777
|
2952069805
|
Multilabel classification is a relatively recent subfield of machine learning. Unlike the classical approach, where instances are labeled with only one category, in multilabel classification an arbitrary number of categories is chosen to label an instance. Due to the problem complexity (the solution is one among an exponential number of alternatives), a common solution (the binary method) is frequently used, learning a binary classifier for every category and combining them all afterwards. The assumption taken in this solution is not realistic, and in this work we give examples where the decisions for all the labels are not taken independently; thus, a supervised approach should learn the existing relationships among categories to make a better classification. Therefore, we show here a generic methodology that can improve the results obtained by a set of independent probabilistic binary classifiers, by using a combination procedure with a classifier trained on the co-occurrences of the labels. We present exhaustive experimentation on three different standard corpora of labeled documents (Reuters-21578, Ohsumed-23 and RCV1), which shows noticeable improvements in all of them when our methodology is used with three probabilistic base classifiers.
|
A generative model is also presented in @cite_16 . Here, the main assumption is that words in documents belonging to several categories can be characterized as a mixture of characteristic words related to each of the categories, an assumption confirmed by experimentation. Both first-order (PMM1) and second-order (PMM2) models are built, and learning algorithms (using MAP estimation) are proposed for both alternatives. Experiments are carried out with webpages gathered from the yahoo.com server. The reported results are good and improve on other methods such as SVM, naive Bayes and @math -NN.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2129414564"
],
"abstract": [
"We propose probabilistic generative models, called parametric mixture models (PMMs), for multiclass, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages."
]
}
|
1201.4292
|
2949598863
|
Major wireless operators are nowadays facing network capacity issues in striving to meet the growing demands of mobile users. At the same time, 3G-enabled devices increasingly benefit from ad hoc radio connectivity (e.g., Wi-Fi). In this context of hybrid connectivity, we propose Push-and-Track, a content dissemination framework that harnesses ad hoc communication opportunities to minimize the load on the wireless infrastructure while guaranteeing tight delivery delays. It achieves this through a control loop that collects user-sent acknowledgements to determine if new copies need to be reinjected into the network through the 3G interface. Push-and-Track is flexible and can be applied to a variety of scenarios, including periodic message flooding and floating data. For the former, this paper examines multiple strategies to determine how many copies of the content should be injected, when, and to whom; for the latter, it examines the achievable offload ratio depending on the freshness constraints. The short delay-tolerance of common content, such as news or road traffic updates, makes them suitable for such a system. Use cases with a long delay-tolerance, such as software updates, are an even better fit. Based on a realistic large-scale vehicular dataset from the city of Bologna composed of more than 10,000 vehicles, we demonstrate that Push-and-Track consistently meets its delivery objectives while reducing the use of the 3G network by about 90%.
|
Finally, theoretical frameworks for determining the liveness or expected lifetime of floating data have been developed in the context of sensor networks @cite_29 and opportunistic networks @cite_21 . In a vehicular scenario, hybrid infrastructure opportunistic networks have been proposed for tying floating data to a given geographic area @cite_42 @cite_33 . However, unlike Push-and-Track, neither of these includes a feedback loop, and delivery remains probabilistic.
|
{
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_42",
"@cite_33"
],
"mid": [
"2112519645",
"2170833787",
"2168366381",
"2113040055"
],
"abstract": [
"Consider a network of, say, sensors, or P2P nodes, or Bluetooth-enabled cell-phones, where nodes transmit information to each other and where links and nodes can go up or down. Consider also a 'datum', that is, a piece of information, like a report of an emergency condition in a sensor network, a national traditional song, or a mobile phone virus. How often should nodes transmit the datum to each other, so that the datum can survive (or, in the virus case, under what conditions will the virus die out)? Clearly, the link and node fault probabilities are important - what else is needed to ascertain the survivability of the datum? We propose and solve the problem using non-linear dynamical systems and fixed point stability theorems. We provide a closed-form formula that, surprisingly, depends on only one additional parameter, the largest eigenvalue of the connectivity matrix. We illustrate the accuracy of our analysis on realistic and real settings, like mote sensor networks from Intel and MIT, as well as Gnutella and P2P networks.",
"We consider an opportunistic content sharing system designed to store and distribute local spatio-temporal “floating” information in uncoordinated P2P fashion relying solely on the mobile nodes passing through the area of interest, referred to as the anchor zone. Nodes within the anchor zone exchange the information in opportunistic manner, i.e., whenever two nodes come within each others' transmission range. Outside the anchor zone, the nodes are free to delete the information, since it is deemed relevant only for the nodes residing inside the anchor zone. Due to the random nature of the operation, there are no guarantees, e.g., for the information availability. By means of analytical models, we show that such a system, without any supporting infrastructure, can be a viable and surprisingly reliable option for content sharing as long as a certain criterion, referred to as the criticality condition, is met. The important quantity is the average number of encounters a randomly chosen node experiences during its sojourn time in the anchor zone, which again depends on the communication range and the mobility pattern. The theoretical studies are complemented with simulation experiments with various mobility models showing good agreement with the analytical results.",
"Content-based information dissemination has a potential number of applications in vehicular networking, including advertising, traffic and parking notifications and emergency announcements. In this paper we describe a protocol for content based information dissemination in hybrid (i.e., partially structureless) vehicular networks. The protocol allows content to “stick” to areas where vehicles need to receive it. The vehicle's subscriptions indicate the driver's interests about types of content and are used to filter and route information to affected vehicles. The publications, generated by other vehicles or by central servers, are first routed into the area, then continuously propagated for a specified time interval. The protocol takes advantage of both the infrastructure (i.e., wireless base stations), if this exists, and the decentralized vehicle-to-vehicle communication technologies. We evaluate our approach by simulation over a number of realistic vehicular traces based scenarios. Results show that our protocol achieves high message delivery while introducing low overhead, even in scenarios where no infrastructure is available.",
"Supporting future large-scale vehicular networks is expected to require a combination of fixed roadside infrastructure and mobile in-vehicle technologies. The need for an infrastructure, however, considerably decreases the deployment area of VANET applications. In this paper, we propose a self-organizing mechanism to emulate a geo-localized virtual infrastructure (GVI). This latter is emulated by a bounded-size subset of vehicles currently populating the geographic region where the virtual infrastructure is to be deployed. An analytical model is proposed to study this mechanism. More precisely, this model is proposed to study the GVI in the frame of its main use: data dissemination in VANETs. Despite being simple, the proposed model can accurately predict the system performance such as the probability that a vehicle is informed, and the average number of duplicate messages received by a vehicle, and allows a careful investigation of the impact of vehicular traffic properties and system parameters on performance criteria. Analytical and simulation results show that the proposed GVI mechanism can periodically disseminate the data within an intersection area, efficiently utilize the limited bandwidth and ensure high delivery ratio."
]
}
|
1201.3318
|
1517824290
|
This report contains a revision and extension of some results about RBO from [14]. RBO is a simple and efficient broadcast scheduling of n = 2^k uniform frames for battery-powered radio receivers. Each frame contains a key from some arbitrary linearly ordered universe. The broadcast cycle (a sequence of frames sorted by the keys and permuted by k-bit reversal) is transmitted in a round-robin fashion by the broadcaster. At an arbitrary time during the transmission, the receiver may start a simple protocol that reports to him all the frames with the keys that are contained in a specified
|
Broadcast scheduling for radio receivers with low access time (i.e. the delay to the reception of the required record) and low average tuning time (i.e. the energy cost) was considered by Imielinski, Viswanathan, and Badrinath (see e.g. @cite_8 , @cite_2 , @cite_3 ). In @cite_2 , hashing and flexible indexing for finding single records in the broadcast cycle have been proposed and compared. In @cite_3 , a distributed index based on an ordered balanced tree has been proposed. The broadcast sequence consists of two kinds of buckets. Groups of index buckets, containing parts of the index tree, are interleaved with groups of data buckets containing the proper data and a pointer (i.e. a time offset) to the next index bucket. Each group of index buckets consists of a copy of the upper part of the index tree together with the relevant fragment of the lower part of the tree. This mechanism has found useful application even in more complex scenarios of delivering data to mobile users @cite_4 .
|
{
"cite_N": [
"@cite_4",
"@cite_2",
"@cite_3",
"@cite_8"
],
"mid": [
"2131256609",
"1777442634",
"2109464511",
"2106732800"
],
"abstract": [
"Mobile computing has the potential for managing information globally. Data management issues in mobile computing have received some attention in recent times, and the design of adaptive braodcast protocols has been posed as an important probllem. Such protocols are employed by database servers to decide on the content of bbroadcasts dynamically, in response to client mobility and demand patterns. In this paper we design such protocols and also propose efficient retrieval strategies that may be employed by clients to download information from broadcasts. The goal is to design cooperative strategies between server and client to provide access to information in such a way as to minimize energy expenditure by clients. We evaluate the performance of our protocols both analytically and through simulation.",
"Organizing massive amount of information on communication channels is a new challenge to the data management and telecommunication communities. In this paper, we consider wireless data broadcasting as a way of disseminating information to a massive number of battery powered palmtops. We show that different physical requirements of the wireless digital medium make the problem of organizing wireless broadcast data different from data organization on the disk. We demonstrate that providing index or hashing based access to the data transmitted over wireless is very important for extending battery life and can result in very significant improvement in battery utilization. We describe two methods (Hashing and Flexible Indexing) for organizing and accessing broadcast data in such a way that two basic parameters: tuning time, which affects battery life, and access time (waiting time for data) are minimized.",
"Organizing massive amount of data on wireless communication networks in order to provide fast and low power access to users equipped with palmtops, is a new challenge to the data management and telecommunication communities. Solutions must take under consideration the physical restrictions of low network bandwidth and limited battery life of palmtops. This paper proposes algorithms for multiplexing clustering and nonclustering indexes along with data on wireless networks. The power consumption and the latency for obtaining the required data are considered as the two basic performance criteria for all algorithms. First, this paper describes two algorithms namely, (1, m) indexing and Distributed Indexing, for multiplexing data and its clustering index. Second, an algorithm called Nonclustered Indexing is described for allocating static data and its corresponding nonclustered index. Then, the Nonclustered indexing algorithm is generalized to the case of multiple indexes. Finally, the proposed algorithms are analytically demonstrated to lead to significant improvement of battery life while retaining a low latency.",
"We consider wireless broadcasting of data as a way of disseminating information to a massive number of users. Organizing and accessing information on wireless communication channels is different from the problem of organizing and accessing data on the disk. We describe two methods, (1, m ) Indexing and Distributed Indexing , for organizing and accessing broadcast data. We demonstrate that the proposed algorithms lead to significant improvement of battery life, while retaining a low access time."
]
}
|
1201.3318
|
1517824290
|
This report contains a revision and extension of some results about RBO from [14]. RBO is a simple and efficient broadcast scheduling of n = 2^k uniform frames for battery-powered radio receivers. Each frame contains a key from some arbitrary linearly ordered universe. The broadcast cycle (a sequence of frames sorted by the keys and permuted by k-bit reversal) is transmitted in a round-robin fashion by the broadcaster. At an arbitrary time during the transmission, the receiver may start a simple protocol that reports to him all the frames with the keys that are contained in a specified
|
Khanna and Zhou @cite_6 proposed a sophisticated version of the index tree aimed at minimizing mean access and tuning time for a given probability of each data record being requested. The broadcast cycle contains multiple copies of the data items, so that the spacing between copies of each item is related to the optimal spacing minimizing mean access time derived in @cite_18 . However, the keys are not arbitrary: the key of an item is determined by its probability of being requested.
|
{
"cite_N": [
"@cite_18",
"@cite_6"
],
"mid": [
"1691711714",
"2048152400"
],
"abstract": [
"With the increasing popularity of portable wireless computers, mechanisms to efficiently transmit information to wireless clients are of significant interest. The environment under consideration is asymmetric in that the information server has much more bandwidth available, as compared to the clients. In such environments, often it is not possible (or not desirable) for the clients to send explicit requests to the server. It has been proposed that in such systems the server should broadcast the data periodically. One challenge in implementing this solution is to determine the schedule for broadcasting the data, such that the wait encountered by the clients is minimized. A broadcast schedule determines what is broadcast by the server and when. In this paper, we present algorithms for determining broadcast schedules that minimize the wait time. Broadcast scheduling algorithms for environments subject to errors, and systems where different clients may listen to different number of broadcast channels are also considered. Performance evaluation results are presented to demonstrate that our algorithms perform well.",
"We consider the problem of efficient information retrieval in asymmetric communication environments where multiple clients with limited resources retrieve information from a powerful server who periodically broadcasts its information repository over a communication medium. The cost of a retrieving client consists of two components: (a) access time, defined as the total amount of time spent by a client in retrieving the information of interest, and (b) tuning time, defined as the time spent by the client in actively listening to the communication medium, measuring a certain efficiency in resource usage. A probability distribution is associated with the data items in the broadcast, representing the likelihood of a data item's being requested at any point of time. The problem of indexed data broadcast is to schedule the data items interleaved with certain indexing information in the broadcast so as to minimize simultaneously the mean access time and the mean tuning time. Prior work on this problem thus far has focused only on some special cases. In this paper we study the indexed data broadcast problem in its full generality and design a broadcast scheme that achieves a mean access time of at most (1.5+?) times the optimal and a mean tuning time bounded by O(logn)."
]
}
|
1201.3318
|
1517824290
|
This report contains a revision and extension of some results about RBO from [14]. RBO is a simple and efficient broadcast scheduling of n = 2^k uniform frames for battery-powered radio receivers. Each frame contains a key from some arbitrary linearly ordered universe. The broadcast cycle (a sequence of frames sorted by the keys and permuted by k-bit reversal) is transmitted in a round-robin fashion by the broadcaster. At an arbitrary time during the transmission, the receiver may start a simple protocol that reports to him all the frames with the keys that are contained in a specified
|
Indexing of broadcast streams for XML documents @cite_14 or for full-text search @cite_9 has also been considered.
|
{
"cite_N": [
"@cite_9",
"@cite_14"
],
"mid": [
"2116665893",
"2071778852"
],
"abstract": [
"In wireless mobile computing environments, broadcasting is an effective and scalable technique to disseminate information to a massive number of clients, wherein the energy usage and latency are considered major concerns. This paper presents an indexing scheme for the energy- and latency-efficient processing of full-text searches over the wireless broadcast data stream. Although a lot of access methods and index structures have been proposed in the past for full-text searches, all of them are targeted for data in disk storage, not wireless broadcast channels. For full-text searches on a wireless broadcast stream, we firstly introduce a naive, inverted list-style indexing method, where inverted lists are placed in front of the data on the wireless channel. In order to reduce the latency overhead, we propose a two-level indexing method which adds another level of index structure to the basic inverted list-style index. In addition, we propose a replication strategy of the index list and index tree to further improve the latency performance. We analyze the performance of the proposed indexing scheme with respect to the latency and energy usage measures, and show the optimality of index replication. The correctness of the analysis is demonstrated through simulation experiments, and the effectiveness of the proposed scheme is shown by implementing a real wireless information delivery system.",
"The paper considers a wireless information system, wherein various pieces of information represented in XML are broadcast via wireless channels, and mobile clients access the broadcast stream using energy-restricted portable devices. In this paper, we propose a wireless XML streaming method designed to provide energy-efficient access to a wireless stream. We construct two hierarchical structures to represent the XML data and their index information, called the XML data tree and XML index tree, respectively. The wireless XML stream is generated by traversing these two structures with some replications. We design three data index replication strategies (PP, TT, and TP) in the streaming method. We compare the proposed streaming method with a [email protected]?ve method called the (1,X) method both analytically and experimentally. Also, based on our analysis results, we determine the optimal method of replication."
]
}
|
1201.3318
|
1517824290
|
This report contains a revision and extension of some results about RBO from [14]. RBO is a simple and efficient broadcast scheduling of n = 2^k uniform frames for battery powered radio receivers. Each frame contains a key from some arbitrary linearly ordered universe. The broadcast cycle, a sequence of frames sorted by the keys and permuted by k-bit reversal, is transmitted in a round-robin fashion by the broadcaster. At an arbitrary time during the transmission, the receiver may start a simple protocol that reports to him all the frames with the keys that are contained in a specified
|
In practical applications, due to imperfect synchronization between the broadcaster and the receiver, the header should also contain either the time-slot number or its bit reversal -- the index of the frame. To enable the broadcaster to change the contents and the length of the sequence of transmitted keys, the header may also include the parameter @math , such that @math is the length of the broadcast cycle, and some bits used to notify the receiver that the sequence of keys has been changed. For RBO, these issues have been discussed in @cite_7 .
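As a rough illustration of the bit-reversal indexing mentioned above, the following sketch computes the k-bit reversal that maps a frame's position in key order to its slot in the broadcast cycle; the helper name and example values are ours, not taken from @cite_7 .

    def bit_reverse(index, k):
        """Reverse the k low-order bits of index (assumed 0 <= index < 2**k)."""
        result = 0
        for _ in range(k):
            result = (result << 1) | (index & 1)
            index >>= 1
        return result

    # Example: with k = 3 (a broadcast cycle of n = 2^3 = 8 frames), the frame at
    # sorted position 3 (binary 011) is transmitted in slot 6 (binary 110).
    assert bit_reverse(3, 3) == 6

Because reversing k bits twice returns the original value, the same routine also maps a received slot number back to its position in key order.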
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"1802425574"
],
"abstract": [
"We propose a protocol (called RBO) for broadcasting long streams of single-packet messages over radio channel for tiny, battery powered, receivers. The messages are labeled by the keys from some linearly ordered set. The sender repeatedly broadcasts a sequence of many (possibly millions) of messages, while each receiver is interested in reception of a message with a specified key within this sequence. The transmission is arranged so that the receiver can wake up in arbitrary moment and find the nearest transmission of its searched message. Even if it does not know the position of the message in the sequence, it needs only to receive a small number of (the headers of) other messages to locate it properly. Thus it can save energy by keeping the radio switched off most of the time. We show that bit-reversal permutation has \"recursive bisection properties\" and, as a consequence, RBO can be implemented very efficiently with only constant number of @math -bit variables, where @math is the total number of messages in the sequence. The total number of the required receptions is at most @math in the model with perfect synchronization. The basic procedure of RBO (computation of the time slot for the next required reception) requires only @math bit-wise operations. We propose implementation mechanisms for realistic model (with imperfect synchronization), for operating systems (such as e.g. TinyOS)."
]
}
|
1201.3960
|
2128372493
|
This dissertation is a study on the design and analysis of novel, optimal routing and rate control algorithms in wireless, mobile communication networks. Congestion control and routing algorithms upto now have been designed and optimized for wired or wireless mesh networks. In those networks, optimal algorithms (optimal in the sense that either the throughput is maximized or delay is minimized, or the network operation cost is minimized) can be engineered based on the classic time scale decomposition assumption that the dynamics of the network are either fast enough so that these algorithms essentially see the average or slow enough that any changes can be tracked to allow the algorithms to adapt over time. However, as technological advancements enable integration of ever more mobile nodes into communication networks, any rate control or routing algorithms based, for example, on averaging out the capacity of the wireless mobile link or tracking the instantaneous capacity will perform poorly. The common element in our solution to engineering efficient routing and rate control algorithms for mobile wireless networks is to make the wireless mobile links seem as if they are wired or wireless links to all but few nodes that directly see the mobile links (either the mobiles or nodes that can transmit to or receive from the mobiles) through an appropriate use of queuing structures at these selected nodes. This approach allows us to design end-to-end rate control or routing algorithms for wireless mobile networks so that neither averaging nor instantaneous tracking is necessary.
|
There is a considerable body of literature @cite_67 @cite_55 on modeling the TCP window process in the presence of active queue management (AQM) systems, especially random early detection (RED) @cite_33 . @cite_55 presents a weak limit of the window size process by proving a weak convergence of triangular arrays. @cite_78 presents a fluid limit of the TCP window process, as the number of concurrent flows sharing a link goes to infinity, and the authors show that the deterministic limiting system provides a good approximation for the average queue size and total throughput. None of the previous works mentioned above treats the situation in which the loss rate cannot be tracked due to a mismatch between the channel change time-scale and the RTT time-scale.
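For readers unfamiliar with RED, the marking rule referenced above can be sketched as follows. This is a simplified version of the linear marking profile of @cite_33 : the parameter names are generic and the count-based correction of the full algorithm is omitted.

    def red_mark_probability(avg_queue, min_th, max_th, max_p):
        """Simplified RED: marking/dropping probability as a function of the
        exponentially averaged queue size."""
        if avg_queue < min_th:
            return 0.0          # below the lower threshold: no marking
        if avg_queue >= max_th:
            return 1.0          # above the upper threshold: mark/drop every arrival
        # between the thresholds the probability grows linearly up to max_p
        return max_p * (avg_queue - min_th) / (max_th - min_th)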
|
{
"cite_N": [
"@cite_55",
"@cite_67",
"@cite_33",
"@cite_78"
],
"mid": [
"2117068155",
"2145942024",
"2158733823",
""
],
"abstract": [
"We consider a discrete-time stochastic model of an ECN RED gateway where competing TCP sources share the link capacity. As the number of competing flows becomes large, the asymptotic queue behavior (normalized by the number of flows) at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent. A Central Limit Theorem complement is also presented, yielding a more accurate characterization of the asymptotic queue size. These results suggest a scalable yet accurate model of this complex large-scale stochastic feedback system, and crisply reveal the sources of queue fluctuations.",
"In this paper we study a previously developed linearized model of TCP and active queue management (AQM). We use classical control system techniques to develop controllers well suited for the application. The controllers are shown to have better theoretical properties than the well known RED controller. We present guidelines for designing stable controllers subject to network parameters like load level propagation delay etc. We also present simple implementation techniques which require a minimal change to RED implementations. The performance of the controllers are verified and compared with RED using ns simulations. The second of our designs, the proportional integral (PI) controller is shown to outperform RED significantly.",
"The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a present threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP IP network are used to illustrate the performance of RED gateways. >",
""
]
}
|
1201.3960
|
2128372493
|
This dissertation is a study on the design and analysis of novel, optimal routing and rate control algorithms in wireless, mobile communication networks. Congestion control and routing algorithms upto now have been designed and optimized for wired or wireless mesh networks. In those networks, optimal algorithms (optimal in the sense that either the throughput is maximized or delay is minimized, or the network operation cost is minimized) can be engineered based on the classic time scale decomposition assumption that the dynamics of the network are either fast enough so that these algorithms essentially see the average or slow enough that any changes can be tracked to allow the algorithms to adapt over time. However, as technological advancements enable integration of ever more mobile nodes into communication networks, any rate control or routing algorithms based, for example, on averaging out the capacity of the wireless mobile link or tracking the instantaneous capacity will perform poorly. The common element in our solution to engineering efficient routing and rate control algorithms for mobile wireless networks is to make the wireless mobile links seem as if they are wired or wireless links to all but few nodes that directly see the mobile links (either the mobiles or nodes that can transmit to or receive from the mobiles) through an appropriate use of queuing structures at these selected nodes. This approach allows us to design end-to-end rate control or routing algorithms for wireless mobile networks so that neither averaging nor instantaneous tracking is necessary.
|
Initially, the approach taken in intermittently connected networks and DTNs for routing was based on packet replication. The simplest way to make sure packets are delivered is to flood the ``mobile'' portion of the network so that the likelihood of a packet reaching the destination increases as more and more replicas are made @cite_12 . A more refined approach is to control the number of replicas of a packet so that there is a balance between increasing the delivery likelihood and still leaving some capacity for new packets to be injected into the network @cite_17 @cite_53 @cite_41 @cite_12 @cite_56 . Another refined approach is to learn the intermittently connected topology and use this knowledge to route and replicate through the ``best'' contacts and encounters and to avoid congestion @cite_38 @cite_44 @cite_37 @cite_43 @cite_31 .
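To make the copy-control idea concrete, here is a minimal sketch of the binary variant of Spray and Wait @cite_53 ; it is illustrative only, and the function name and copy budget are our own choices.

    def on_encounter(copies_held):
        """Binary Spray and Wait: split the remaining copy budget with the
        encountered relay; with a single copy left, wait for the destination.
        Returns (copies kept, copies handed over)."""
        if copies_held <= 1:
            return copies_held, 0      # 'wait' phase: deliver only to the destination
        handed = copies_held // 2      # 'spray' phase: give away half of the budget
        return copies_held - handed, handed

    # A source that starts with 8 copies hands 4 to the first relay it meets,
    # that relay later hands over 2, and so on until every carrier holds one copy.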
|
{
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_41",
"@cite_53",
"@cite_56",
"@cite_44",
"@cite_43",
"@cite_31",
"@cite_12",
"@cite_17"
],
"mid": [
"2162076967",
"2149891651",
"2142522527",
"2125957038",
"1923680614",
"2147830904",
"",
"",
"1572481965",
"1966518282"
],
"abstract": [
"We formulate the delay-tolerant networking routing problem, where messages are to be moved end-to-end across a connectivity graph that is time-varying but whose dynamics may be known in advance. The problem has the added constraints of finite buffers at each node and the general property that no contemporaneous end-to-end path may ever exist. This situation limits the applicability of traditional routing approaches that tend to treat outages as failures and seek to find an existing end-to-end path. We propose a framework for evaluating routing algorithms in such environments. We then develop several algorithms and use simulations to compare their performance with respect to the amount of knowledge they require about network topology. We find that, as expected, the algorithms using the least knowledge tend to perform poorly. We also find that with limited additional knowledge, far less than complete global knowledge, efficient algorithms can be constructed for routing in such environments. To the best of our knowledge this is the first such investigation of routing issues in DTNs.",
"Location is an important feature for many applications, and wireless networks can better serve their clients by anticipating client mobility. As a result, many location predictors have been proposed in the literature, though few have been evaluated with empirical evidence. This paper reports on the results of the first extensive empirical evaluation of location predictors, using a two-year trace of the mobility patterns of over 6,000 users on Dartmouth's campus-wide Wi-Fi wireless network. We implemented and compared the prediction accuracy of several location predictors drawn from two major families of domain-independent predictors, namely Markov-based and compression-based predictors. We found that low-order Markov predictors performed as well or better than the more complex and more space-consuming compression-based predictors. Predictors of both families fail to make a prediction when the recent context has not been previously seen. To overcome this drawback, we added a simple fallback feature to each predictor and found that it significantly enhanced its accuracy in exchange for modest effort. Thus the Order-2 Markov predictor with fallback was the best predictor we studied, obtaining a median accuracy of about 72 for users with long trace lengths. We also investigated a simplification of the Markov predictors, where the prediction is based not on the most frequently seen context in the past, but the most recent, resulting in significant space and computational savings. We found that Markov predictors with this recency semantics can rival the accuracy of standard Markov predictors in some cases. Finally, we considered several seemingly obvious enhancements, such as smarter tie-breaking and aging of context information, and discovered that they had little effect on accuracy. The paper ends with a discussion and suggestions for further work.",
"Disruption-tolerant networks (DTNs) attempt to route network messages via intermittently connected nodes. Routing in such environments is difficult because peers have little information about the state of the partitioned network and transfer opportunities between peers are of limited duration. In this paper, we propose MaxProp, a protocol for effective routing of DTN messages. MaxProp is based on prioritizing both the schedule of packets transmitted to other peers and the schedule of packets to be dropped. These priorities are based on the path likelihoods to peers according to historical data and also on several complementary mechanisms, including acknowledgments, a head-start for new packets, and lists of previous intermediaries. Our evaluations show that MaxProp performs better than protocols that have access to an oracle that knows the schedule of meetings between peers. Our evaluations are based on 60 days of traces from a real DTN network we have deployed on 30 buses. Our network, called UMassDieselNet, serves a large geographic area between five colleges. We also evaluate MaxProp on simulated topologies and show it performs well in a wide variety of DTN environments.",
"Intermittently connected mobile networks are sparse wireless networks where most of the time there does not exist a complete path from the source to the destination. These networks fall into the general category of Delay Tolerant Networks. There are many real networks that follow this paradigm, for example, wildlife tracking sensor networks, military networks, inter-planetary networks, etc. In this context, conventional routing schemes would fail.To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to significantly reduce the overhead of flooding-based schemes have often be plagued by large delays. With this in mind, we introduce a new routing scheme, called Spray and Wait, that \"sprays\" a number of copies into the network, and then \"waits\" till one of these nodes meets the destination.Using theory and simulations we show that Spray and Wait outperforms all existing schemes with respect to both average message delivery delay and number of transmissions per message delivered; its overall performance is close to the optimal scheme. Furthermore, it is highly scalable retaining good performance under a large range of scenarios, unlike other schemes. Finally, it is simple to implement and to optimize in order to achieve given performance goals in practice.",
"In this paper, we address the problem of routing in intermittently connected networks. In such networks there is no guarantee that a fully connected path between source and destination exists at any time, rendering traditional routing protocols unable to deliver messages between hosts. There does, however, exist a number of scenarios where connectivity is intermittent, but where the possibility of communication still is desirable. Thus, there is a need for a way to route through networks with these properties. We propose PRoPHET, a probabilistic routing protocol for intermittently connected networks and compare it to the earlier presented Epidemic Routing protocol through simulations. We show that PRoPHET is able to deliver more messages than Epidemic Routing with a lower communication overhead.",
"Delay-tolerant networks (DTNs) have the potential to connect devices and areas of the world that are under-served by current networks. A critical challenge for DTNs is determining routes through the network without ever having an end-to-end connection, or even knowing which \"routers\" will be connected at any given time. Prior approaches have focused either on epidemic message replication or on knowledge of the connectivity schedule. The epidemic approach of replicating messages to all nodes is expensive and does not appear to scale well with increasing load. It can, however, operate without any prior network configuration. The alternatives, by requiring a priori connectivity knowledge, appear infeasible for a self-configuring network.In this paper we present a practical routing protocol that only uses observed information about the network. We designed a metric that estimates how long a message will have to wait before it can be transferred to the next hop. The topology is distributed using a link-state routing protocol, where the link-state packets are \"flooded\" using epidemic routing. The routing is recomputed when connections are established. Messages are exchanged if the topology suggests that a connected node is \"closer\" than the current node.We demonstrate through simulation that our protocol provides performance similar to that of schemes that have global knowledge of the network topology, yet without requiring that knowledge. Further, it requires a significantly smaller quantity of buffer, suggesting that our approach will scale with the number of messages in the network, where replication approaches may not.",
"",
"",
"Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100 of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"Many DTN routing protocols use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, so these approaches have only an incidental effect on such routing metrics as maximum or average delivery latency. In this paper, we present RAPID , an intentional DTN routing protocol that can optimize a specific routing metric such as worst-case delivery latency or the fraction of packets that are delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities which determine how packets should be replicated in the system. We evaluate RAPID rigorously through a prototype of RAPID deployed over a vehicular DTN testbed of 40 buses and simulations based on real traces. To our knowledge, this is the first paper to report on a routing protocol deployed on a real DTN at this scale. Our results suggest that RAPID significantly outperforms existing routing protocols for several metrics. We also show empirically that for small loads RAPID is within 10 of the optimal performance."
]
}
|
1201.3960
|
2128372493
|
This dissertation is a study on the design and analysis of novel, optimal routing and rate control algorithms in wireless, mobile communication networks. Congestion control and routing algorithms upto now have been designed and optimized for wired or wireless mesh networks. In those networks, optimal algorithms (optimal in the sense that either the throughput is maximized or delay is minimized, or the network operation cost is minimized) can be engineered based on the classic time scale decomposition assumption that the dynamics of the network are either fast enough so that these algorithms essentially see the average or slow enough that any changes can be tracked to allow the algorithms to adapt over time. However, as technological advancements enable integration of ever more mobile nodes into communication networks, any rate control or routing algorithms based, for example, on averaging out the capacity of the wireless mobile link or tracking the instantaneous capacity will perform poorly. The common element in our solution to engineering efficient routing and rate control algorithms for mobile wireless networks is to make the wireless mobile links seem as if they are wired or wireless links to all but few nodes that directly see the mobile links (either the mobiles or nodes that can transmit to or receive from the mobiles) through an appropriate use of queuing structures at these selected nodes. This approach allows us to design end-to-end rate control or routing algorithms for wireless mobile networks so that neither averaging nor instantaneous tracking is necessary.
|
@cite_4 @cite_50 @cite_8 study networks that are closer to ours. In @cite_50 , distant groups of nodes are connected via mobiles, much like our network but with general random mobility. At the intra-group level, a MANET routing protocol is used for route discovery, and at the inter-group level, the Spray-and-Wait algorithm @cite_53 is used among mobiles to decrease forwarding time and increase delivery probability. @cite_4 augments AODV with DTN routing so that route discovery also determines whether a discovered route supports end-to-end IP routing or only hop-by-hop DTN forwarding, and to what extent. @cite_8 studies how two properties of the mobile nodes, namely whether a mobile is dedicated to serving a specific region (ownership) and whether the mobile movement can be scheduled and controlled by regions (scheduling time), affect performance metrics such as delay and efficiency.
|
{
"cite_N": [
"@cite_53",
"@cite_4",
"@cite_50",
"@cite_8"
],
"mid": [
"2125957038",
"2163071443",
"2142640475",
"2122354383"
],
"abstract": [
"Intermittently connected mobile networks are sparse wireless networks where most of the time there does not exist a complete path from the source to the destination. These networks fall into the general category of Delay Tolerant Networks. There are many real networks that follow this paradigm, for example, wildlife tracking sensor networks, military networks, inter-planetary networks, etc. In this context, conventional routing schemes would fail.To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to significantly reduce the overhead of flooding-based schemes have often be plagued by large delays. With this in mind, we introduce a new routing scheme, called Spray and Wait, that \"sprays\" a number of copies into the network, and then \"waits\" till one of these nodes meets the destination.Using theory and simulations we show that Spray and Wait outperforms all existing schemes with respect to both average message delivery delay and number of transmissions per message delivered; its overall performance is close to the optimal scheme. Furthermore, it is highly scalable retaining good performance under a large range of scenarios, unlike other schemes. Finally, it is simple to implement and to optimize in order to achieve given performance goals in practice.",
"Mobile Ad-hoc Network (MANET) routing protocols aim at establishing end-to-end paths between communicating nodes and thus support end-to-end semantics of existing transports and applications. In contrast, DTN-based communication schemes imply asynchronous communication (and thus often require new applications) but achieve better reachability, particularly in sparsely populated environments. In this paper, we suggest a hybrid scheme that combines AODV and DTN-based routing and allows keeping the AODV advantage of maintaining end-to-end semantics whenever possible while, at the same time, also offering DTN-based communication options whenever available---leaving the choice to the application. We present our protocol and system design, particularly including the interaction of AODV and DTN, demonstrate achievable performance gains based upon measurements, and report on initial experiments with our implementation in an emulation environment.",
"In this paper we propose HYMAD, a Hybrid DTN-MANET routing protocol which uses DTN between disjoint groups of nodes while using MANET routing within these groups. HYMAD is fully decentralized and only makes use of topological information exchanges between the nodes. We evaluate the scheme in simulation by replaying real life traces which exhibit this highly dynamic connectivity. The results show that HYMAD outperforms the multi-copy Spray-and-Wait DTN routing protocol it extends, both in terms of delivery ratio and delay, for any number of message copies. Our conclusion is that such a Hybrid DTN-MANET approach offers a promising venue for the delivery of elastic data in mobile ad-hoc networks as it retains the resilience of a pure DTN protocol while significantly improving performance.",
"The evolution of wireless devices along with the increase in user mobility have created new challenges such as network partitioning and intermittent connectivity. These new challenges have become apparent in many situations where the transmission of critical data is of high priority. Disaster rescue groups, for example, are equipped with numerous devices which constantly gather and transmit various forms of data. The challenge of establishing communication between groups of this type has led to an evolutionary form of networks which we consider in this paper, namely, delay tolerant mobile networks (DTMNs). Nodes in DTMNs usually form clusters that we define as regions. Nodes within each region have end-to-end paths between them. Both regions, as well as nodes within a region, can be either stationary or mobile. For such environments, we propose using a dedicated set of messengers that relay message bundles between these regions. Our goal is to understand how messenger scheduling can be used to improve network performance and connectedness. We develop several classes of messenger scheduling algorithms which can be used to achieve inter-regional communication in such environments. We use simulation to better understand the performance and tradeoffs between these algorithms."
]
}
|
1201.3960
|
2128372493
|
This dissertation is a study on the design and analysis of novel, optimal routing and rate control algorithms in wireless, mobile communication networks. Congestion control and routing algorithms upto now have been designed and optimized for wired or wireless mesh networks. In those networks, optimal algorithms (optimal in the sense that either the throughput is maximized or delay is minimized, or the network operation cost is minimized) can be engineered based on the classic time scale decomposition assumption that the dynamics of the network are either fast enough so that these algorithms essentially see the average or slow enough that any changes can be tracked to allow the algorithms to adapt over time. However, as technological advancements enable integration of ever more mobile nodes into communication networks, any rate control or routing algorithms based, for example, on averaging out the capacity of the wireless mobile link or tracking the instantaneous capacity will perform poorly. The common element in our solution to engineering efficient routing and rate control algorithms for mobile wireless networks is to make the wireless mobile links seem as if they are wired or wireless links to all but few nodes that directly see the mobile links (either the mobiles or nodes that can transmit to or receive from the mobiles) through an appropriate use of queuing structures at these selected nodes. This approach allows us to design end-to-end rate control or routing algorithms for wireless mobile networks so that neither averaging nor instantaneous tracking is necessary.
|
The networks that utilize mobile carriers to transport data have recently been studied extensively by @cite_60 @cite_38 @cite_50 @cite_4 @cite_8 @cite_13 @cite_10 @cite_71 @cite_56 @cite_41 @cite_17 @cite_12 @cite_43 and others. The primary focus of @cite_13 @cite_56 @cite_43 @cite_12 is to increase the data delivery probability and reduce delivery latency through replication in the context of delay-tolerant networks (DTN). Replication is useful in networks where mobile carriers move randomly because it increases the opportunities to transfer data from mobile nodes to static nodes and vice versa. In networks where the mobility patterns of mobiles are fixed, replication is not necessary. However, the drawback of fixed mobility patterns is that the network cannot dynamically respond to changes in the traffic loads. In addition, in those works delivery of the data to the destination is considered sufficient in itself, whereas in networks where the mobility pattern can be controlled we can not only guarantee data delivery but also ensure efficient and optimal use of network resources.
|
{
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_8",
"@cite_60",
"@cite_10",
"@cite_41",
"@cite_56",
"@cite_43",
"@cite_50",
"@cite_71",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2162076967",
"2163071443",
"2122354383",
"2112316694",
"2097625638",
"2142522527",
"1923680614",
"",
"2142640475",
"2151342861",
"2166406195",
"1572481965",
"1966518282"
],
"abstract": [
"We formulate the delay-tolerant networking routing problem, where messages are to be moved end-to-end across a connectivity graph that is time-varying but whose dynamics may be known in advance. The problem has the added constraints of finite buffers at each node and the general property that no contemporaneous end-to-end path may ever exist. This situation limits the applicability of traditional routing approaches that tend to treat outages as failures and seek to find an existing end-to-end path. We propose a framework for evaluating routing algorithms in such environments. We then develop several algorithms and use simulations to compare their performance with respect to the amount of knowledge they require about network topology. We find that, as expected, the algorithms using the least knowledge tend to perform poorly. We also find that with limited additional knowledge, far less than complete global knowledge, efficient algorithms can be constructed for routing in such environments. To the best of our knowledge this is the first such investigation of routing issues in DTNs.",
"Mobile Ad-hoc Network (MANET) routing protocols aim at establishing end-to-end paths between communicating nodes and thus support end-to-end semantics of existing transports and applications. In contrast, DTN-based communication schemes imply asynchronous communication (and thus often require new applications) but achieve better reachability, particularly in sparsely populated environments. In this paper, we suggest a hybrid scheme that combines AODV and DTN-based routing and allows keeping the AODV advantage of maintaining end-to-end semantics whenever possible while, at the same time, also offering DTN-based communication options whenever available---leaving the choice to the application. We present our protocol and system design, particularly including the interaction of AODV and DTN, demonstrate achievable performance gains based upon measurements, and report on initial experiments with our implementation in an emulation environment.",
"The evolution of wireless devices along with the increase in user mobility have created new challenges such as network partitioning and intermittent connectivity. These new challenges have become apparent in many situations where the transmission of critical data is of high priority. Disaster rescue groups, for example, are equipped with numerous devices which constantly gather and transmit various forms of data. The challenge of establishing communication between groups of this type has led to an evolutionary form of networks which we consider in this paper, namely, delay tolerant mobile networks (DTMNs). Nodes in DTMNs usually form clusters that we define as regions. Nodes within each region have end-to-end paths between them. Both regions, as well as nodes within a region, can be either stationary or mobile. For such environments, we propose using a dedicated set of messengers that relay message bundles between these regions. Our goal is to understand how messenger scheduling can be used to improve network performance and connectedness. We develop several classes of messenger scheduling algorithms which can be used to achieve inter-regional communication in such environments. We use simulation to better understand the performance and tradeoffs between these algorithms.",
"The highly successful architecture and protocols of today's Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. These problems are exacerbated by end nodes with limited power or memory resources. Often deployed in mobile and extreme environments lacking continuous connectivity, many such networks have their own specialized protocols, and do not utilize IP. To achieve interoperability between them, we propose a network architecture and application interface structured around optionally-reliable asynchronous message forwarding, with limited expectations of end-to-end connectivity and node resources. The architecture operates as an overlay above the transport layers of the networks it interconnects, and provides key services such as in-network data storage and retransmission, interoperable naming, authenticated forwarding and a coarse-grained class of service.",
"Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet, which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a \"least common denominator\" protocol that can operate successfully and (where required) reliably in multiple disparate environments would simplify the development and deployment of such applications. The Internet protocols are ill suited for this purpose. We identify three fundamental principles that would underlie a delay-tolerant networking (DTN) architecture and describe the main structural elements of that architecture, centered on a new end-to-end overlay network protocol called Bundling. We also examine Internet infrastructure adaptations that might yield comparable performance but conclude that the simplicity of the DTN architecture promises easier deployment and extension.",
"Disruption-tolerant networks (DTNs) attempt to route network messages via intermittently connected nodes. Routing in such environments is difficult because peers have little information about the state of the partitioned network and transfer opportunities between peers are of limited duration. In this paper, we propose MaxProp, a protocol for effective routing of DTN messages. MaxProp is based on prioritizing both the schedule of packets transmitted to other peers and the schedule of packets to be dropped. These priorities are based on the path likelihoods to peers according to historical data and also on several complementary mechanisms, including acknowledgments, a head-start for new packets, and lists of previous intermediaries. Our evaluations show that MaxProp performs better than protocols that have access to an oracle that knows the schedule of meetings between peers. Our evaluations are based on 60 days of traces from a real DTN network we have deployed on 30 buses. Our network, called UMassDieselNet, serves a large geographic area between five colleges. We also evaluate MaxProp on simulated topologies and show it performs well in a wide variety of DTN environments.",
"In this paper, we address the problem of routing in intermittently connected networks. In such networks there is no guarantee that a fully connected path between source and destination exists at any time, rendering traditional routing protocols unable to deliver messages between hosts. There does, however, exist a number of scenarios where connectivity is intermittent, but where the possibility of communication still is desirable. Thus, there is a need for a way to route through networks with these properties. We propose PRoPHET, a probabilistic routing protocol for intermittently connected networks and compare it to the earlier presented Epidemic Routing protocol through simulations. We show that PRoPHET is able to deliver more messages than Epidemic Routing with a lower communication overhead.",
"",
"In this paper we propose HYMAD, a Hybrid DTN-MANET routing protocol which uses DTN between disjoint groups of nodes while using MANET routing within these groups. HYMAD is fully decentralized and only makes use of topological information exchanges between the nodes. We evaluate the scheme in simulation by replaying real life traces which exhibit this highly dynamic connectivity. The results show that HYMAD outperforms the multi-copy Spray-and-Wait DTN routing protocol it extends, both in terms of delivery ratio and delay, for any number of message copies. Our conclusion is that such a Hybrid DTN-MANET approach offers a promising venue for the delivery of elastic data in mobile ad-hoc networks as it retains the resilience of a pure DTN protocol while significantly improving performance.",
"DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. What is the basis for a progressive, market-driven migration from e-governance to universal broadband connectivity that local users will pay for? DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity. DakNet has been successfully deployed in remote parts of both India and Cambodia at a cost two orders of magnitude less than that of traditional landline solutions.",
"Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from source to destination, or such a path is highly unstable and may break soon after it has been discovered. In this context, conventional routing schemes would fail. To deal with such networks we propose the use of an opportunistic hop-by-hop routing model. According to the model, a series of independent, local forwarding decisions are made, based on current connectivity and predictions of future connectivity information diffused through nodes' mobility. The important issue here is how to choose an appropriate next hop. To this end, we propose and analyze via theory and simulations a number of routing algorithms. The champion algorithm turns out to be one that combines the simplicity of a simple random policy, which is efficient in finding good leads towards the destination, with the sophistication of utility-based policies that efficiently follow good leads. We also state and analyze the performance of an oracle-based optimal algorithm, and compare it to the online approaches. The metrics used in the comparison are the average message delivery delay and the number of transmissions per message delivered.",
"Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100 of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"Many DTN routing protocols use a variety of mechanisms, including discovering the meeting probabilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the likelihood of finding a path with limited information, so these approaches have only an incidental effect on such routing metrics as maximum or average delivery latency. In this paper, we present RAPID , an intentional DTN routing protocol that can optimize a specific routing metric such as worst-case delivery latency or the fraction of packets that are delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities which determine how packets should be replicated in the system. We evaluate RAPID rigorously through a prototype of RAPID deployed over a vehicular DTN testbed of 40 buses and simulations based on real traces. To our knowledge, this is the first paper to report on a routing protocol deployed on a real DTN at this scale. Our results suggest that RAPID significantly outperforms existing routing protocols for several metrics. We also show empirically that for small loads RAPID is within 10 of the optimal performance."
]
}
|
1201.2531
|
2107785808
|
This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees. With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics while learning only limited information about the activities of individual households. For example, a supplier cannot tell from a user's trace when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.
|
Several papers have addressed the privacy problems of smart metering in the recent past @cite_9 @cite_8 @cite_20 @cite_19 @cite_11 @cite_1 @cite_14 @cite_0 . However, only a few of them have proposed technical solutions to protect users' privacy. In @cite_20 @cite_11 , the authors discuss the different security aspects of smart metering and the conflicting interests among stakeholders. The privacy of billing is considered in @cite_14 @cite_8 . These techniques use zero-knowledge proofs to ensure that the fee calculated by the user is correct without disclosing any consumption data.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_20",
"@cite_11"
],
"mid": [
"1993673439",
"2159065056",
"1975153313",
"2135193968",
"2096219879",
"2156217225",
"2407814236",
""
],
"abstract": [
"Smart grid proposals threaten user privacy by potentially disclosing fine-grained consumption data to utility providers, primarily for time-of-use billing, but also for profiling, settlement, forecasting, tariff and energy efficiency advice. We propose a privacy-preserving protocol for general calculations on fine-grained meter readings, while keeping the use of tamper evident meters to a strict minimum. We allow users to perform and prove the correctness of computations based on readings on their own devices, without disclosing any fine grained consumption. Applying the protocols to time-of-use billing is particularly simple and efficient, but we also support a wider variety of tariff policies. Cryptographic proofs and multiple implementations are used to show the proposed protocols are secure and efficient.",
"Household smart meters that measure power consumption in real-time at fine granularities are the foundation of a future smart electricity grid. However, the widespread deployment of smart meters has serious privacy implications since they inadvertently leak detailed information about household activities. In this paper, we show that even without a priori knowledge of household activities or prior training, it is possible to extract complex usage patterns from smart meter data using off-the-shelf statistical methods. Our analysis uses two months of data from three homes, which we instrumented to log aggregate household power consumption every second. With the data from our small-scale deployment, we demonstrate the potential for power consumption patterns to reveal a range of information, such as how many people are in the home, sleeping routines, eating routines, etc. We then sketch out the design of a privacy-enhancing smart meter architecture that allows an electric utility to achieve its net metering goals without compromising the privacy of its customers.",
"The security and privacy of future smart grid and smart metering networks is important to their rollout and eventual acceptance by the public: research in this area is ongoing and smart meter users will need to be reassured that their data is secure. This paper describes a method for securely anonymizing frequent (for example, every few minutes) electrical metering data sent by a smart meter. Although such frequent metering data may be required by a utility or electrical energy distribution network for operational reasons, this data may not necessarily need to be attributable to a specific smart meter or consumer. It does, however, need to be securely attributable to a specific location (e.g. a group of houses or apartments) within the electricity distribution network. The method described in this paper provides a 3rd party escrow mechanism for authenticated anonymous meter readings which are difficult to associate with a particular smart meter or customer. This method does not preclude the provision of attributable metering data that is required for other purposes such as billing, account management or marketing research purposes.",
"Electricity suppliers have started replacing traditional electricity meters with so-called smart meters, which can transmit current power consumption levels to the supplier within short intervals. Though this is advantageous for the electricity suppliers' planning purposes, and also allows the customers a more detailed look at their usage behavior, it means a considerable risk for privacy. The detailed information can be used to judge whether persons are in the household, when they come home, which electric devices they use (e.g. when they watch TV), and so forth. In this work, we introduce the \"smart metering privacy model\" for measuring the degree of privacy that a smart metering application can provide. Moreover, we present two design solutions both with and without involvement of trusted third parties. We show that the solution with trusted party can provide \"perfect privacy\" under certain conditions.",
"The first part of this paper discusses developments wrt. smart (electricity) meters (simply called E-meters) in general, with emphasis on security and privacy issues. The second part will be more technical and describes protocols for secure communication with E-meters and for fraud detection (leakage) in a privacy-preserving manner. Our approach uses a combination of Paillier's additive homomorphic encryption and additive secret sharing to compute the aggregated energy consumption of a given set of users.",
"In this paper, we discuss symmetric-key and public-key protocols for key management in electricity transmission and distribution substations — both for communication within substations, and between substations and the network control center. Key management in the electricity network is widely regarded as a challenging problem, not only because of the scale, but also due to the fact that any mechanism must be implemented in resource-constrained environments. NISTIR 7628, the foundation document for the architecture of the US Smart Grid, mentions key management as one of the most important research areas, and the IEC 62351 standards committee has already initiated a new specification dedicated to key management. In this document, we describe different variants of symmetric-key and public-key protocols. Our design is motivated by the need to keep the mechanism simple, robust, usable and still cost effective. It is important to take into account the complexity and the costs involved not just in the initial bootstrapping of trust but also in subsequent key management operations like key update and revocation. It is vital to determine the complexity and the cost of recovery mechanisms — recovery not only from malicious, targeted attacks but also from unintentional failures. We present a detailed threat model, analysing a range of scenarios from physical intrusion through disloyal maintenance personnel to supply-chain attacks, network intrusions and attacks on central systems. We conclude that while using cryptography to secure wide area communication between the substation and the network control center brings noticeable benefits, the benefits of using cryptography within the substation bay are much less obvious; we expect that any such use will be essentially for compliance. The protocols presented in this paper are informed by this threat model and are optimised for robustness, including simplicity, usability and cost.",
"Smart grids are a hot topic, with the US administration devoting billions of dollars to modernising the electricity infrastructure. Significant action is likely in metering, where the largest and most radical change may come in the European Union. The EU is strongly encouraging its 27 Member States to replace utility meters with ‘smart meters’ by 2022. This will be a massive project: the UK, for example, looks set to replace 47m meters at a cost of perhaps £350 each. Yet it is not at all clear what it means for a meter to be secure. The utility wants to cut energy theft, so it wants the ability to disable any meter remotely; but a prudent nation state might be wary of a facility that could let an attacker turn off the lights. Again, the utility may want to monitor its customers’ consumption by the half hour, so it can price discriminate more effectively; the competition authorities may find this abhorrent. Other parts of government might find it convenient to have access to fine-grained consumption data, but might find themselves up against privacy law. There are at least half-a-dozen different stakeholders with different views on security – which can refer to information, to money, or to the supply of electricity. And it’s not even true that more security is always better: some customers may opt for an interruptible supply to save money. In short, energy metering is ripe for a security-economics analysis, and in this paper we attempt a first cut. We end up with five recommendations for the regulation of a future smart meter infrastructure.",
""
]
}
|
1201.2531
|
2107785808
|
This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees. With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics while learning only limited information about the activities of individual households. For example, a supplier cannot tell from a user's trace when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.
|
Seemingly, the privacy problem of monitoring the sum consumption of multiple users may be solved by simply anonymizing individual measurements, as in @cite_9 , or by using some mixnet. However, these ``ad-hoc'' techniques are dangerous and do not provide any real assurance of privacy. Several prominent examples in history have shown that ad-hoc methods do not work @cite_18 . Moreover, these techniques require a trusted third party who performs the anonymization. The authors in @cite_1 perturb the released aggregate with random noise and use a model different from ours to analyze the privacy of their scheme. However, they do not encrypt individual measurements, which means that the noise added by each user must be large enough on its own to guarantee reasonable privacy. As the individual noise shares sum up during aggregation, the resulting noise renders the aggregate useless. In contrast, @cite_0 uses homomorphic encryption to guarantee privacy for individual measurements. However, the aggregate is not perturbed, which means that it is not differentially private.
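For concreteness, the perturbation underlying such schemes is usually the textbook Laplace mechanism, sketched below for a trusted aggregator. The function names and parameters are illustrative; this is not the exact construction of any of the cited works.

    import math
    import random

    def laplace_noise(sensitivity, epsilon):
        """Sample Laplace(0, b) noise with scale b = sensitivity / epsilon,
        the usual calibration for an epsilon-differentially-private sum."""
        b = sensitivity / epsilon
        u = random.uniform(-0.5, 0.5)
        return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def private_aggregate(readings, sensitivity, epsilon):
        """Release the noisy sum of the meter readings."""
        return sum(readings) + laplace_noise(sensitivity, epsilon)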
|
{
"cite_N": [
"@cite_1",
"@cite_0",
"@cite_9",
"@cite_18"
],
"mid": [
"2135193968",
"2096219879",
"1975153313",
"2170540710"
],
"abstract": [
"Electricity suppliers have started replacing traditional electricity meters with so-called smart meters, which can transmit current power consumption levels to the supplier within short intervals. Though this is advantageous for the electricity suppliers' planning purposes, and also allows the customers a more detailed look at their usage behavior, it means a considerable risk for privacy. The detailed information can be used to judge whether persons are in the household, when they come home, which electric devices they use (e.g. when they watch TV), and so forth. In this work, we introduce the \"smart metering privacy model\" for measuring the degree of privacy that a smart metering application can provide. Moreover, we present two design solutions both with and without involvement of trusted third parties. We show that the solution with trusted party can provide \"perfect privacy\" under certain conditions.",
"The first part of this paper discusses developments wrt. smart (electricity) meters (simply called E-meters) in general, with emphasis on security and privacy issues. The second part will be more technical and describes protocols for secure communication with E-meters and for fraud detection (leakage) in a privacy-preserving manner. Our approach uses a combination of Paillier's additive homomorphic encryption and additive secret sharing to compute the aggregated energy consumption of a given set of users.",
"The security and privacy of future smart grid and smart metering networks is important to their rollout and eventual acceptance by the public: research in this area is ongoing and smart meter users will need to be reassured that their data is secure. This paper describes a method for securely anonymizing frequent (for example, every few minutes) electrical metering data sent by a smart meter. Although such frequent metering data may be required by a utility or electrical energy distribution network for operational reasons, this data may not necessarily need to be attributable to a specific smart meter or consumer. It does, however, need to be securely attributable to a specific location (e.g. a group of houses or apartments) within the electricity distribution network. The method described in this paper provides a 3rd party escrow mechanism for authenticated anonymous meter readings which are difficult to associate with a particular smart meter or customer. This method does not preclude the provision of attributable metering data that is required for other purposes such as billing, account management or marketing research purposes.",
"The question of how to publish an anonymized search log was brought to the forefront by a well-intentioned, but privacy-unaware AOL search log release. Since then a series of ad-hoc techniques have been proposed in the literature, though none are known to be provably private. In this paper, we take a major step towards a solution: we show how queries, clicks and their associated perturbed counts can be published in a manner that rigorously preserves privacy. Our algorithm is decidedly simple to state, but non-trivial to analyze. On the opposite side of privacy is the question of whether the data we can safely publish is of any use. Our findings offer a glimmer of hope: we demonstrate that a non-negligible fraction of queries and clicks can indeed be safely published via a collection of experiments on a real search log. In addition, we select an application, keyword generation, and show that the keyword suggestions generated from the perturbed data resemble those generated from the original data."
]
}
|
1201.2531
|
2107785808
|
This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees. With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics while learning only limited information about the activities of individual households. For example, a supplier cannot tell from a user's trace when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.
|
The notion of differential privacy was first proposed in @cite_6 . The main advantage of differential privacy over other privacy models is that it does not specify the prior knowledge of the adversary and provides a rigorous privacy guarantee provided that each user's data is statistically independent @cite_10 . Initial works on differential privacy focused on the problem of how a trusted curator (aggregator), who collects all data from users, can release statistics in a differentially private way. By contrast, our scheme ensures differential privacy even if the curator is untrusted. Although @cite_5 describes protocols for generating shares of random noise that are secure against malicious participants, it requires communication between users and uses expensive secret sharing techniques, resulting in high overhead when the number of users is large. Similarly, traditional Secure Multiparty Computation (SMC) techniques @cite_17 @cite_3 also require interaction between users. All these solutions are impractical for resource-constrained smart meters, where all the computation is done by the aggregator and users are not supposed to communicate with each other.
|
{
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"2728487654",
"2951011752",
"2610910029",
"2054922243",
""
],
"abstract": [
"We introduce a new approach to multiparty computation (MPC) basing it on homomorphic threshold crypto-systems. We show that given keys for any sufficiently efficient system of this type, general MPC protocols for n players can be devised which are secure against an active adversary that corrupts any minority of the players. The total number of bits sent is O(nk|C|), where k is the security parameter and |C| is the size of a (Boolean) circuit computing the function to be securely evaluated. An earlier proposal by Franklin and Haber with the same complexity was only secure for passive adversaries, while all earlier protocols with active security had complexity at least quadratic in n. We give two examples of threshold cryptosystems that can support our construction and lead to the claimed complexities.",
"We present an approach to differentially private computation in which one does not scale up the magnitude of noise for challenging queries, but rather scales down the contributions of challenging records. While scaling down all records uniformly is equivalent to scaling up the noise magnitude, we show that scaling records non-uniformly can result in substantially higher accuracy by bypassing the worst-case requirements of differential privacy for the noise magnitudes. This paper details the data analysis platform wPINQ, which generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a few simple operators (including a non-uniformly scaling Join operator) wPINQ can reproduce (and improve) several recent results on graph analysis and introduce new generalizations (e.g., counting triangles with given degrees). We also show how to integrate probabilistic inference techniques to synthesize datasets respecting more complicated (and less easily interpreted) measurements.",
"In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ i f(d i ), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.",
"Differential privacy is a powerful tool for providing privacy-preserving noisy query answers over statistical databases. It guarantees that the distribution of noisy query answers changes very little with the addition or deletion of any tuple. It is frequently accompanied by popularized claims that it provides privacy without any assumptions about the data and that it protects against attackers who know all but one record. In this paper we critically analyze the privacy protections offered by differential privacy. First, we use a no-free-lunch theorem, which defines non-privacy as a game, to argue that it is not possible to provide privacy and utility without making assumptions about how the data are generated. Then we explain where assumptions are needed. We argue that privacy of an individual is preserved when it is possible to limit the inference of an attacker about the participation of the individual in the data generating process. This is different from limiting the inference about the presence of a tuple (for example, Bob's participation in a social network may cause edges to form between pairs of his friends, so that it affects more than just the tuple labeled as \"Bob\"). The definition of evidence of participation, in turn, depends on how the data are generated -- this is how assumptions enter the picture. We explain these ideas using examples from social network research as well as tabular data for which deterministic statistics have been previously released. In both cases the notion of participation varies, the use of differential privacy can lead to privacy breaches, and differential privacy does not always adequately limit inference about participation.",
""
]
}
|
1201.2531
|
2107785808
|
This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees. With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics while learning only limited information about the activities of individual households. For example, a supplier cannot tell from a user's trace when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.
|
Two works closely related to ours are @cite_13 and @cite_21 . In @cite_13 , the authors propose a scheme to aggregate sums over multiple slots in a differentially private way when the aggregator is untrusted. However, they use the threshold Paillier cryptosystem @cite_15 for homomorphic encryption, which is much more expensive than the scheme of @cite_4 that we use. They also use a different noise distribution technique, which requires several rounds of message exchange between the users and the aggregator. By contrast, our solution is much simpler and more efficient: it requires only a single message exchange if there are no node failures; otherwise, we need only one extra round. In addition, our solution does not rely on expensive public-key cryptography during aggregation.
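To illustrate why modular-addition-based encryption is so much cheaper than Paillier-style public-key operations, the following sketch shows an additively homomorphic "stream cipher" in the spirit of the scheme described in the @cite_4 abstract. All names and parameters here are illustrative assumptions rather than the exact protocol of either paper: each meter masks its reading with a pseudorandom pad modulo M, ciphertexts can be summed directly, and the aggregator removes only the summed pads.

```python
import hashlib

M = 2 ** 32  # modulus; must exceed the largest possible sum of readings (assumption)

def keystream(secret: bytes, round_id: int) -> int:
    # Per-round pseudorandom pad derived from a per-meter secret (illustrative PRF).
    digest = hashlib.sha256(secret + round_id.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % M

def encrypt(reading: int, secret: bytes, round_id: int) -> int:
    # Additively homomorphic "stream cipher": ciphertext = reading + pad (mod M).
    return (reading + keystream(secret, round_id)) % M

def aggregate(ciphertexts, secrets, round_id: int) -> int:
    # The aggregator adds up ciphertexts and removes the summed pads;
    # all operations are cheap modular additions, no public-key arithmetic.
    pad_sum = sum(keystream(s, round_id) for s in secrets) % M
    return (sum(ciphertexts) - pad_sum) % M

readings = [120, 305, 87]                       # per-meter consumption in one slot
secrets = [b"meter-0", b"meter-1", b"meter-2"]  # hypothetical pre-shared keys
cts = [encrypt(r, s, round_id=42) for r, s in zip(readings, secrets)]
assert aggregate(cts, secrets, round_id=42) == sum(readings)
```

In a real deployment the pads would be set up so that the aggregator can remove only their sum (for instance, pads that cancel across meters), rather than each individual pad as in this toy example; the sketch only shows why aggregation reduces to modular additions.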
|
{
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_13",
"@cite_4"
],
"mid": [
"1522597308",
"2146673169",
"2104803737",
"2102832611"
],
"abstract": [
"Several public key cryptosystems with additional homomorphic properties have been proposed so far. They allow to perform computation with encrypted data without the knowledge of any secret information. In many applications, the ability to perform decryption, i.e. the knowledge of the secret key, gives a huge power. A classical way to reduce the trust in such a secret owner, and consequently to increase the security, is to share the secret between many entities in such a way that cooperation between them is necessary to decrypt. In this paper, we propose a distributed version of the Paillier cryptosystem presented at Eurocrypt '99. This shared scheme can for example be used in an electronic voting scheme or in a lottery where a random number related to the winning ticket has to be jointly chosen by all participants.",
"A private stream aggregation (PSA) system contributes a user's data to a data aggregator without compromising the user's privacy. The system can begin by determining a private key for a local user in a set of users, wherein the sum of the private keys associated with the set of users and the data aggregator is equal to zero. The system also selects a set of data values associated with the local user. Then, the system encrypts individual data values in the set based in part on the private key to produce a set of encrypted data values, thereby allowing the data aggregator to decrypt an aggregate value across the set of users without decrypting individual data values associated with the set of users, and without interacting with the set of users while decrypting the aggregate value. The system also sends the set of encrypted data values to the data aggregator.",
"We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data. To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of Θ(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from Θ(n) to roughly Θ(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users.",
"Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation. It thus becomes essential to the lifetime of a WSN to minimize the number of bits sent by each device. One well-known approach is to aggregate sensor data (e.g., by adding) along the path from sensors to the sink. Aggregation becomes especially challenging if end-to-end privacy between sensors and the sink is required. In this paper, we propose a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The new cipher only uses modular additions (with very small moduli) and is therefore very well suited for CPU-constrained devices. We show that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain."
]
}
|
1201.2531
|
2107785808
|
This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees. With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics while learning only limited information about the activities of individual households. For example, a supplier cannot tell from a user's trace when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.
|
A recent paper @cite_21 proposes another technique to privately aggregate time-series data. This work differs from ours as follows: (1) they use a Diffie-Hellman-based encryption scheme, whereas our construction relies on a more efficient scheme that uses only modular additions. This approach is better adapted to resource-constrained devices like smart meters. (2) Although @cite_21 , unlike our approach, does not require the establishment (and storage) of pairwise keys between nodes, it is unclear how it can be extended to tolerate node and communication failures. By contrast, our scheme is more robust, as the encryption key of non-responding nodes is known to other nodes in the network, which can help to recover the aggregate. (3) Finally, @cite_21 uses a noise generation method different from ours, but this technique only satisfies the relaxed @math -differential privacy definition. Indeed, in their scheme, each node adds noise probabilistically, which means that, with some positive probability @math , none of the nodes adds any noise. Although @math can be made arbitrarily small, this also decreases the utility. By contrast, in our scheme, @math while ensuring nearly optimal utility.
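As a concrete, purely illustrative example of how small per-meter noise shares can add up to a single well-behaved noise term at the aggregator, the sketch below uses the infinite divisibility of the Laplace distribution: a Laplace sample is distributed as the sum over n users of differences of two Gamma(1/n, λ) variables, so every meter adds only a small share while the aggregate is perturbed by exactly one Laplace sample. This is a standard construction for distributed Laplace noise, not necessarily the exact noise distribution mechanism used in this paper.

```python
import numpy as np

def laplace_noise_share(n_users: int, scale: float, rng: np.random.Generator) -> float:
    # One user's share: difference of two Gamma(1/n, scale) draws. The sum of
    # n such shares is distributed exactly as Laplace(0, scale).
    return rng.gamma(1.0 / n_users, scale) - rng.gamma(1.0 / n_users, scale)

rng = np.random.default_rng(0)
n, scale = 1000, 2.0                     # scale = sensitivity / epsilon in a real setting
readings = rng.integers(0, 500, size=n)  # toy per-household readings
noisy = readings + np.array([laplace_noise_share(n, scale, rng) for _ in range(n)])

# The total perturbation behaves like a single Laplace(scale) sample,
# even though each individual share is tiny.
print(noisy.sum() - readings.sum())
```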
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2146673169"
],
"abstract": [
"A private stream aggregation (PSA) system contributes a user's data to a data aggregator without compromising the user's privacy. The system can begin by determining a private key for a local user in a set of users, wherein the sum of the private keys associated with the set of users and the data aggregator is equal to zero. The system also selects a set of data values associated with the local user. Then, the system encrypts individual data values in the set based in part on the private key to produce a set of encrypted data values, thereby allowing the data aggregator to decrypt an aggregate value across the set of users without decrypting individual data values associated with the set of users, and without interacting with the set of users while decrypting the aggregate value. The system also sends the set of encrypted data values to the data aggregator."
]
}
|
1201.2702
|
2951650674
|
This work studies the problem of 2-dimensional searching for the 3-sided range query of the form @math in both main and external memory, by considering a variety of input distributions. We present three sets of solutions, each of which examines the 3-sided problem in both the RAM and the I/O model. The presented data structures are deterministic, and the expectation is with respect to the input distribution.
|
In the RAM model, the only dynamic sublogarithmic bounds for this problem are due to Willard @cite_25 , who achieves @math worst-case or @math randomized update time and @math query time using linear space. This solution makes no assumptions on the input distribution.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2041550253"
],
"abstract": [
"This article illustrates several examples of computer science problems whose performance can be improved with the use of either the fusion trees [Fredman and Willard, J. Comput. System Sci., 47 (1993), pp. 424--436; Fredman and Willard, J. Comput. System Sci., 48 (1994), pp. 533--551] or one of several recent improvements to this data structure. It is likely that many other data structures can also have their performance improved with fusion trees. The examples here are only illustrative."
]
}
|
1201.2702
|
2951650674
|
This work studies the problem of 2-dimensional searching for the 3-sided range query of the form @math in both main and external memory, by considering a variety of input distributions. We present three sets of solutions, each of which examines the 3-sided problem in both the RAM and the I/O model. The presented data structures are deterministic, and the expectation is with respect to the input distribution.
|
Many external data structures, such as grid files, various quad-trees, z-orders and other space-filling curves, k-d-B-trees, hB-trees and various R-trees, have been proposed. A recent survey can be found in @cite_22 . These data structures are often used in applications because they are relatively simple, require linear space and perform well in practice most of the time. However, they all have highly sub-optimal worst-case (w.c.) performance, and their expected performance is usually not guaranteed by theoretical bounds, since they are based on heuristic rules for the construction and update operations.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2106642566"
],
"abstract": [
"Search operations in databases require special support at the physical level. This is true for conventional databases as well as spatial databases, where typical search operations include the point query (find all objects that contain a given search point) and the region query (find all objects that overlap a given search region). More than ten years of spatial database research have resulted in a great variety of multidimensional access methods to support such operations. We give an overview of that work. After a brief survey of spatial data management in general, we first present the class of point access methods , which are used to search sets of points in two or more dimensions. The second part of the paper is devoted to spatial access methods to handle extended objects, such as rectangles or polyhedra. We conclude with a discussion of theoretical and experimental results concerning the relative performance of various approaches."
]
}
|
1201.2702
|
2951650674
|
This work studies the problem of 2-dimensional searching for the 3-sided range query of the form @math in both main and external memory, by considering a variety of input distributions. We present three sets of solutions, each of which examines the 3-sided problem in both the RAM and the I/O model. The presented data structures are deterministic, and the expectation is with respect to the input distribution.
|
Moreover, several attempts have been made to externalize Priority Search Trees, including @cite_11 , @cite_20 , @cite_23 , @cite_16 and @cite_8 , but none of them is optimal. The worst-case optimal external memory solution (External Priority Search Tree) was presented in @cite_0 . It consumes @math disk blocks, performs 3-sided range queries in @math I/Os w.c. and supports updates in @math I/Os amortized. This solution makes no assumptions on the input distribution.
|
{
"cite_N": [
"@cite_8",
"@cite_0",
"@cite_23",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2085088576",
"2043148321",
"1994101999",
"25426087",
"113831205",
"16477014"
],
"abstract": [
"",
"In this paper we settle several longstanding open problems in theory of indexability and external orthogonal range searching. In the rst part of the paper, we apply the theory of indexability to the problem of two-dimensional range searching. We show that the special case of 3-sided querying can be solved with constant redundancy and access overhead. From this, we derive indexing schemes for general 4-sided range queries that exhibit an optimal tradeo between redundancy and access overhead. In the second part of the paper, we develop dynamic external memory data structures for the two query types. Our structure for 3-sided queries occupies O(N=B) disk blocks, and it supports insertions and deletions in O(log B N) I Os and queries in O(log B N + T=B) I Os, where B is the disk block size, N is the number of points, and T is the query output size. These bounds are optimal. Our structure for general (4-sided) range searching occupies O (N=B)(log(N=B))= log log B N disk blocks and answers queries in O(log B N + T=B) I Os, which are optimal. It also supports updates in O (log B N)(log(N=B))= log log B N I Os. Center for Geometric Computing, Department of Computer Science, Duke University, Box 90129, Durham, NC 27708 0129. Supported in part by the U.S. Army Research O ce through MURI grant DAAH04 96 1 0013 and by the National Science Foundation through ESS grant EIA 9870734. Part of this work was done while visiting BRICS, Department of Computer Science, University of Aarhus, Denmark. Email: large@cs.duke.edu. yDepartment of Computer Sciences, University of Texas at Austin, Austin, TX 78712-1188. Email vsam@cs.utexas.edu zCenter for Geometric Computing, Department of Computer Science, Duke University, Box 90129, Durham, NC 27708 0129. Supported in part by the U.S. Army Research O ce through MURI grant DAAH04 96 1 0013 and by the National Science Foundation through grants CCR 9522047 and EIA 9870734. Part of this work was done while visiting BRICS, Department of Computer Science, University of Aarhus, Denmark and I.N.R.I.A., Sophia Antipolis, France. Email: jsv@cs.duke.edu.",
"We examine I O-efficient data structures that provide indexing support for new data models. The database languages of these models include concepts from constraint programming (e.g., relational tuples are generated to conjunctions of constraints) and from object-oriented programming (e.g., objects are organized in class hierarchies). Letnbe the size of the database,cthe number of classes,Bthe page size on secondary storage, andtthe size of the output of a query: (1) Indexing by one attribute in many constraint data models is equivalent to external dynamic interval management, which is a special case of external dynamic two-dimensional range searching. We present a semi-dynamic data structure for this problem that has worst-case spaceO(n B) pages, query I O timeO(logBn+t B) andO(logBn+(logBn)2 B) amortized insert I O time. Note that, for the static version of this problem, this is the first worst-case optimal solution. (2) Indexing by one attribute and by class name in an object-oriented model, where objects are organized as a forest hierarchy of classes, is also a special case of external dynamic two-dimensional range searching. Based on this observation, we first identify a simple algorithm with good worst-case performance, query I O timeO(log2clogBn+t B), update I O timeO(log2clogBn) and spaceO((n B)log2c) pages for the class indexing problem. Using the forest structure of the class hierarchy and techniques from the constraint indexing problem, we improve its query I O time toO(logBn+t B+log2B).",
"External 2-dimensional searching is a fundamental problem with many applications in relational, object-oriented, spatial, and temporal databases. For example, interval intersection can be reduced to 2-sided, 2-dimensional searching and indexing class hierarchies of objects to 3-sided, 2-dimensional searching. Path caching is a new technique that can be used to transform a number of time space efficient data structures for internal 2-dimensional searching (such as segment trees, interval trees, and priority search trees) into I O efficient external ones. Let n be the size of the database, B the page size, and t the output size of a query. Using path caching, we provide the first data structure with optimal I O query time @math for 2-sided, 2-dimensional searching. Furthermore, we show that path caching requires a small space overhead @math and is simple enough to admit dynamic updates in optimal @math amortized time. We also extend this data structure to handle 3-sided, 2-dimensional searching with optimal I O query-time, at the expense of slightly higher storage and update overheads.",
"",
"XP-trees (external priority search trees) are simple, practical and versatile structures supporting searches on points, intervals and higher dimensional spatial objects in secondary memory. Our approach to developing external structures is to consider worst-case efficient (internal) data structures from computational geometry, here the priority search tree. With the XP-tree we succeeded in transferring the underlying principle in an appropriate way to organize secondary storage, arriving at a practically useful structure. Together with external counterparts of other structures from computational geometry, such as segment tree, interval tree, and range tree, XP-trees can be used as building blocks to construct nested tree structures directly representing sets of spatial objects in higher dimensions. Like the internal counterpart, the XP-tree supports \"halfrange\" queries on points in two dimensions. Although XP-trees are not fully dynamic we believe them to be useful in many applications. Regarding balanced XP-trees, O(logdn + t) external accesses in halfrange queries are guaranteed, where n is the number of points, d the degree of the XP-tree, and t the number of points within a halfrange. Mapping intervals, as one-dimensional spatial objects, into two-dimensional points in the standard way, the structure supports all interesting kinds of queries on intervals. An XP-tree also supports queries on spatial objects in more dimensions by projecting them to an interval in one dimension. Hence the XP-tree can be used as an index structure in temporal or geometric databese systems. Experimentalperformance evaluations show that halfrange queries on two-dimensional point sets as well as all types of interval and point queries on sets of intervals are supported efficiently by the structure. Searching is quite fast, when no or few objects are retrieved. Although the shape of an XP-tree degenerates when a point set representation of a set of intervals is stored, the searching behaviour is practically the same as for a corresponding balanced structure. Searching is either very fast or has a large \"retrieval ratio\"."
]
}
|
1201.2462
|
2028196280
|
We study the optimality of the minimax risk of truncated series estimators for symmetric convex polytopes. We show that the optimal truncated series estimator is within a factor of @math of the optimal if the polytope is defined by @math hyperplanes. This represents the first such bound for general convex bodies. In proving our result, we first define a geometric quantity, called the approximation radius, for lower bounding the minimax risk. We then derive our bounds by establishing a connection between the approximation radius and the Kolmogorov width, the quantity that provides upper bounds for the truncated series estimator. Besides, our proof contains several ingredients which might be of independent interest: 1. The notion of approximation radius depends on the volume of the body. It is an intuitive notion and is flexible enough to yield strong minimax lower bounds; 2. The connection between the approximation radius and the Kolmogorov width is a consequence of a novel duality relationship on the Kolmogorov width, developed by utilizing some deep results from convex geometry.
|
On the other hand, the truncated series estimator has a nice geometric interpretation and is related to the classical Kolmogorov width of the underlying space. In addition to its simplicity, @cite_1 shows that it is asymptotically optimal for the classes of orthosymmetric and quadratically convex objects. This includes the class of diagonally stretched @math balls for @math . The present paper shows that the power of truncated series estimators also extends to the family of symmetric convex polytopes, as long as the polytope is defined by @math hyperplanes.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2004853061"
],
"abstract": [
"Consider estimating the mean of a standard Gaussian shift when that mean is known to lie in an orthosymmetric quadratically convex set in l 2 . The minimax risk among linear estimates is within 25 of the minimax risk among all estimates. The minimax risk among truncated series estimates is within a factor 4.44 of the minimax risk. This implies that the difficulty of estimation ― a statistical quantity ― is measured fairly precisely by the n-widths ― a geometric quantity. If the set is not quadratically convex, as in the case of l p -bodies with p<2, things change appreciably. Minimax linear estimators may be out-performed arbitrarily by nonlinear estimates"
]
}
|
1201.1717
|
1907304534
|
Hyperbolicity is a property of a graph that may be viewed as a "soft" version of a tree, and recent empirical and theoretical work has suggested that many graphs arising in Internet and related data applications have hyperbolic properties. We consider Gromov's notion of δ-hyperbolicity, and establish several results for small-world and tree-like random graph models. First, we study the hyperbolicity of Kleinberg small-world random graphs and show that the hyperbolicity of these random graphs is not significantly improved compared to the graph diameter, even when the construction greatly improves decentralized navigation. Next we study a class of tree-like graphs called ringed trees that have constant hyperbolicity. We show that adding random links among the leaves, in a manner similar to the small-world graph constructions, may easily destroy the hyperbolicity of the graphs, except for a class of random edges added using an exponentially decaying probability function based on the ring distance among the leaves. Our study provides one of the first significant analytical results on the hyperbolicity of a rich class of random graphs, which sheds light on the relationship between hyperbolicity and navigability of random graphs, as well as on the sensitivity of hyperbolicity to noise in random graphs.
|
More generally, we see two approaches to connecting hyperbolicity with efficient routing in graphs. One approach studies efficient computation of graph properties, such as diameters, centers, approximating trees, and packings and coverings, for low hyperbolic- @math graphs and metric spaces @cite_16 @cite_0 @cite_34 @cite_32 @cite_30 . In large part, the reason for this interest is that there are often direct consequences for navigation and routing in these graphs @cite_34 @cite_32 @cite_38 @cite_33 . While these results are of interest for general low hyperbolic- @math graphs, they can be less interesting when applied to small-world and other low-diameter random models of complex networks. To take one example, @cite_16 provides a simple construction of a distance approximating tree for @math -hyperbolic graphs on @math vertices; but the @math additive-error guarantee is clearly less interesting for models in which the diameter of the graph is @math . Unfortunately, this @math arises for a very natural reason in the analysis, and it is nontrivial to improve it for popular tree-like complex network models.
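To make the quantity under discussion concrete, here is a brute-force, purely illustrative sketch (plain Python, O(n^4), impractical beyond small graphs) of Gromov's four-point definition: for every quadruple of vertices, the two larger of the three pairwise distance sums may differ by at most 2δ, and the δ of the graph is the largest such half-gap.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, src):
    # Unweighted shortest-path distances from src via breadth-first search.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def gromov_delta(adj):
    # Four-point condition: delta is half the largest gap between the two
    # biggest of the three pairwise distance sums, over all vertex quadruples.
    d = {v: bfs_distances(adj, v) for v in adj}
    delta = 0.0
    for u, v, w, x in combinations(adj, 4):
        sums = sorted([d[u][v] + d[w][x], d[u][w] + d[v][x], d[u][x] + d[v][w]])
        delta = max(delta, (sums[2] - sums[1]) / 2.0)
    return delta

# A 6-cycle has delta = 1 under this definition, while any tree has delta = 0.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(gromov_delta(cycle6))  # 1.0
```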
|
{
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_33",
"@cite_32",
"@cite_34",
"@cite_0",
"@cite_16"
],
"mid": [
"",
"2152948207",
"2142420687",
"2141663929",
"1587282847",
"2029803751",
"1965650021"
],
"abstract": [
"",
"We propose a scalable and reliable point-to-point routing algorithm for ad hoc wireless networks and sensor-nets. Our algorithm assigns to each node of the network a virtual coordinate in the hyperbolic plane, and performs greedy geographic routing with respect to these virtual coordinates. Unlike other proposed greedy routing algorithms based on virtual coordinates, our embedding guarantees that the greedy algorithm is always successful in finding a route to the destination, if such a route exists. We describe a distributed algorithm for computing each node's virtual coordinates in the hyperbolic plane, and for greedily routing packets to a destination point in the hyperbolic plane. (This destination may be the address of another node of the network, or it may be an address associated to a piece of content in a Distributed Hash Table. In the latter case we prove that the greedy routing strategy makes a consistent choice of the node responsible for the address, irrespective of the source address of the request.) We evaluate the resulting algorithm in terms of both path stretch and node congestion.",
"We introduce a novel measure called e-four-pointscondition (e-4PC), which assigns a value e ∈ [0,1] to every metric space quantifying how close the metric is to a tree metric. Data-sets taken from real Internet measurements indicate remarkable closeness of Internet latencies to tree metrics based on this condition. We study embeddings of e-4PC metric spaces into trees and prove tight upper and lower bounds. Specifically, we show that there are constants c1 and c2 such that, (1) every metric (X,d) which satisfies the e-4PC can be embedded into a tree with distortion (1+e)c1log|X|, and (2) for every e ∈: [0,1] and any number of nodes, there is a metric space (X,d) satisfying the e-4PC that does not embed into a tree with distortion less than (1+e)c2log|X|. In addition, we prove a lower bound on approximate distance labelings of e-4PC metrics, and give tight bounds for tree embeddings with additive error guarantees.",
"δ-Hyperbolic metric spaces have been defined by M. Gromov in 1987 via a simple 4-point condition: for any four points u,v,w,x, the two larger of the distance sums d(u,v)+d(w,x),d(u,w)+d(v,x),d(u,x)+d(v,w) differ by at most 2δ. They play an important role in geometric group theory, geometry of negatively curved spaces, and have recently become of interest in several domains of computer science, including algorithms and networking. In this paper, we study unweighted δ-hyperbolic graphs. Using the Layering Partition technique, we show that every n-vertex δ-hyperbolic graph with δ≥1 2 has an additive O(δlog n)-spanner with at most O(δn) edges and provide a simpler, in our opinion, and faster construction of distance approximating trees of δ-hyperbolic graphs with an additive error O(δlog n). The construction of our tree takes only linear time in the size of the input graph. As a consequence, we show that the family of n-vertex δ-hyperbolic graphs with δ≥1 2 admits a routing labeling scheme with O(δlog 2 n) bit labels, O(δlog n) additive stretch and O(log 2(4δ)) time routing protocol, and a distance labeling scheme with O(log 2 n) bit labels, O(δlog n) additive error and constant time distance decoder.",
"A graph G is δ-hyperbolic if for any four vertices u,v,x,y of G the two larger of the three distance sums dG(u,v) + dG(x,y), dG(u,x) + dG(v,y), dG(u,y) + dG(v,x) differ by at most δ, and the smallest δ ≥ 0 for which G is δ-hyperbolic is called the hyperbolicity of G. In this paper, we construct a distance labeling scheme for bounded hyperbolicity graphs, that is a vertex labeling such that the distance between any two vertices of G can be estimated from their labels, without any other source of information. More precisely, our scheme assigns labels of O(log2n) bits for bounded hyperbolicity graphs with n vertices such that distances can be approximated within an additive error of O(log n). The label length is optimal for every additive error up to ne. We also show a lower bound of Ω(log log n) on the approximation factor, namely every s-multiplicative approximate distance labeling scheme on bounded hyperbolicity graphs with polylogarithmic labels requires s = Ω(log log n).",
"Let G= (V, E) be a connected graph endowed with the standard graph-metric dGand in which longest induced simple cycle has length? (G). We prove that there exists a tree T= (V,F ) such that| dG(u, v) ?dT(u, v)| ? ??(G)2? +?for all vertices u, v?V, where?= 1 if ?(G) ?= 4, 5 and ?= 2 otherwise. The case ?(G) = 3 (i.e., G is a chordal graph) has been considered in Brandstadt, Chepoi, and Dragan, (1999) J.Algorithms 30. The proof contains an efficient algorithm for determining such a treeT .",
"δ-Hyperbolic metric spaces have been defined by M. Gromov via a simple 4-point condition: for any four points u,v,w,x, the two larger of the sums d(u,v)+d(w,x), d(u,w)+d(v,x), d(u,x)+d(v,w) differ by at most 2δ. Given a finite set S of points of a δ-hyperbolic space, we present simple and fast methods for approximating the diameter of S with an additive error 2δ and computing an approximate radius and center of a smallest enclosing ball for S with an additive error 3δ. These algorithms run in linear time for classical hyperbolic spaces and for δ-hyperbolic graphs and networks. Furthermore, we show that for δ-hyperbolic graphs G=(V,E) with uniformly bounded degrees of vertices, the exact center of S can be computed in linear time O(|E|). We also provide a simple construction of distance approximating trees of δ-hyperbolic graphs G on n vertices with an additive error O(δlog2 n). This construction has an additive error comparable with that given by Gromov for n-point δ-hyperbolic spaces, but can be implemented in O(|E|) time (instead of O(n2)). Finally, we establish that several geometrical classes of graphs have bounded hyperbolicity."
]
}
|
1201.1717
|
1907304534
|
Hyperbolicity is a property of a graph that may be viewed as a "soft" version of a tree, and recent empirical and theoretical work has suggested that many graphs arising in Internet and related data applications have hyperbolic properties. We consider Gromov's notion of δ-hyperbolicity, and establish several results for small-world and tree-like random graph models. First, we study the hyperbolicity of Kleinberg small-world random graphs and show that the hyperbolicity of these random graphs is not significantly improved compared to the graph diameter, even when the construction greatly improves decentralized navigation. Next we study a class of tree-like graphs called ringed trees that have constant hyperbolicity. We show that adding random links among the leaves, in a manner similar to the small-world graph constructions, may easily destroy the hyperbolicity of the graphs, except for a class of random edges added using an exponentially decaying probability function based on the ring distance among the leaves. Our study provides one of the first significant analytical results on the hyperbolicity of a rich class of random graphs, which sheds light on the relationship between hyperbolicity and navigability of random graphs, as well as on the sensitivity of hyperbolicity to noise in random graphs.
|
Another approach, taken by several recent papers, is to build random graphs from hyperbolic metric spaces and then show that such random graphs exhibit several common properties of small-world complex networks, including good navigability @cite_6 @cite_19 @cite_31 @cite_42 . While assuming a low-hyperbolicity metric space to build random graphs in these studies makes intuitive sense, it is difficult to prove nontrivial results on the Gromov @math of these random graphs even for simple random graph models that are intuitively tree-like.
|
{
"cite_N": [
"@cite_19",
"@cite_31",
"@cite_42",
"@cite_6"
],
"mid": [
"2083086638",
"",
"2030407863",
"2963825859"
],
"abstract": [
"We show that complex (scale-free) network topologies naturally emerge from hyperbolic metric spaces. Hyperbolic geometry facilitates maximally efficient greedy forwarding in these networks. Greedy forwarding is topology-oblivious. Nevertheless, greedy packets find their destinations with 100 probability following almost optimal shortest paths. This remarkable efficiency sustains even in highly dynamic networks. Our findings suggest that forwarding information through complex networks, such as the Internet, is possible without the overhead of existing routing protocols, and may also find practical applications in overlay networks for tasks such as application-level routing, information sharing, and data distribution.",
"",
"We show that heterogeneous degree distributions in observed scale-free topologies of complex networks can emerge as a consequence of the exponential expansion of hidden hyperbolic space. Fermi-Dirac statistics provides a physical interpretation of hyperbolic distances as energies of links. The hidden space curvature affects the heterogeneity of the degree distribution, while clustering is a function of temperature. We embed the internet into the hyperbolic plane and find a remarkable congruency between the embedding and our hyperbolic model. Besides proving our model realistic, this embedding may be used for routing with only local information, which holds significant promise for improving the performance of internet routing.",
"In many real-world processes that can be mapped onto complex networks—from cell signalling to transporting people—communication between distant nodes is surprisingly efficient, considering that no node has a full view of the entire network. A framework sets out to explain why ‘navigability’ is so efficient in these networks."
]
}
|
1201.1717
|
1907304534
|
Hyperbolicity is a property of a graph that may be viewed as a "soft" version of a tree, and recent empirical and theoretical work has suggested that many graphs arising in Internet and related data applications have hyperbolic properties. We consider Gromov's notion of δ-hyperbolicity, and establish several results for small-world and tree-like random graph models. First, we study the hyperbolicity of Kleinberg small-world random graphs and show that the hyperbolicity of these random graphs is not significantly improved compared to the graph diameter, even when the construction greatly improves decentralized navigation. Next we study a class of tree-like graphs called ringed trees that have constant hyperbolicity. We show that adding random links among the leaves, in a manner similar to the small-world graph constructions, may easily destroy the hyperbolicity of the graphs, except for a class of random edges added using an exponentially decaying probability function based on the ring distance among the leaves. Our study provides one of the first significant analytical results on the hyperbolicity of a rich class of random graphs, which sheds light on the relationship between hyperbolicity and navigability of random graphs, as well as on the sensitivity of hyperbolicity to noise in random graphs.
|
Finally, ideas related to hyperbolicity have been applied in numerous other network applications, e.g., to problems such as distance estimation, network security, sensor networks, and traffic flow and congestion minimization @cite_41 @cite_13 @cite_43 @cite_28 @cite_7 @cite_5 , as well as large-scale data visualization @cite_18 . The latter applications typically take advantage of the idea that data are often hierarchical or tree-like and that there is more "room" in hyperbolic spaces of dimension 2 than in Euclidean spaces of any finite dimension.
|
{
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_41",
"@cite_28",
"@cite_43",
"@cite_5",
"@cite_13"
],
"mid": [
"2107034998",
"2138482845",
"2168891622",
"2098449770",
"2137653028",
"1920540977",
"2009182597"
],
"abstract": [
"Drawing graphs as nodes connected by links is visually compelling but computationally difficult. Hyperbolic space and spanning trees can reduce visual clutter, speed up layout, and provide fluid interaction. This article briefly describes a software system that explicitly attempts to handle much larger graphs than previous systems and support dynamic exploration rather than final presentation. It then discusses the applicability of this system to goals beyond simple exploration. A software system that supports graph exploration should include both a layout and an interactive drawing component. I have developed new algorithms for both layout and drawing (H3 and H3Viewer). The H3Viewer drawing algorithm remains under development, so this article presents preliminary results. I have implemented a software library that uses these algorithms. It can handle graphs of more than 100,000 edges by using a spanning tree as the backbone for the layout and drawing algorithms.",
"Large-scale data networks form the infrastructure for contemporary global communications. Increasingly, a single network may provide a variety of disparate services, flatter architectures (i.e., fewer controlling hubs) are used to achieve robustness against failure, and networks have to be dynamically and automatically reconfigurable to allow services to be set up quickly. With these trends in mind, it is impractical to perform detailed case-by-case simulations in order to predict and understand the behavior of such large-scale networks. Instead, one has to identify the key structural properties that affect network performance, reliability, and security. These structural properties can then be used to construct models that estimate network behavior in an efficient and scalable manner. A key observation regarding large-scale communications and biological and social networks has been the “small-world” property [1‐3]. More recent network models have focused on power-law degree distributions (PLDD) (for a few examples, see Refs. [4‐6]) as an explanation of or correlated with the small-world property. Evidence for PLDD has been found in data networks at the Internet protocol (IP) layer [7], for the worldwide web [4], and for the virtual network of social connections [8]. Although these features are interesting and important, the impact of intrinsic geometrical and topological features of large-scale networks on performance, reliability, and security is of much greater importance. Intuitively, it is known that traffic between nodes tends to go through a relatively small core of the network, as if the shortest path between them is curved inward. It has been suggested that this property may be due to global curvature or hyperbolicity of the network [9].",
"Estimating distances in the Internet has been studied in the recent years due to its ability to improve the performance of many applications, e.g., in the peer-to-peer realm. One scalable approach to estimate distances between nodes is to embed the nodes in some d dimensional geometric space and to use the pair distances in this space as the estimate for the real distances. Several algorithms were suggested in the past to do this in low dimensional Euclidean spaces. It was noted in recent years that the Internet structure has a highly connected core and long stretched tendrils, and that most of the routing paths between nodes in the tendrils pass through the core. Therefore, we suggest in this work, to embed the Internet distance metric in a hyperbolic space where routes are bent toward the center. We found that if the curvature, that defines the extend of the bending, is selected in the adequate range, the accuracy of Internet distance embedding can be improved. We demonstrate the strength of our hyperbolic embedding with two applications: selecting the closest server and building an application level multicast tree. For the latter, we present a distributed algorithm for building geometric multicast trees that achieve good trade-offs between delay (stretch) and load (stress). We also present a new efficient centralized embedding algorithm that enables the accurate embedding of short distances, something that have never been done before.",
"The main point of this paper is that network security has a geometric component, in the sense that some architectures promote some aspects of security. Such security issues closely related to the topological architecture of the network graph are multi-path routing to mitigate \"eavesdropping\" or \"packet sniffing\", worm propagation and defense, and distributed denial of service (DDoS) attack mitigation. Those geometric aspects relevant to network security are encapsulated in the concept of graph curvature. An architecture that promotes, in some sense, security is the negative curvature of the graph, which is shown to hold in several physical and logical graphs and in the well know \"scale free\" model.",
"The technique of effective resistance has seen growing popularity in problems ranging from escape probability of random walks on graphs to asymptotic space localization in sensor networks. The results obtained thus far deal with such problems on Euclidean lattices, on which their asymptotic nature already reveals that the crucial issue is the large scale behavior of such lattices. Here we investigate how such results have to be amended on a class of graphs, referred to as Gromov hyperbolic, which behave in the large scale as negatively curved Riemannian manifolds. It is argued that Gromov hyperbolic graphs occur quite naturally in many situations. Among the results developed here, we will mention the nonvanishing probability of escape of a random walk to a Cantor set Gromov boundary and the facts that the space localization error of sensors networked in a Gromov hyperbolic fashion grows linearly with the distance to a sensor whose geographical position is known, but would become uniformly bounded in an idealized situation in which the geographical locations of the nodes at the Gromov boundary are known.",
"In this work we study the asymptotic traffic flow in Gromov's hyperbolic graphs. We prove that under certain mild hypotheses the traffic flow in a hyperbolic graph tends to pass through a finite set of highly congested nodes. These nodes are called the \"core\" of the graph. We provide a formal definition of the core in a very general context and we study the properties of this set for several graphs.",
"Abstract This paper proposes a mathematical justification of the phenomenon of extreme congestion at a very limited number of nodes in very large networks. It is argued that this phenomenon occurs as a combination of the negative curvature property of the network together with minimum-length routing. More specifically, it is shown that in a large n-dimensional hyperbolic ball B of radius R viewed as a roughly similar model of a Gromov hyperbolic network, the proportion of traffic paths transiting through a small ball near the center is Θ(1), whereas in a Euclidean ball, the same proportion scales as Θ(1 R n−1). This discrepancy persists for the traffic load, which at the center of the hyperbolic ball scales as volume2(B), whereas the same traffic load scales as volume1+1 n (B) in the Euclidean ball. This provides a theoretical justification of the experimental exponent discrepancy observed by Narayan and Saniee between traffic loads in Gromov-hyperbolic networks from the Rocketfuel database and synthetic ..."
]
}
|
1201.2430
|
1658537750
|
Situation calculus has been widely applied in Artificial Intelligence-related fields. This formalism is considered a dialect of logic programming languages and is mostly used in dynamic domain modeling. However, type systems are rarely deployed in the situation calculus in the literature. To obtain correct and sound typed programs written in the situation calculus, adding typing elements to the current situation calculus will be quite helpful. In this paper, we propose to add more typing mechanisms to the current version of the situation calculus, especially for its three basic elements: situations, actions and objects, and then perform rigid type checking on existing situation calculus programs to identify the well-typed and ill-typed ones. In this way, type correctness and soundness of situation calculus programs can be guaranteed by type checking based on our type system. This modified version of a lightweight situation calculus is proved to be a robust and well-typed system.
|
Yilan @cite_3 proposed a modified version of the situation calculus built using a two-variable fragment of first-order logic extended with counting quantifiers. By introducing several additional groups of axioms to capture taxonomic reasoning and using a regression operator similar to the one in Raymond Reiter's work @cite_4 , the projection and executability problems are proved decidable even when the initial knowledge base is incomplete and open. While their system is concerned primarily with the semantics of the newly proposed components and says little about typing them, our well-typed version of the situation calculus addresses typing mechanisms together with a modified version of the situation calculus in a comprehensive way.
|
{
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2038809129",
"1825399774"
],
"abstract": [
"Modeling and implementing dynamical systems is a central problem in artificial intelligence, robotics, software agents, simulation, decision and control theory, and many other disciplines. In recent years, a new approach to representing such systems, grounded in mathematical logic, has been developed within the AI knowledge-representation community. This book presents a comprehensive treatment of these ideas, basing its theoretical and implementation foundations on the situation calculus, a dialect of first-order logic. Within this framework, it develops many features of dynamical systems modeling, including time, processes, concurrency, exogenous events, reactivity, sensing and knowledge, probabilistic uncertainty, and decision theory. It also describes and implements a new family of high-level programming languages suitable for writing control programs for dynamical systems. Finally, it includes situation calculus specifications for a wide range of examples drawn from cognitive robotics, planning, simulation, databases, and decision theory, together with all the implementation code for these examples. This code is available on the book's Web site.",
"We consider a modified version of the situation calculus built using a two-variable fragment of the first-order logic extended with counting quantifiers. We mention several additional groups of axioms that can be introduced to capture taxonomic reasoning. We show that the regression operator in this framework can be defined similarly to regression in the Reiter's version of the situation calculus. Using this new regression operator, we show that the projection and executability problems are decidable in the modified version even if an initial knowledge base is incomplete and open. For an incomplete knowledge base and for context-dependent actions, we consider a type of progression that is sound with respect to the classical progression. We show that the new knowledge base resulting after our progression is definable in our modified situation calculus if one allows actions with local effects only. We mention possible applications to formalization of Semantic Web services."
]
}
|
1201.2430
|
1658537750
|
Situation calculus has been widely applied in Artificial Intelligence related fields. This formalism is considered a dialect of logic programming language and is mostly used in dynamic domain modeling. However, type systems are hardly ever deployed in the situation calculus in the literature. To achieve correct and sound typed programs written in the situation calculus, adding typing elements to the current situation calculus will be quite helpful. In this paper, we propose to add more typing mechanisms to the current version of the situation calculus, especially for its three basic elements: situations, actions and objects, and then perform rigid type checking on existing situation calculus programs to find out the well-typed and ill-typed ones. In this way, type correctness and soundness in situation calculus programs can be guaranteed by type checking based on our type system. This modified version of a lightweight situation calculus is proved to be a robust and well-typed system.
|
There have also been some attempts to modify the situation calculus based only on a lightweight version of the original one. Gerhard @cite_6 proposed a new logical dialect of the situation calculus with the situation terms suppressed, named ES. That is, it is essentially a similar formalism covering a part of the current situation calculus. Moreover, in this paper, the authors consider how to map sentences between ES and the situation calculus and try to prove that ES is powerful enough to handle most cases as the situation calculus does, but they mention little about how to type their new logic system as a fragment of the situation calculus.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"1562472473"
],
"abstract": [
"In a recent paper, we presented a new logic called ES for reasoning about the knowledge, action, and perception of an agent. Although formulated using modal operators, we argued that the language was in fact a dialect of the situation calculus but with the situation terms suppressed. This allowed us to develop a clean and workable semantics for the language without piggybacking on the generic Tarski semantics for first-order logic. In this paper, we reconsider the relation between ES and the situation calculus and show how to map sentences of ES into the situation calculus. We argue that the fragment of the situation calculus represented by ES is rich enough to handle the basic action theories defined by Reiter as well as Golog. Finally, we show that in the full second-order version of ES, almost all of the situation calculus can be accommodated."
]
}
|
1201.2074
|
1669743231
|
The paper demonstrates how traffic load of a shared packet queue can be exploited as a side channel through which protected information leaks to an off-path attacker. The attacker sends to a victim a sequence of identical spoofed segments. The victim responds to each segment in the sequence (the sequence is reflected by the victim) if the segments satisfy a certain condition tested by the attacker. The responses do not reach the attacker directly, but induce extra load on a routing queue shared between the victim and the attacker. Increased processing time of packets traversing the queue reveal that the tested condition was true. The paper concentrates on the TCP, but the approach is generic and can be effective against other protocols that allow to construct requests which are conditionally answered by the victim. A proof of concept was created to assess applicability of the method in real-life scenarios.
|
A high correlation between traffic patterns of users sharing a routing resource was demonstrated in @cite_6 . The authors monitored ping round trip time to a router that connected a user to the Internet and compared the measurements with traffic patterns generated by the user's online activities. In this technique the eavesdropper was passive and did not send any packets to trigger traffic spikes and gain additional information.
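As a rough illustration of this kind of passive correlation analysis (the probe interval, the binning window and the use of a Pearson coefficient below are our own assumptions, not the exact methodology of @cite_6 ), the RTT probes and the user's known traffic can be binned onto a common time axis and correlated:

import numpy as np

def rtt_traffic_correlation(rtt_samples, traffic_bytes, window=1.0):
    # rtt_samples:   list of (timestamp, rtt) pairs from periodic pings to
    #                the shared router
    # traffic_bytes: list of (timestamp, bytes) pairs describing the user's
    #                activity (ground truth available in an experiment)
    # window:        bin width in seconds
    t_end = max(rtt_samples[-1][0], traffic_bytes[-1][0])
    n_bins = int(t_end / window) + 1
    rtt_sum = np.zeros(n_bins)
    rtt_cnt = np.zeros(n_bins)
    volume = np.zeros(n_bins)
    for ts, rtt in rtt_samples:
        b = int(ts / window)
        rtt_sum[b] += rtt
        rtt_cnt[b] += 1
    for ts, size in traffic_bytes:
        volume[int(ts / window)] += size
    mask = rtt_cnt > 0                      # only bins that contain a probe
    mean_rtt = rtt_sum[mask] / rtt_cnt[mask]
    return np.corrcoef(mean_rtt, volume[mask])[0, 1]

A high correlation between the two binned series is what reveals the user's activity pattern to the passive observer.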
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2118688880"
],
"abstract": [
"This paper presents a dangerous low-cost traffic analysis attack in packet-based networks, such as the Internet. The attack is mountable in any scenario where a shared routing resource exists among users. A real-world attack successfully compromised the privacy of a user without requiring significant resources in terms of access, memory, or computational power. The effectiveness of our attack is demonstrated in a scenario where the user's DSL router uses FCFS scheduling policy. Specifically, we show that by using a low-rate sequence of probes, a remote attacker can obtain significant traffic-timing and volume information about a particular user, just by observing the round trip time of the probes. We also observe that even when the scheduling policy is changed to round-robin, while the correlation reduces significantly, the attacker can still reliably deduce user's traffic pattern. Most of the router scheduling policies designed to date are evaluated mostly on the metrics of throughput, delay and fairness. Our work is aimed to demonstrate a need for considering an additional metric that quantifies the information leak between the individual traffic flows through the router."
]
}
|
1201.1409
|
1874973466
|
Character posing is of interest in computer animation. It is difficult due to its dependence on inverse kinematics (IK) techniques and the articulated nature of human characters. To solve the IK problem, classical methods that rely on numerical solutions often suffer from the under-determination problem and cannot guarantee naturalness. Existing data-driven methods address this problem by learning from motion capture data. When facing a large variety of poses, however, these methods may not be able to capture the pose styles or be applicable in a real-time environment. Inspired by the low-rank motion de-noising and completion model in lai2011motion , we propose a novel model for character posing based on sparse coding. Unlike conventional approaches, our model directly captures the pose styles in Euclidean space to provide intuitive training error measurements and facilitate pose synthesis. A pose dictionary is learned in the training stage, and based on it natural poses are synthesized to satisfy users' constraints. We compare our model with existing models on tasks of pose de-noising and completion. Experiments show our model obtains lower de-noising and completion error. We also provide User Interface (UI) examples illustrating that our model is effective for interactive character posing.
|
A straightforward way to model the prior is to use a Gaussian distribution. Through the covariance matrix, this approach is related to Principal Component Analysis (PCA), which restricts the solution to lie in the subspace spanned by the principal components. By imposing the Gaussian prior, we force the solution to approach the mean from the direction of one principal component or a linear combination of them. Instead of using the Gaussian model directly, we can also first partition the motion data with a clustering algorithm and then build a Gaussian prior for each cluster. This is similar to the mixture of local linear models that has been used as a baseline model in @cite_7 .
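A minimal sketch of such a clustered Gaussian prior is given below; the number of clusters, the covariance regularization and the use of k-means are illustrative assumptions rather than the exact baseline configuration of @cite_7 :

import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_gaussians(poses, n_clusters=5, reg=1e-6):
    # poses: (N, d) matrix with one pose (stacked joint parameters) per row
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(poses)
    priors = []
    for k in range(n_clusters):
        cluster = poses[km.labels_ == k]
        mu = cluster.mean(axis=0)
        cov = np.cov(cluster, rowvar=False) + reg * np.eye(poses.shape[1])
        priors.append((mu, cov))
    return km, priors

def log_prior(pose, km, priors):
    # Evaluate the Gaussian log-density of a pose under its nearest cluster.
    k = km.predict(pose[None, :])[0]
    mu, cov = priors[k]
    diff = pose - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + len(pose) * np.log(2.0 * np.pi))

The log-prior can then be combined with a data term (for example, the distance to user-specified constraints) when searching for a natural pose.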
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"1982905909"
],
"abstract": [
"This paper introduces an approach to performance animation that employs video cameras and a small set of retro-reflective markers to create a low-cost, easy-to-use system that might someday be practical for home use. The low-dimensional control signals from the user's performance are supplemented by a database of pre-recorded human motion. At run time, the system automatically learns a series of local models from a set of motion capture examples that are a close match to the marker locations captured by the cameras. These local models are then used to reconstruct the motion of the user as a full-body animation. We demonstrate the power of this approach with real-time control of six different behaviors using two video cameras and a small set of retro-reflective markers. We compare the resulting animation to animation from commercial motion capture equipment with a full set of markers."
]
}
|
1201.1409
|
1874973466
|
Character posing is of interest in computer animation. It is difficult due to its dependence on inverse kinematics (IK) techniques and articulate property of human characters . To solve the IK problem, classical methods that rely on numerical solutions often suffer from the under-determination problem and can not guarantee naturalness. Existing data-driven methods address this problem by learning from motion capture data. When facing a large variety of poses however, these methods may not be able to capture the pose styles or be applicable in real-time environment. Inspired from the low-rank motion de-noising and completion model in lai2011motion , we propose a novel model for character posing based on sparse coding. Unlike conventional approaches, our model directly captures the pose styles in Euclidean space to provide intuitive training error measurements and facilitate pose synthesis. A pose dictionary is learned in training stage and based on it natural poses are synthesized to satisfy users' constraints . We compare our model with existing models for tasks of pose de-noising and completion. Experiments show our model obtains lower de-noising and completion error. We also provide User Interface(UI) examples illustrating that our model is effective for interactive character posing.
|
On the other hand, sparse representation has been widely applied to image processing and pattern recognition, with examples including face recognition @cite_5 and image super-resolution @cite_19 . For modelling human motion, @cite_9 considered each joint's movement as a signal that admits a sparse representation over a set of basis functions learned from motion capture data. They demonstrated that the proposed model is useful for action retrieval and classification. Our work is different, as we model each pose separately and our target application is character posing rather than action retrieval and classification.
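To make the contrast concrete, the following sketch codes a single pose vector over a learned pose dictionary; the dictionary size, sparsity level and the scikit-learn solvers used here are assumptions for illustration, not the optimization used in the paper or in @cite_9 :

import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

def learn_pose_dictionary(poses, n_atoms=64):
    # poses: (N, d) training matrix with one pose per row
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm="omp")
    dl.fit(poses)
    return dl.components_                      # (n_atoms, d) dictionary

def sparse_code(pose, dictionary, n_nonzero=8):
    # Represent one pose as a sparse combination of dictionary atoms.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(dictionary.T, pose)                # columns of dictionary.T are atoms
    coeffs = omp.coef_
    return coeffs, dictionary.T @ coeffs       # sparse code and reconstruction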
|
{
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_9"
],
"mid": [
"2161516371",
"2129812935",
"2155465850"
],
"abstract": [
"This paper addresses the problem of generating a super-resolution (SR) image from a single low-resolution input image. We approach this problem from the perspective of compressed sensing. The low-resolution image is viewed as downsampled version of a high-resolution image, whose patches are assumed to have a sparse representation with respect to an over-complete dictionary of prototype signal-atoms. The principle of compressed sensing ensures that under mild conditions, the sparse representation can be correctly recovered from the downsampled signal. We will demonstrate the effectiveness of sparsity as a prior for regularizing the otherwise ill-posed super-resolution problem. We further show that a small set of randomly chosen raw patches from training images of similar statistical nature to the input image generally serve as a good dictionary, in the sense that the computed representation is sparse and the recovered high-resolution image is competitive or even superior in quality to images produced by other SR methods.",
"We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.",
"A central problem in the analysis of motion capture (Mo-Cap) data is how to decompose motion sequences into primitives. Ideally, a description in terms of primitives should facilitate the recognition, synthesis, and characterization of actions. We propose an unsupervised learning algorithm for automatically decomposing joint movements in human motion capture (MoCap) sequences into shift-invariant basis functions. Our formulation models the time series data of joint movements in actions as a sparse linear combination of short basis functions (snippets), which are executed (or “activated”) at different positions in time. Given a set of MoCap sequences of different actions, our algorithm finds the decomposition of MoCap sequences in terms of basis functions and their activations in time. Using the tools of L 1 minimization, the procedure alternately solves two large convex minimizations: Given the basis functions, a variant of Orthogonal Matching Pursuit solves for the activations, and given the activations, the Split Bregman Algorithm solves for the basis functions. Experiments demonstrate the power of the decomposition in a number of applications, including action recognition, retrieval, MoCap data compression, and as a tool for classification in the diagnosis of Parkinson (a motion disorder disease)."
]
}
|
1201.0962
|
1962982606
|
The shift towards an energy Grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the distribution infrastructure. Today it is a hierarchical one designed to deliver energy from large scale facilities to end-users. Tomorrow it will be a capillary infrastructure at the Medium and Low Voltage levels that will support local energy trading among prosumers. In [74], we analyzed the Dutch Power Grid and made an initial analysis of the economic impact topological properties have on decentralized energy trading. In this paper, we go one step further and investigate how different network topologies and growth models facilitate the emergence of a decentralized market. In particular, we show how the connectivity plays an important role in improving the properties of reliability and path-cost reduction. From the economic point of view, we estimate how the topological evolutions facilitate local electricity distribution, taking into account the main cost ingredient required for increasing network connectivity, i.e., the price of cabling.
|
The works mentioned so far mainly take into account the transmission end of the Grid, while the Distribution segment is no less important, especially in the vision of the future electrical system proposed in this work, where the end-user plays a vital role. The integrated planning of the primary and secondary networks is tackled by Paiva et al. @cite_61 , who emphasize the need to consider the two networks together to obtain a sensible optimal plan. The problem is modeled as a mixed integer-linear programming one, with an objective function covering investment, maintenance, operation and loss costs that is minimized while satisfying the constraints of energy balance and the physical limits of the equipment.
|
{
"cite_N": [
"@cite_61"
],
"mid": [
"2154788877"
],
"abstract": [
"Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes."
]
}
|
1201.0962
|
1962982606
|
The shift towards an energy Grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the distribution infrastructure. Today it is a hierarchical one designed to deliver energy from large scale facilities to end-users. Tomorrow it will be a capillary infrastructure at the Medium and Low Voltage levels that will support local energy trading among prosumers. In [74], we analyzed the Dutch Power Grid and made an initial analysis of the economic impact topological properties have on decentralized energy trading. In this paper, we go one step further and investigate how different network topologies and growth models facilitate the emergence of a decentralized market. In particular, we show how the connectivity plays an important role in improving the properties of reliability and path-cost reduction. From the economic point of view, we estimate how the topological evolutions facilitate local electricity distribution, taking into account the main cost ingredient required for increasing network connectivity, i.e., the price of cabling.
|
Even more challenges to electrical system planning are posed by the change in the energy landscape, with several companies running different aspects of the business (generation, transmission, distribution). In addition, to accommodate more players in the wholesale market, transmission expansion should follow (as generation already does) a market-based approach, i.e., the demand forces of the market and their forecast should trigger the expansion of the Grid @cite_32 . The same considerations regarding the need for a different planning approach in a deregulated market are expressed in @cite_44 , where an objective function defined in the market environment is optimized. Another method to evaluate transmission expansion plans takes into account the probabilistic reliability criterion of Loss Of Load Expectation (LOLE); in particular, in @cite_81 an objective function is proposed that accounts for the cost of constructing a transmission line between the buses involved, subject to constraints on satisfying the peak load demand and on a level of LOLE that the plan must not exceed.
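The flavour of such a cost-minimizing, LOLE-constrained formulation can be sketched with a brute-force search over candidate lines; lole_of_plan and serves_peak_load are hypothetical callbacks standing in for the probabilistic reliability evaluation and the load-feasibility check, and exhaustive enumeration is only viable for a handful of candidates (the actual method in @cite_81 uses branch and bound with network-flow arguments):

from itertools import combinations

def cheapest_expansion(candidate_lines, line_cost,
                       lole_of_plan, serves_peak_load, lole_limit):
    # candidate_lines : list of line identifiers, e.g. (bus_i, bus_j) pairs
    # line_cost       : dict mapping a line to its construction cost
    # lole_of_plan    : callable(plan) -> expected loss-of-load hours per year
    # serves_peak_load: callable(plan) -> True if peak demand is satisfied
    best_plan, best_cost = None, float("inf")
    for r in range(len(candidate_lines) + 1):
        for plan in combinations(candidate_lines, r):
            cost = sum(line_cost[line] for line in plan)
            if cost >= best_cost:
                continue
            if serves_peak_load(plan) and lole_of_plan(plan) <= lole_limit:
                best_plan, best_cost = plan, cost
    return best_plan, best_cost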
|
{
"cite_N": [
"@cite_44",
"@cite_81",
"@cite_32"
],
"mid": [
"2147790797",
"2001596102",
"2163778133"
],
"abstract": [
"An important component to be considered in electric power system expansion planning is the security of service that the system is able to provide. In restructured power systems, variables such as agents' profit or Locational Marginal Price (LMP) variances are considered in transmission expansion planning. Finally to have a secure network this plan would be refined for simulated contingencies. This paper proposes a new method for transmission expansion planning in which the grid owner (GO) is responsible for expansion while benefiting a fair benefit percentage. The objective function of transmission expansion tries to reduce weighted standard deviation of LMPs and the construction cost as well as the cost of security enhancement. For different scenarios of expansion; at first, the cost of security enhancement is calculated and then it is considered in the objective function of expansion. To investigate the validity of the method, we have applied it to the modified \"Garver 6-bus test system for expansion\".",
"This paper proposes a method for choosing the best transmission system expansion plan considering a probabilistic reliability criterion ( sub R LOLE). The method minimizes the investment budget for constructing new transmission lines subject to probabilistic reliability criteria, which consider the uncertainties of transmission system elements. Two probabilistic reliability criteria are used as constraints. One is a transmission system reliability criterion ( sub R LOLE sub TS ) constraint, and the other is a bus nodal reliability criterion ( sub R LOLE sub Bus ) constraint. The proposed method models the transmission system expansion problem as an integer programming problem. It solves for the optimal strategy using a probabilistic branch and bound method that utilizes a network flow approach and the maximum flow-minimum cut set theorem. Test results on an existing 21-bus system are included in the paper. They demonstrate the suitability of the proposed method for solving the transmission system expansion planning problem subject to practical future uncertainties.",
"In competitive energy market system, expansions are based on market-based resources. This concept is commonly accepted for the generation business, while it is more difficult to be put into practice for the transmission business as transmission is considered a natural monopoly. Nevertheless, as in the liberalized energy markets, the fast growth of wholesale electricity trading is strongly increasing the request for transmission services. A market-based model for transmission expansion has been recently proposed in order to provide incentives to the realisation of new transmission assets. The paper deals with some factors affecting both generation and transmission market-based expansions: price-forecasting, inter-area congestions, grid phenomena, transmission capacity increase, transmission technologies and system planning design. Methodologies and tools developed at CESl for the assessment of market-based projects are described as well as some applications and results relevant to the Italian system."
]
}
|
1201.0962
|
1962982606
|
The shift towards an energy Grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the distribution infrastructure. Today it is a hierarchical one designed to deliver energy from large scale facilities to end-users. Tomorrow it will be a capillary infrastructure at the Medium and Low Voltage levels that will support local energy trading among prosumers. In [74], we analyzed the Dutch Power Grid and made an initial analysis of the economic impact topological properties have on decentralized energy trading. In this paper, we go one step further and investigate how different network topologies and growth models facilitate the emergence of a decentralized market. In particular, we show how the connectivity plays an important role in improving the properties of reliability and path-cost reduction. From the economic point of view, we estimate how the topological evolutions facilitate local electricity distribution, taking into account the main cost ingredient required for increasing network connectivity, i.e., the price of cabling.
|
In the Smart Grid framework, planning techniques might be revised, especially for the Distribution Grid, which is the segment likely to face the greatest changes due to the presence of Advanced Metering Infrastructure (i.e., bidirectional intelligent digital meters at the customer's location) and Distribution Automation (i.e., feeders that can be monitored and controlled in an automated way through two-way communication). In addition, the Distribution Grid is no longer a layer where energy is only consumed: Distributed Energy Generation facilities (small-scale photovoltaic systems and small wind turbines) will be attached to this segment of the Grid. Altogether, these elements are likely to reshape the way planning for the Distribution Grid is realized @cite_54 .
|
{
"cite_N": [
"@cite_54"
],
"mid": [
"2126668795"
],
"abstract": [
"There has been much recent discussion on what distribution systems can and should look like in the future. Terms related to this discussion include smart grid, distribution system of the future, and others. Functionally, a smart grid should be able to provide new abilities such as self-healing, high reliability, energy management, and real-time pricing. From a design perspective, a smart grid will likely incorporate new technologies such as advanced metering, automation, communication, distributed generation, and distributed storage. This paper discussed the potential impact that issues related to smart grid will have on distribution system design."
]
}
|
1201.1652
|
1567534466
|
Research on signed languages still strongly dissociates linguistic issues, related to phonological and phonetic aspects, from gesture studies for recognition and synthesis purposes. This paper focuses on the imbrication of motion and meaning for the analysis, synthesis and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, we consider that linguistic knowledge is mapped onto sensory-motor processes, and we propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work.
|
Most of the works in this area focus on the expressivity of high-level computer languages, using descriptive or procedural languages, for example the XML-based specification language SiGML @cite_22 , which is connected to the HamNoSys notation system @cite_16 and interpreted into signed language gestures using classical animation techniques. A more exhaustive overview of existing systems using virtual signer technology can be found in @cite_20 . For these kinds of applications involving signed language analysis, recognition, translation, and generation, the nature of the performed gestures themselves is particularly challenging.
|
{
"cite_N": [
"@cite_16",
"@cite_22",
"@cite_20"
],
"mid": [
"164191052",
"2100217059",
"2077367280"
],
"abstract": [
"",
"We have created software for automatic synthesis of signing animations from the HamNoSys transcription notation. In this process we have encountered certain shortcomings of the notation. We describe these, and consider how to develop a notation more suited to computer animation.",
"In this article we present a multichannel animation system for producing utterances signed in French Sign Language (LSF) by a virtual character. The main challenges of such a system are simultaneously capturing data for the entire body, including the movements of the torso, hands, and face, and developing a data-driven animation engine that takes into account the expressive characteristics of signed languages. Our approach consists of decomposing motion along different channels, representing the body parts that correspond to the linguistic components of signed languages. We show the ability of this animation system to create novel utterances in LSF, and present an evaluation by target users which highlights the importance of the respective body parts in the production of signs. We validate our framework by testing the believability and intelligibility of our virtual signer."
]
}
|
1201.1652
|
1567534466
|
Research on signed languages still strongly dissociates linguistic issues, related to phonological and phonetic aspects, from gesture studies for recognition and synthesis purposes. This paper focuses on the imbrication of motion and meaning for the analysis, synthesis and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, we consider that linguistic knowledge is mapped onto sensory-motor processes, and we propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work.
|
Alternatively, data-driven animation methods can be substituted for these pure synthesis methods. In this case the motions of a real signer are captured with different combinations of motion capture techniques. Though these methods significantly improve the quality and credibility of animations, there are nonetheless several challenges to the reuse of motion capture data in the production of sign languages. Some of them are related to the spatialization of the content, but also to the rapidity and precision required in motion performances, and to the dynamic aspects of movements. All these factors are responsible for phonological inflection processes. Incorrectly manipulated, they may lead to imperfections in the performed signs (problems in timing variations or synchronization between channels) that can alter the semantic content of the sentence. A detailed discussion on the important factors for the design of virtual signers in regard to the animation problems is proposed in @cite_15 .
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2203310485"
],
"abstract": [
"Virtual signers communicating in signed languages are a very interesting tool to serve as means of communication with deaf people and improve their access to services and information. We discuss in this paper important factors of the design of virtual signers in regard to the animation problems. We notably show that some aspects of these signed languages are challenging for up-to-date animation methods, and present possible future research directions that could also benefit more widely the animation of virtual characters."
]
}
|
1201.1652
|
1567534466
|
Research on signed languages still strongly dissociates linguistic issues, related to phonological and phonetic aspects, from gesture studies for recognition and synthesis purposes. This paper focuses on the imbrication of motion and meaning for the analysis, synthesis and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, we consider that linguistic knowledge is mapped onto sensory-motor processes, and we propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work.
|
Little has been done so far to determine the role of sensory-motor activity in the understanding (perception and production) of signed languages. The idea that semantic knowledge is embodied in sensory-motor systems has given rise to many studies, bringing together researchers from domains as different as cognitive neuroscience and linguistics, but most of these works concern spoken languages. This interaction between language and action is based on different claims, such as: imagining and acting share the same neural substrate @cite_4 ; language makes use in large part of brain structures akin to those used to support perception and action @cite_3 .
|
{
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"1661662646",
"2158598009"
],
"abstract": [
"People are minded creatures; we have thoughts, feelings and emotions. More intriguingly, we grasp our own mental states, and conduct the business of ascribing them to ourselves and others without instruction in formal psychology. How do we do this? And what are the dimensions of our grasp of the mental realm? In this book, Alvin I. Goldman explores these questions with the tools of philosophy, developmental psychology, social psychology and cognitive neuroscience. He refines an approach called simulation theory, which starts from the familiar idea that we understand others by putting ourselves in their mental shoes. Can this intuitive idea be rendered precise in a philosophically respectable manner, without allowing simulation to collapse into theorizing? Given a suitable definition, do empirical results support the notion that minds literally create (or attempt to create) surrogates of other peoples mental states in the process of mindreading? Goldman amasses a surprising array of evidence from psychology and neuroscience that supports this hypothesis.",
"Abstract The discovery of mirror neurons in the macaque monkey and the discovery of a homologous “mirror system for grasping” in Broca’s area in the human brain has revived the gestural origins theory of the evolution of the human capability for language, enriching it with the suggestion that mirror neurons provide the neurological core for this evolution. However, this notion of “mirror neuron support for the transition from grasp to language” has been worked out in very different ways in the Mirror System Hypothesis model [Arbib, M.A., 2005a. From monkey-like action recognition to human language: an evolutionary framework for neurolinguistics (with commentaries and author’s response). Behavioral and Brain Sciences 28, 105–167; Rizzolatti, G., Arbib, M.A., 1998. Language within our grasp. Trends in Neuroscience 21(5), 188–194] and the Embodied Concept model [Gallese, V., Lakoff, G., 2005. The brain’s concepts: the role of the sensory-motor system in reason and language. Cognitive Neuropsychology 22, 455–479]. The present paper provides a critique of the latter to enrich analysis of the former, developing the role of schema theory [Arbib, M.A., 1981. Perceptual structures and distributed motor control. In: Brooks, V.B. (Ed.), Handbook of Physiology – The Nervous System II. Motor Control. American Physiological Society, pp. 1449–1480]."
]
}
|
1201.0834
|
2949997389
|
The Internet is constantly changing, and its hierarchy was recently shown to become flatter. Recent studies of inter-domain traffic showed that large content providers drive this change by bypassing tier-1 networks and reaching closer to their users, enabling them to save transit costs and reduce reliance on transit networks as new services are being deployed and traffic shaping is becoming increasingly popular. In this paper we take a first look at the evolving connectivity of large content provider networks, from the topological point of view of the autonomous system (AS) graph. We perform a 5-year longitudinal study of the topological trends of large content providers, by analyzing several large content providers and comparing these trends to those observed for large tier-1 networks. We study trends in the connectivity of the networks, neighbor diversity and geographical spread, their hierarchy, the adoption of IXPs as a convenient method for peering, and their centrality. Our observations indicate that content providers gradually increase and diversify their connectivity, enabling them to improve their centrality in the graph, and as a result, tier-1 networks lose dominance over time.
|
Kuai et al. @cite_37 , He et al. @cite_11 @cite_34 , and more recently Augustin et al. @cite_7 studied the AS graph and discuss in detail methods for discovering IXP participants. These works report a significantly higher number of peering relationships discovered among ASes that are IXP participants than among ASes that do not connect via an IXP.
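A toy version of the comparison these studies make, given a list of inferred peer-to-peer links and the set of ASes known to participate at IXPs (both inputs are assumed to have been derived already from traceroute, BGP and IXP data), is simply:

def peering_by_ixp_membership(peering_links, ixp_participants):
    # peering_links    : iterable of (as_a, as_b) peer-to-peer links
    # ixp_participants : set of AS numbers observed at some IXP
    at_ixp, elsewhere = 0, 0
    for a, b in peering_links:
        if a in ixp_participants and b in ixp_participants:
            at_ixp += 1
        else:
            elsewhere += 1
    return at_ixp, elsewhere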
|
{
"cite_N": [
"@cite_37",
"@cite_34",
"@cite_7",
"@cite_11"
],
"mid": [
"1549014341",
"2123649205",
"2295430786",
"1574986862"
],
"abstract": [
"Internet eXchange Points (IXPs) are one of two primary methods for Autonomous Systems (ASes) to interconnect with each other for exchanging traffic and for global Internet reachability. This paper explores the properties of IXPs and their impact on the AS topology and AS business relations using Scriptroute and Skitter traceroute probes, BGP routing archives and other data. With these datasets we develop an algorithm to discover IXPs and infer ASes that participate at these IXPs. Using the discovered IXPs and their inferred AS participants, we analyze and characterize the properties of IXPs and their participants such as size, geographical locations. We also investigate the impact of IXPs on the global AS topology and business relations between ASes. Our study sheds light on the Internet interconnection practices and the evolution of the Internet, in particular, the potential role IXPs play in such evolution.",
"The topology of the Internet at the autonomous system (AS) level is not yet fully discovered despite significant research activity. The community still does not know how many links are missing, where these links are and finally, whether the missing links will change our conceptual model of the Internet topology. An accurate and complete model of the topology would be important for protocol design, performance evaluation and analyses. The goal of our work is to develop methodologies and tools to identify and validate such missing links between ASes. In this work, we develop several methods and identify a significant number of missing links, particularly of the peer-to-peer type. Interestingly, most of the missing AS links that we find exist as peer-to-peer links at the Internet exchange points (IXPs). First, in more detail, we provide a large-scale comprehensive synthesis of the available sources of information. We cross-validate and compare BGP routing tables, Internet routing registries, and traceroute data, while we extract significant new information from the less-studied Internet exchange points (IXPs). We identify 40 more edges and approximately 300 more peer-to-peer edges compared to commonly used data sets. All of these edges have been verified by either BGP tables or traceroute. Second, we identify properties of the new edges and quantify their effects on important topological properties. Given the new peer-to-peer edges, we find that for some ASes more than 50 of their paths stop going through their ISPs assuming policy-aware routing. A surprising observation is that the degree of an AS may be a poor indicator of which ASes it will peer with.",
"Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.",
"The lack of an accurate representation of the Internet topology at the Autonomous System (AS) level is a limiting factor in the design, simulation, and modeling efforts in inter-domain routing protocols. In this paper, we design and implement a framework for identifying AS links that are missing from the commonly-used Internet topology snapshots. We apply our framework and show that the new links that we find change the current Internet topology model in a non-trivial way. First, in more detail, our framework provides a large-scale comprehensive synthesis of the available sources of information. We cross-validate and compare BGP routing tables, Internet Routing Registries, and traceroute data, while we extract significant newinformation from the less-studied Internet Exchange Points (IXPs). We identify 40 more edges and approximately 300 more peer-to-peer edges compared to commonly used data sets. Second, we identify properties of the new edges and quantify their effects on important topological properties. Given the new peer-to-peer edges, we find that for some ASes more than 50 of their paths stop going through their ISP providers assuming policy-aware routing. A surprising observation is that the degree of a node may be a poor indicator of which ASes it will peer with: the two degrees differ by a factor of four or more in 50 of the peer-to-peer links. Finally, we attempt to estimate the number of edges we may still be missing."
]
}
|
1201.0070
|
2951948397
|
We propose a novel method for fitting planar B-spline curves to unorganized data points. In traditional methods, optimization of control points and foot points are performed in two very time-consuming steps in each iteration: 1) control points are updated by setting up and solving a linear system of equations; and 2) foot points are computed by projecting each data point onto a B-spline curve. Our method uses the L-BFGS optimization method to optimize control points and foot points simultaneously and therefore it does not need to perform either matrix computation or foot point projection in every iteration. As a result, our method is much faster than existing methods.
|
All the above methods update control points @math and location parameters @math in two interleaving steps. The main difference of our new method from these existing methods is that in every iteration we update @math and @math simultaneously. In this sense the most closely related work is @cite_15 , which also optimizes control points and location parameters simultaneously in every iteration. However, that method uses Gauss-Newton optimization and therefore still needs to evaluate and store the Jacobian matrices of the objective function, whose size depends on the number of data points and control points @cite_15 , as well as to solve a linear system of equations. In contrast, our approach based on L-BFGS does not need to formulate and solve any linear system of equations and is therefore faster than the method in @cite_15 , as we demonstrate in later experiments.
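A minimal sketch of this simultaneous optimization, using SciPy's L-BFGS-B implementation, is shown below; the knot vector, the initialization and the use of finite-difference gradients are simplifying assumptions (a real implementation would supply the analytic gradient), so this illustrates the idea rather than reproducing the authors' code:

import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def fit_bspline(points, n_ctrl=8, degree=3):
    # points: (m, 2) array of data points; control points and foot-point
    # parameters are optimized together in a single L-BFGS run.
    m = len(points)
    n_inner = n_ctrl - degree - 1
    knots = np.concatenate([np.zeros(degree + 1),
                            np.linspace(0.0, 1.0, n_inner + 2)[1:-1],
                            np.ones(degree + 1)])        # clamped knot vector
    t0 = np.linspace(0.0, 1.0, m)                        # initial parameters
    c0 = points[np.linspace(0, m - 1, n_ctrl).astype(int)]
    x0 = np.concatenate([c0.ravel(), t0])

    def objective(x):
        ctrl = x[:2 * n_ctrl].reshape(n_ctrl, 2)
        t = np.clip(x[2 * n_ctrl:], 0.0, 1.0)            # foot-point parameters
        curve = BSpline(knots, ctrl, degree)(t)
        return np.sum((curve - points) ** 2)

    res = minimize(objective, x0, method="L-BFGS-B")
    ctrl = res.x[:2 * n_ctrl].reshape(n_ctrl, 2)
    return knots, ctrl, np.clip(res.x[2 * n_ctrl:], 0.0, 1.0)

Note that no linear system is set up and no foot-point projection is performed; each iteration only requires objective (and, ideally, gradient) evaluations.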
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"1994853132"
],
"abstract": [
"Abstract For approximation of a set of points P j ϵ R d by a parametric curve X ( t ) the choice of parameters t j is essential. None of the parametrization strategies is optimal. To obtain good approximation results, reparametrization (parameter correction) of the points is necessary. In general, these reparametrization methods work only locally. We present a global reparametrization method which leads to dramatically better results."
]
}
|
1201.0023
|
1569259952
|
Stack allocation and first-class functions don't naturally mix together. In this paper we show that a type and effect system can be the detergent that helps these features form a nice emulsion. Our interest in this problem comes from our work on the Chapel language, but this problem is also relevant to lambda expressions in C++ and blocks in Objective C. The difficulty in mixing first-class functions and stack allocation is a tension between safety, efficiency, and simplicity. To preserve safety, one must worry about functions outliving the variables they reference: the classic upward funarg problem. There are systems which regain safety but lose programmer-predictable efficiency, and ones that provide both safety and efficiency, but give up simplicity by exposing regions to the programmer. In this paper we present a simple design that combines a type and effect system, for safety, with function-local storage, for control over efficiency.
|
Several programming languages employ type and effect systems to provide safety and efficiency, but at the cost of exposing regions to the programmer @cite_3 @cite_0 . (Several intermediate representations also use explicit regions, but because they are intermediate representations, regions are not necessarily exposed to the programmer @cite_1 @cite_2 .) Instead of regions, our design relies on traditional stack allocation, where parameters and local variables are implicitly allocated on the stack. The effects in our system are sets of variables instead of sets of region names.
|
{
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2116318340",
"2038677544",
"1981962444",
"1971349187"
],
"abstract": [
"The Real Time Specification for Java (RTSJ) allows a program to create real-time threads with hard real-time constraints. Real-time threads use region-based memory management to avoid unbounded pauses caused by interference from the garbage collector. The RTSJ uses runtime checks to ensure that deleting a region does not create dangling references and that real-time threads do not access references to objects allocated in the garbage-collected heap. This paper presents a static type system that guarantees that these runtime checks will never fail for well-typed programs. Our type system therefore 1) provides an important safety guarantee for real-time programs and 2) makes it possible to eliminate the runtime checks and their associated overhead.Our system also makes several contributions over previous work on region types. For object-oriented programs, it combines the benefits of region types and ownership types in a unified type system framework. For multithreaded programs, it allows long-lived threads to share objects without using the heap and without memory leaks. For real-time programs, it ensures that real-time threads do not interfere with the garbage collector. Our experience indicates that our type system is sufficiently expressive and requires little programming overhead, and that eliminating the RTSJ runtime checks using a static type system can significantly decrease the execution time of real-time programs.",
"An increasing number of systems rely on programming language technology to ensure safety and security of low-level code. Unfortunately, these systems typically rely on a complex, trusted garbage collector. Region-based type systems provide an alternative to garbage collection by making memory management explicit but verifiably safe. However, it has not been clear how to use regions in low-level, type-safe code.We present a compiler intermediate language, called the Capability Calculus, that supports region-based memory management, enjoys a provably safe type system, and is straightforward to compile to a typed assembly language. Source languages may be compiled to our language using known region inference algorithms. Furthermore, region lifetimes need not be lexically scoped in our language, yet the language may be checked for safety without complex analyses. Finally, our soundness proof is relatively simple, employing only standard techniques.The central novelty is the use of static capabilities to specify the permissibility of various operations, such as memory access and deallocation. In order to ensure capabilities are relinquished properly, the type system tracks aliasing information using a form of bounded quantification.",
"Cyclone is a type-safe programming language derived from C. The primary design goal of Cyclone is to let programmers control data representation and memory management without sacrificing type-safety. In this paper, we focus on the region-based memory management of Cyclone and its static typing discipline. The design incorporates several advancements, including support for region subtyping and a coherent integration with stack allocation and a garbage collector. To support separate compilation, Cyclone requires programmers to write some explicit region annotations, but a combination of default annotations, local type inference, and a novel treatment of region effects reduces this burden. As a result, we integrate C idioms in a region-based framework. In our experience, porting legacy C to Cyclone has required altering about 8 of the code; of the changes, only 6 (of the 8 ) were region annotations.",
""
]
}
|
1201.0119
|
2949115963
|
In this paper, a family of ant colony algorithms for data aggregation, called DAACA, is presented, which contains three phases: initialization, packet transmission and operations on pheromones. After initialization, each node estimates the remaining energy and the amount of pheromones to compute the probabilities used for dynamically selecting the next hop. After certain rounds of transmissions, the pheromones adjustment is performed periodically, which combines the advantages of both global and local pheromones adjustment for evaporating or depositing pheromones. Four different pheromones adjustment strategies are designed to achieve the globally optimal network lifetime, namely Basic-DAACA, ES-DAACA, MM-DAACA and ACS-DAACA. Compared with some other data aggregation algorithms, DAACA shows superior performance in terms of average degree of nodes, energy efficiency, prolonging the network lifetime, computation complexity and success ratio of one-hop transmission. Finally, we analyze the characteristics of DAACA in terms of robustness, fault tolerance and scalability.
|
In @cite_8 , a Local Minimum Spanning Tree algorithm called LMST is presented to establish the network topology. Although it can effectively reduce the average degree of nodes, some prominent problems still emerge. Each node needs to periodically calculate and update its local MST (Minimum Spanning Tree), which leads to a high computational overhead for each node. Moreover, each node needs to communicate with its neighbors to obtain their energy conditions, which still costs much energy.
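For reference, a compact sketch of one LMST round is given below (node positions and a common transmission range are assumed to be known; the original protocol exchanges position information via messages and also handles link-weight ties, which are omitted here):

import numpy as np

def lmst_links(pos, r):
    # pos: (n, 2) array of node coordinates; r: transmission range.
    # Each node builds a minimum spanning tree over its one-hop neighbourhood
    # (Prim's algorithm) and keeps only its on-tree, one-hop neighbours.
    n = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    links = set()
    for u in range(n):
        local = [u] + [v for v in range(n) if v != u and dist[u, v] <= r]
        in_tree, parent = {u}, {}
        while len(in_tree) < len(local):
            best = None
            for a in in_tree:
                for b in local:
                    if b in in_tree or dist[a, b] > r:
                        continue
                    if best is None or dist[a, b] < dist[best[0], best[1]]:
                        best = (a, b)
            if best is None:                 # local neighbourhood disconnected
                break
            parent[best[1]] = best[0]
            in_tree.add(best[1])
        for v, p in parent.items():
            if p == u:                       # on-tree node one hop from u
                links.add((u, v))
    return links

The periodic re-execution of this procedure at every node is precisely the computational and message overhead criticized above.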
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"1597141356"
],
"abstract": [
"In this paper, we present a minimum spanning tree (MST) based topology control algorithm, called local minimum spanning tree (LMST), for wireless multi-hop networks. In this algorithm, each node builds its local minimum spanning tree independently and only keeps on-tree nodes that are one-hop away as its neighbors in the final topology. We analytically prove several important properties of LMST: (1) the topology derived under LMST preserves the network connectivity; (2) the node degree of any node in the resulting topology is bounded by 6; and (3) the topology can be transformed into one with bidirectional links (without impairing the network connectivity) after removal of all uni-directional links. These results are corroborated in the simulation study."
]
}
|
1201.0119
|
2949115963
|
In this paper, a family of ant colony algorithms for data aggregation, called DAACA, is presented, which contains three phases: initialization, packet transmission and operations on pheromones. After initialization, each node estimates the remaining energy and the amount of pheromones to compute the probabilities used for dynamically selecting the next hop. After certain rounds of transmissions, the pheromones adjustment is performed periodically, which combines the advantages of both global and local pheromones adjustment for evaporating or depositing pheromones. Four different pheromones adjustment strategies are designed to achieve the globally optimal network lifetime, namely Basic-DAACA, ES-DAACA, MM-DAACA and ACS-DAACA. Compared with some other data aggregation algorithms, DAACA shows superior performance in terms of average degree of nodes, energy efficiency, prolonging the network lifetime, computation complexity and success ratio of one-hop transmission. Finally, we analyze the characteristics of DAACA in terms of robustness, fault tolerance and scalability.
|
In @cite_32 , a localized, self-organizing, robust and energy-efficient data aggregation algorithm named L-PEDAP is proposed, which combines LMST with the RNG @cite_35 . Although it is shown to be capable of prolonging the network lifetime, its topology construction procedure is nearly identical to that of LMST; hence, it cannot be considered an energy-efficient algorithm.
|
{
"cite_N": [
"@cite_35",
"@cite_32"
],
"mid": [
"2163227453",
"2162778967"
],
"abstract": [
"Results of neighborhood graphs are surveyed. Properties, bounds on the size, algorithms, and variants of the neighborhood graphs are discussed. Numerous applications including computational morphology, spatial analysis, pattern classification, and databases for computer vision are described. >",
"We propose localized, self organizing, robust, and energy-efficient data aggregation tree approaches for sensor networks, which we call Localized Power-Efficient Data Aggregation Protocols (L-PEDAPs). They are based on topologies, such as LMST and RNG, that can approximate minimum spanning tree and can be efficiently computed using only position or distance information of one-hop neighbors. The actual routing tree is constructed over these topologies. We also consider different parent selection strategies while constructing a routing tree. We compare each topology and parent selection strategy and conclude that the best among them is the shortest path strategy over LMST structure. Our solution also involves route maintenance procedures that will be executed when a sensor node fails or a new node is added to the network. The proposed solution is also adapted to consider the remaining power levels of nodes in order to increase the network lifetime. Our simulation results show that by using our power-aware localized approach, we can almost have the same performance of a centralized solution in terms of network lifetime, and close to 90 percent of an upper bound derived here."
]
}
|
1201.0119
|
2949115963
|
In this paper, a family of ant colony algorithms for data aggregation, called DAACA, is presented, which contains three phases: initialization, packet transmission and operations on pheromones. After initialization, each node estimates the remaining energy and the amount of pheromones to compute the probabilities used for dynamically selecting the next hop. After certain rounds of transmissions, the pheromones adjustment is performed periodically, which combines the advantages of both global and local pheromones adjustment for evaporating or depositing pheromones. Four different pheromones adjustment strategies are designed to achieve the globally optimal network lifetime, namely Basic-DAACA, ES-DAACA, MM-DAACA and ACS-DAACA. Compared with some other data aggregation algorithms, DAACA shows superior performance in terms of average degree of nodes, energy efficiency, prolonging the network lifetime, computation complexity and success ratio of one-hop transmission. Finally, we analyze the characteristics of DAACA in terms of robustness, fault tolerance and scalability.
|
To reduce the cost of constructing and maintaining an energy-efficient topology, a heuristic method called ACA (Ant Colony Algorithm), based on ant colony optimization, is proposed in @cite_28 . In ant colony optimization, a colony of artificial ants constructs solutions guided by pheromone trails and heuristic information @cite_16 ; this behavior enables the ants to find the shortest paths between the food source and the nest in a randomized manner. The authors design their own rules for depositing and evaporating pheromones. However, the conditions for depositing pheromones are easily satisfied, and each deposit requires a node to communicate with its neighbors. As a result, many messages are exchanged within each pheromone adjustment period, which incurs a substantial energy overhead; therefore, ACA cannot be considered an energy-efficient algorithm either.
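As a concrete illustration of the generic mechanism underlying @cite_28 and @cite_16 , the following minimal Python sketch shows the standard ant-colony transition rule (the next hop is chosen with probability proportional to pheromone strength and heuristic desirability) together with evaporation and deposit updates. The exponents, evaporation rate, deposit amount, and inverse-cost heuristic are illustrative assumptions, not the specific rules of ACA or DAACA.

import random

def choose_next_hop(current, neighbors, pheromone, eta, alpha=1.0, beta=2.0):
    """Pick the next hop with probability proportional to
    pheromone[(current, j)]**alpha * eta[(current, j)]**beta (standard ACO rule)."""
    weights = [(pheromone[(current, j)] ** alpha) * (eta[(current, j)] ** beta)
               for j in neighbors]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for j, w in zip(neighbors, weights):
        acc += w
        if acc >= r:
            return j
    return neighbors[-1]

def update_pheromones(pheromone, used_links, deposit=1.0, rho=0.1):
    """Evaporate all trails, then deposit on the links used in this round."""
    for link in pheromone:
        pheromone[link] *= (1.0 - rho)          # evaporation on every link
    for link in used_links:
        pheromone[link] += deposit              # reinforcement of used links

# Tiny example: node 0 chooses between neighbors 1 and 2.
pheromone = {(0, 1): 1.0, (0, 2): 1.0}
eta = {(0, 1): 1.0 / 3.0, (0, 2): 1.0 / 5.0}    # e.g. inverse link cost
nxt = choose_next_hop(0, [1, 2], pheromone, eta)
update_pheromones(pheromone, [(0, nxt)])
print(nxt, pheromone)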
|
{
"cite_N": [
"@cite_28",
"@cite_16"
],
"mid": [
"2011092354",
"2006805041"
],
"abstract": [
"Data aggregation is important in energy constraint wireless sensor networks which exploits correlated sensing data and aggregates at the intermediate nodes to reduce the number of messages exchanged network. This paper considers the problem of constructing data aggregation tree in a wireless sensor network for a group of source nodes to send sensory data to a single sink node. The ant colony system provides a natural and intrinsic way of exploring search space in determining data aggregation. Moreover, we propose an ant colony algorithm for data aggregation in wireless sensor networks. Every ant will explore all possible paths from the source node to the sink node. The data aggregation tree is constructed by the accumulated pheromone. Simulations have shown that our algorithm can reduce significant energy costs.",
"Wireless sensor network localization is an important area that attracted significant research interest. This interest is expected to grow further with the proliferation of wireless sensor network applications. This paper provides an overview of the measurement techniques in sensor network localization and the one-hop localization algorithms based on these measurements. A detailed investigation on multi-hop connectivity-based and distance-based localization algorithms are presented. A list of open research problems in the area of distance-based sensor network localization is provided with discussion on possible approaches to them."
]
}
|
1112.6235
|
2000235497
|
We consider a situation where the state of a system is represented by a real-valued vector. Under normal circumstances, the vector is zero, while an event manifests as non-zero entries in this vector, possibly few. Our interest is in the design of algorithms that can reliably detect events (i.e., test whether the vector is zero or not) with the least amount of information. We place ourselves in a situation, now common in the signal processing literature, where information about the vector comes in the form of noisy linear measurements. We derive information bounds in an active learning setup and exhibit some simple near-optimal algorithms. In particular, our results show that the task of detection within this setting is at once much easier, simpler and different than the tasks of estimation and support recovery.
|
We mention that the present paper may be seen as a companion paper to @cite_0 which considers the tasks of estimation and support recovery in the same setting.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2950626207"
],
"abstract": [
"Suppose we can sequentially acquire arbitrary linear measurements of an n-dimensional vector x resulting in the linear model y = Ax + z, where z represents measurement noise. If the signal is known to be sparse, one would expect the following folk theorem to be true: choosing an adaptive strategy which cleverly selects the next row of A based on what has been previously observed should do far better than a nonadaptive strategy which sets the rows of A ahead of time, thus not trying to learn anything about the signal in between observations. This paper shows that the folk theorem is false. We prove that the advantages offered by clever adaptive strategies and sophisticated estimation procedures---no matter how intractable---over classical compressed acquisition recovery schemes are, in general, minimal."
]
}
|
1112.5767
|
2953310103
|
In this paper, we investigate joint optimal relay selection and resource allocation under bandwidth exchange (BE) enabled incentivized cooperative forwarding in wireless networks. We consider an autonomous network where N nodes transmit data in the uplink to an access point (AP) base station (BS). We consider the scenario where each node gets an initial amount (equal, optimal based on direct path or arbitrary) of bandwidth, and uses this bandwidth as a flexible incentive for two hop relaying. We focus on alpha-fair network utility maximization (NUM) and outage reduction in this environment. Our contribution is two-fold. First, we propose an incentivized forwarding based resource allocation algorithm which maximizes the global utility while preserving the initial utility of each cooperative node. Second, defining the link weight of each relay pair as the utility gain due to cooperation (over noncooperation), we show that the optimal relay selection in alpha-fair NUM reduces to the maximum weighted matching (MWM) problem in a non-bipartite graph. Numerical results show that the proposed algorithms provide 20-25% gain in spectral efficiency and 90-98% reduction in outage probability.
|
Second, our proposed decode-and-forward (DF) BE-enabled resource allocation maximizes the sum of the utilities while preserving the initial utility of each individual node. Previously, the authors of @cite_13 considered BE from a simpler two-hop relaying perspective. The authors of @cite_11 proposed a similar half-duplex DF relaying approach; however, they considered a commercial relay network in which the relay does not have its own data @cite_11 . To the best of our knowledge, the proposed BE-based resource allocation algorithm has not been investigated before.
|
{
"cite_N": [
"@cite_13",
"@cite_11"
],
"mid": [
"1971045532",
"2148861839"
],
"abstract": [
"We investigate an incentive mechanism called Bandwidth Exchange (BE) for cooperative forwarding where transmission bandwidth is used as a flexible resource. We focus on a network where two nodes, communicating with the base station (BS) access point (AP), initially get optimal amount of bandwidth based on direct path transmission and then use their individual bandwidths as flexible incentives for two hop relaying. In our proposed scenario, the forwarder node sends its own data along with the data of the sender in exchange for additional transmission bandwidth, provided by the sender. We compare the performance of the proposed mechanism with optimal bandwidth and power allocation based direct transmission. We use sum rate, max-min rate and min-max power as the evaluation criteria and prove the convex concave nature of the optimization problem formulations. Our numerical analysis shows that the BE based cooperative forwarding extends the coverage in wireless networks when the far node falls in outage under direct transmission. Further, BE significantly improves the max-min rate and min-max power performance of the network.",
"In a multiple-antenna relay channel, the full-duplex cut-set capacity upper bound and decode-and-forward rate are formulated as convex optimization problems. For half-duplex relaying, bandwidth allocation and transmit signals are optimized jointly. Moreover, achievable rates based on the compress-and-forward strategy are presented using rate-distortion and Wyner-Ziv compression schemes."
]
}
|
1112.6399
|
1483683744
|
Recently, there has been much interest in spectral approaches to learning manifolds---so-called kernel eigenmap methods. These methods have had some successes, but their applicability is limited because they are not robust to noise. To address this limitation, we look at two-manifold problems, in which we simultaneously reconstruct two related manifolds, each representing a different view of the same data. By solving these interconnected learning problems together and allowing information to flow between them, two-manifold algorithms are able to succeed where a non-integrated approach would fail: each view allows us to suppress noise in the other, reducing bias in the same way that an instrumental variable allows us to remove bias in a linear dimensionality reduction problem. We propose a class of algorithms for two-manifold problems, based on spectral decomposition of cross-covariance operators in Hilbert space. Finally, we discuss situations where two-manifold problems are useful, and demonstrate that solving a two-manifold problem can aid in learning a nonlinear dynamical system from limited data.
|
While preparing this manuscript, we learned of the simultaneous and independent work of @cite_3 . That paper defines one particular two-manifold algorithm, maximum covariance unfolding (MCU), by extending maximum variance unfolding, but it does not discuss how to extend other one-manifold methods. It also does not discuss any asymptotic properties of the MCU method, such as consistency.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2095762537"
],
"abstract": [
"We propose maximum covariance unfolding (MCU), a manifold learning algorithm for simultaneous dimensionality reduction of data from different input modalities. Given high dimensional inputs from two different but naturally aligned sources, MCU computes a common low dimensional embedding that maximizes the cross-modal (inter-source) correlations while preserving the local (intra-source) distances. In this paper, we explore two applications of MCU. First we use MCU to analyze EEG-fMRI data, where an important goal is to visualize the fMRI voxels that are most strongly correlated with changes in EEG traces. To perform this visualization, we augment MCU with an additional step for metric learning in the high dimensional voxel space. Second, we use MCU to perform cross-modal retrieval of matched image and text samples from Wikipedia. To manage large applications of MCU, we develop a fast implementation based on ideas from spectral graph theory. These ideas transform the original problem for MCU, one of semidefinite programming, into a simpler problem in semidefinite quadratic linear programming."
]
}
|
1112.6399
|
1483683744
|
Recently, there has been much interest in spectral approaches to learning manifolds---so-called kernel eigenmap methods. These methods have had some successes, but their applicability is limited because they are not robust to noise. To address this limitation, we look at two-manifold problems, in which we simultaneously reconstruct two related manifolds, each representing a different view of the same data. By solving these interconnected learning problems together and allowing information to flow between them, two-manifold algorithms are able to succeed where a non-integrated approach would fail: each view allows us to suppress noise in the other, reducing bias in the same way that an instrumental variable allows us to remove bias in a linear dimensionality reduction problem. We propose a class of algorithms for two-manifold problems, based on spectral decomposition of cross-covariance operators in Hilbert space. Finally, we discuss situations where two-manifold problems are useful, and demonstrate that solving a two-manifold problem can aid in learning a nonlinear dynamical system from limited data.
|
A problem similar to the two-manifold problem is manifold alignment @cite_39 @cite_18 , which builds connections between two or more data sets by aligning their underlying manifolds. Generally, manifold alignment algorithms either first learn the manifolds separately and then attempt to align them based on their low-dimensional geometric properties, or they take the union of several manifolds and attempt to learn a latent space that preserves the geometry of all of them @cite_18 . Our aim is different: we assume paired data, whereas manifold alignment methods do not; and we focus on learning algorithms that discover both manifold structure (as kernel eigenmap methods do) and connections between manifolds (as provided by, e.g., a top-level learning problem defined between two manifolds).
|
{
"cite_N": [
"@cite_18",
"@cite_39"
],
"mid": [
"2397338926",
"194200989"
],
"abstract": [
"Manifold alignment has been found to be useful in many fields of machine learning and data mining. In this paper we summarize our work in this area and introduce a general framework for manifold alignment. This framework generates a family of approaches to align manifolds by simultaneously matching the corresponding instances and preserving the local geometry of each given manifold. Some approaches like semi-supervised alignment and manifold projections can be obtained as special cases. Our framework can also solve multiple manifold alignment problems and be adapted to handle the situation when no correspondence information is available. The approaches are described and evaluated both theoretically and experimentally, providing results showing useful knowledge transfer from one domain to another. Novel applications of our methods including identification of topics shared by multiple document collections, and biological structure alignment are discussed in the paper.",
"In this paper, we study a family of semisupervised learning algorithms for “aligning” different data sets that are characterized by the same underlying manifold. The optimizations of these algorithms are based on graphs that provide a discretized approximation to the manifold. Partial alignments of the data sets—obtained from prior knowledge of their manifold structure or from pairwise correspondences of subsets of labeled examples— are completed by integrating supervised signals with unsupervised frameworks for manifold learning. As an illustration of this semisupervised setting, we show how to learn mappings between different data sets of images that are parameterized by the same underlying modes of variability (e.g., pose and viewing angle). The curse of dimensionality in these problems is overcome by exploiting the low dimensional structure of image manifolds."
]
}
|
1112.6399
|
1483683744
|
Recently, there has been much interest in spectral approaches to learning manifolds---so-called kernel eigenmap methods. These methods have had some successes, but their applicability is limited because they are not robust to noise. To address this limitation, we look at two-manifold problems, in which we simultaneously reconstruct two related manifolds, each representing a different view of the same data. By solving these interconnected learning problems together and allowing information to flow between them, two-manifold algorithms are able to succeed where a non-integrated approach would fail: each view allows us to suppress noise in the other, reducing bias in the same way that an instrumental variable allows us to remove bias in a linear dimensionality reduction problem. We propose a class of algorithms for two-manifold problems, based on spectral decomposition of cross-covariance operators in Hilbert space. Finally, we discuss situations where two-manifold problems are useful, and demonstrate that solving a two-manifold problem can aid in learning a nonlinear dynamical system from limited data.
|
Manifold kernel dimension reduction @cite_21 finds an embedding of the covariates @math using a kernel eigenmap method, and then attempts to find a linear transformation of some of the dimensions of the embedded points that predicts the response variables @math . The response variables are constrained to be linear in the manifold coordinates, so the problem is quite different from a two-manifold problem.
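To make the two-stage structure concrete, the following sketch implements the generic pipeline described above: embed the covariates with a kernel eigenmap method, then fit a linear map from the embedded coordinates to the responses. It illustrates only the pattern, not the cross-covariance-operator optimization of @cite_21 ; the synthetic data and the scikit-learn components used here are our own illustrative choices.

import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic covariates on a 1-D manifold (a noisy circle in R^3).
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t), 0.1 * rng.standard_normal(300)]
# Responses that vary smoothly (approximately linearly) along the manifold.
y = np.cos(t) + 0.5 * np.sin(t) + 0.05 * rng.standard_normal(300)

# Stage 1: kernel eigenmap embedding of the covariates.
Z = SpectralEmbedding(n_components=2, affinity="nearest_neighbors",
                      n_neighbors=10).fit_transform(X)

# Stage 2: responses modelled as a linear function of the embedded coordinates.
lin = LinearRegression().fit(Z, y)
print("R^2 of the linear fit on the embedding:", lin.score(Z, y))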
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2130606707"
],
"abstract": [
"We study the problem of discovering a manifold that best preserves information relevant to a nonlinear regression. Solving this problem involves extending and uniting two threads of research. On the one hand, the literature on sufficient dimension reduction has focused on methods for finding the best linear subspace for nonlinear regression; we extend this to manifolds. On the other hand, the literature on manifold learning has focused on unsupervised dimensionality reduction; we extend this to the supervised setting. Our approach to solving the problem involves combining the machinery of kernel dimension reduction with Laplacian eigenmaps. Specifically, we optimize cross-covariance operators in kernel feature spaces that are induced by the normalized graph Laplacian. The result is a highly flexible method in which no strong assumptions are made on the regression function or on the distribution of the covariates. We illustrate our methodology on the analysis of global temperature data and image manifolds."
]
}
|
1112.5671
|
2950403898
|
We present a symbolic-execution-based algorithm that for a given program and a given program location produces a nontrivial necessary condition on input values to drive the program execution to the given location. We also propose an application of necessary conditions in contemporary bug-finding and test-generation tools. Experimental results show that the presented technique can significantly improve performance of the tools.
|
Early work on symbolic execution @cite_3 @cite_25 showed its effectiveness in test generation. King @cite_3 further showed that symbolic execution can bring more automation into Floyd's inductive proving method @cite_27 . Nevertheless, loops as the source of the path explosion problem were not yet a central concern.
|
{
"cite_N": [
"@cite_27",
"@cite_25",
"@cite_3"
],
"mid": [
"2069300761",
"1995109607",
"2101512909"
],
"abstract": [
"This paper attempts to provide an adequate basis for formal definitions of the meanings of programs in appropriately defined programming languages, in such a way that a rigorous standard is established for proofs about computer programs, including proofs of correctness, equivalence, and termination. The basis of our approach is the notion of an interpretation of a program: that is, an association of a proposition with each connection in the flow of control through a program, where the proposition is asserted to hold whenever that connection is taken. To prevent an interpretation from being chosen arbitrarily, a condition is imposed on each command of the program. This condition guarantees that whenever a command is reached by way of a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. Then by induction on the number of commands executed, one sees that if a program is entered by a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. By this means, we may prove certain properties of programs, particularly properties of the form: ‘If the initial values of the program variables satisfy the relation R l, the final values on completion will satisfy the relation R 2’.",
"Symbolic testing and a symbolic evaluation system called DISSECT are described. The principle features of DISSECT are outlined. The results of two classes of experiments in the use of symbolic evaluadon are summarized. Several classes of program errors are defined and the reliability of symbolic testing in finding bugs is related to the classes of errors. The relationship of symbolic evaluation systems like DISSECT to classes of program errors and to other kinds of program testing and program analysis tools is also discussed. Desirable improvements in DISSECT, whose importance was revealed by the experiments, are mentioned.",
"This paper describes the symbolic execution of programs. Instead of supplying the normal inputs to a program (e.g. numbers) one supplies symbols representing arbitrary values. The execution proceeds as in a normal execution except that values may be symbolic formulas over the input symbols. The difficult, yet interesting issues arise during the symbolic execution of conditional branch type statements. A particular system called EFFIGY which provides symbolic execution for program testing and debugging is also described. It interpretively executes programs written in a simple PL I style programming language. It includes many standard debugging features, the ability to manage and to prove things about symbolic expressions, a simple program testing manager, and a program verifier. A brief discussion of the relationship between symbolic execution and program proving is also included."
]
}
|
1112.5629
|
2153066468
|
This paper considers the problem of completing a matrix with many missing entries under the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. This generalizes the standard low-rank matrix completion problem to situations in which the matrix rank can be quite high or even full rank. Since the columns belong to a union of subspaces, this problem may also be viewed as a missing-data version of the subspace clustering problem. Let X be an n x N matrix whose (complete) columns lie in a union of at most k subspaces, each of rank <= r < n, and assume N >> kn. The main result of the paper shows that under mild assumptions each column of X can be perfectly recovered with high probability from an incomplete version so long as at least CrNlog^2(n) entries of X are observed uniformly at random, with C>1 a constant depending on the usual incoherence conditions, the geometrical arrangement of subspaces, and the distribution of columns over the subspaces. The result is illustrated with numerical experiments and an application to Internet distance matrix completion and topology identification.
|
The proof of the main result draws on ideas from matrix completion theory, subspace learning and detection with missing data, and subspace clustering. One key ingredient in our approach is the celebrated body of results on low-rank matrix completion @cite_12 @cite_0 @cite_5 . Unfortunately, in many real-world problems where missing data is present, particularly when the data is generated from a union of subspaces, the matrices involved can have very large rank ( e.g., the networking data in @cite_17 ). Thus, these prior results would require that effectively all of the entries be observed in order to accurately reconstruct the matrix.
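The following short numerical sketch illustrates why a single global low-rank model is inadequate in this regime: a matrix whose columns are drawn from a union of k rank-r subspaces typically has rank kr, far above the rank of any individual subspace. The dimensions below are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
n, r, k, cols_per_subspace = 100, 5, 6, 50   # ambient dim, subspace rank, #subspaces, columns each

# Draw k independent rank-r subspaces and sample columns from each.
blocks = []
for _ in range(k):
    U = rng.standard_normal((n, r))          # basis of one subspace
    W = rng.standard_normal((r, cols_per_subspace))
    blocks.append(U @ W)
X = np.hstack(blocks)                        # n x (k * cols_per_subspace)

print("rank of each block:", np.linalg.matrix_rank(blocks[0]))   # r = 5
print("rank of the full matrix:", np.linalg.matrix_rank(X))      # typically k * r = 30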
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"2182949771",
"",
"2120872934",
"2159326294"
],
"abstract": [
"This paper is concerned with the problem of recov- ering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible, but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solu- tions meaningful. This paper presents optimality results quanti- fying the minimum number of entries needed to recover a matrix of rank exactly by any method whatsoever (the information theo- retic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, re- covery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theo- retic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of samples are needed to recover a random matrix of rank by any method, and to be sure, nuclear norm min- imization succeeds as soon as the number of entries is of the form .",
"",
"This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candes and Recht (2009), Candes and Tao (2009), and (2009). The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.",
"Despite many efforts over the past decade, the ability to generate topological maps of the Internet at the router-level accurately and in a timely fashion remains elusive. Mapping campaigns commonly involve traceroute-like probing that are usually non-adaptive and incomplete, thus revealing only a portion of the underlying topology. In this paper we demonstrate that standard probing methods yield datasets that implicitly contain information about much more than just the directly observed links and routers. Each probe, in addition to the underlying domain knowledge, returns information that places constraints on the underlying topology, and by integrating a large number of such constraints it is possible to accurately infer the existence of unseen components of the Internet. We describe DomainImpute, a novel data analysis methodology designed to accurately infer the unseen hop-count distances between observed routers. We use both synthetic and a large empirical dataset to validate the proposed methods. On our empirical real world dataset, we show that our methods can estimate over 55 of the unseen distances between observed routers to within a one-hop error."
]
}
|
1112.5629
|
2153066468
|
This paper considers the problem of completing a matrix with many missing entries under the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. This generalizes the standard low-rank matrix completion problem to situations in which the matrix rank can be quite high or even full rank. Since the columns belong to a union of subspaces, this problem may also be viewed as a missing-data version of the subspace clustering problem. Let X be an n x N matrix whose (complete) columns lie in a union of at most k subspaces, each of rank <= r < n, and assume N >> kn. The main result of the paper shows that under mild assumptions each column of X can be perfectly recovered with high probability from an incomplete version so long as at least CrNlog^2(n) entries of X are observed uniformly at random, with C>1 a constant depending on the usual incoherence conditions, the geometrical arrangement of subspaces, and the distribution of columns over the subspaces. The result is illustrated with numerical experiments and an application to Internet distance matrix completion and topology identification.
|
Our work builds upon the results of @cite_14 , which quantifies the deviation of an incomplete vector norm with respect to the incoherence of the sampling pattern. While this work also examines subspace detection using incomplete data, it assumes complete knowledge of the subspaces.
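For intuition, the sketch below implements a simplified version of the kind of test studied in @cite_14 : given a known subspace basis, the observed entries of a partially observed vector are projected onto the rows of the basis restricted to the observed coordinates, and a small residual indicates consistency with the subspace. The exact statistic, thresholds, and guarantees of @cite_14 are not reproduced here; the dimensions and sampling are illustrative.

import numpy as np

def incomplete_residual(v_obs, omega, U):
    """Residual energy of the observed entries after projecting onto the
    subspace spanned by U, restricted to the observed rows omega."""
    U_omega = U[omega, :]                       # |omega| x r
    coeffs, *_ = np.linalg.lstsq(U_omega, v_obs, rcond=None)
    return np.linalg.norm(v_obs - U_omega @ coeffs) ** 2

rng = np.random.default_rng(2)
n, r, m = 200, 5, 40                            # ambient dim, subspace rank, #observed entries
U = np.linalg.qr(rng.standard_normal((n, r)))[0]

omega = rng.choice(n, size=m, replace=False)    # observed coordinates
x_in  = U @ rng.standard_normal(r)              # vector lying in the subspace
x_out = rng.standard_normal(n)                  # generic vector, not in the subspace

print("residual, in-subspace vector:    ", incomplete_residual(x_in[omega],  omega, U))
print("residual, out-of-subspace vector:", incomplete_residual(x_out[omega], omega, U))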
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2102628473"
],
"abstract": [
"We consider the problem of deciding whether a highly incomplete signal lies within a given subspace. This problem, Matched Subspace Detection, is a classical, well-studied problem when the signal is completely observed. High-dimensional testing problems in which it may be prohibitive or impossible to obtain a complete observation motivate this work. The signal is represented as a vector in ℝn, but we only observe m ≪ n of its elements.We show that reliable detection is possible, under mild incoherence conditions, as long as m is slightly greater than the dimension of the subspace in question."
]
}
|
1112.5629
|
2153066468
|
This paper considers the problem of completing a matrix with many missing entries under the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. This generalizes the standard low-rank matrix completion problem to situations in which the matrix rank can be quite high or even full rank. Since the columns belong to a union of subspaces, this problem may also be viewed as a missing-data version of the subspace clustering problem. Let X be an n x N matrix whose (complete) columns lie in a union of at most k subspaces, each of rank <= r < n, and assume N >> kn. The main result of the paper shows that under mild assumptions each column of X can be perfectly recovered with high probability from an incomplete version so long as at least CrNlog^2(n) entries of X are observed uniformly at random, with C>1 a constant depending on the usual incoherence conditions, the geometrical arrangement of subspaces, and the distribution of columns over the subspaces. The result is illustrated with numerical experiments and an application to Internet distance matrix completion and topology identification.
|
While research that examines subspace learning has been presented in @cite_8 , the work in this paper differs in its concentration on learning from incomplete observations ( i.e., when elements of the matrix are missing) and in its methodological focus ( i.e., nearest-neighbor clustering versus a multiscale singular value decomposition approach).
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2050333948"
],
"abstract": [
"Modeling data by multiple low-dimensional planes is an important problem in many applications such as computer vision and pattern recognition. In the most general setting where only coordinates of the data are given, the problem asks to determine the optimal model parameters (i.e., number of planes and their dimensions), estimate the model planes, and cluster the data accordingly. Though many algorithms have been proposed, most of them need to assume prior knowledge of the model parameters and thus address only the last two components of the problem. In this paper we propose an efficient algorithm based on multiscale SVD analysis and spectral methods to tackle the problem in full generality. We also demonstrate its state-of-the-art performance on both synthetic and real data."
]
}
|
1112.4237
|
2949364308
|
Researchers have proposed formal definitions of quantitative information flow based on information theoretic notions such as the Shannon entropy, the min entropy, the guessing entropy, belief, and channel capacity. This paper investigates the hardness of precisely checking the quantitative information flow of a program according to such definitions. More precisely, we study the "bounding problem" of quantitative information flow, defined as follows: Given a program M and a positive real number q, decide if the quantitative information flow of M is less than or equal to q. We prove that the bounding problem is not a k-safety property for any k (even when q is fixed, for the Shannon-entropy-based definition with the uniform distribution), and therefore is not amenable to the self-composition technique that has been successfully applied to checking non-interference. We also prove complexity theoretic hardness results for the case when the program is restricted to loop-free boolean programs. Specifically, we show that the problem is PP-hard for all definitions, showing a gap with non-interference which is coNP-complete for the same class of programs. The paper also compares the results with the recently proved results on the comparison problems of quantitative information flow.
|
This work continues our recent research @cite_13 on investigating the hardness and possibilities of verifying quantitative information flow according to the formal definitions proposed in the literature @cite_15 @cite_4 @cite_2 @cite_26 @cite_23 @cite_10 @cite_9 @cite_27 @cite_28 @cite_7 @cite_32 @cite_17 . Much of the previous research has focused on information-theoretic properties of the definitions and has proposed approximate (i.e., incomplete and/or unsound) methods for checking and inferring quantitative information flow according to such definitions. In contrast, this paper (along with our recent paper @cite_13 ) investigates the hardness and possibilities of precisely checking and inferring quantitative information flow according to these definitions.
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_10",
"@cite_9",
"@cite_32",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"2078114603",
"1882297107",
"",
"",
"2119966192",
"",
"2167136065",
"",
"",
"",
"",
"2097151854",
"2116660249"
],
"abstract": [
"There is a clear intuitive connection between the notion of leakage of information in a program and concepts from information theory. This intuition has not been satisfactorily pinned down, until now. In particular, previous information-theoretic models of programs are imprecise, due to their overly conservative treatment of looping constructs. In this paper we provide the first precise information-theoretic semantics of looping constructs. Our semantics describes both the amount and rate of leakage; if either is small enough, then a program might be deemed \"secure\". Using the semantics we provide an investigation and classification of bounded and unbounded covert channels.",
"From the Preface (See Front Matter for full Preface) Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data. Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further. Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.",
"",
"",
"We present a model of adaptive side-channel attacks which we combine with information-theoretic metrics to quantify the information revealed to an attacker. This allows us to express an attacker's remaining uncertainty about a secret as a function of the number of side-channel measurements made. We present algorithms and approximation techniques for computing this measure. We also give examples of how they can be used to analyze the resistance of hardware implementations of cryptographic functions to both timing and power attacks.",
"",
"Recent research in quantitative theories for information-hiding topics, such as Anonymity and Secure Information Flow, tend to converge towards the idea of modeling the system as a noisy channel in the information-theoretic sense. The notion of information leakage, or vulnerability of the system, has been related in some approaches to the concept of mutual information of the channel. A recent work of Smith has shown, however, that if the attack consists in one single try, then the mutual information and other concepts based on Shannon entropy are not suitable, and he has proposed to use Renyi's min-entropy instead. In this paper, we consider and compare two different possibilities of defining the leakage, based on the Bayes risk, a concept related to Renyi min-entropy.",
"",
"",
"",
"",
"There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and side-channel analysis. Such theories offer an attractive way to relax the standard noninterference properties, letting us tolerate \"small\" leaks that are necessary in practice. The emerging consensus is that quantitative information flow should be founded on the concepts of Shannon entropy and mutual information . But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat. In this paper, we focus on the threat that an attack will allow the secret to be guessed correctly in one try. With respect to this threat model, we argue that the consensus definitions actually fail to give good security guarantees--the problem is that a random variable can have arbitrarily large Shannon entropy even if it is highly vulnerable to being guessed. We then explore an alternative foundation based on a concept of vulnerability (closely related to Bayes risk ) and which measures uncertainty using Renyi's min-entropy , rather than Shannon entropy.",
"We establish formal bounds for the number of min-entropy bits that can be extracted in a timing attack against a cryptosystem that is protected by blinding, the state-of-the art countermeasure against timing attacks. Compared with existing bounds, our bounds are both tighter and of greater operational significance, in that they directly address the key’s one-guess vulnerability. Moreover, we show that any semantically secure public-key cryptosystem remains semantically secure in the presence of timing attacks, if the implementation is protected by blinding and bucketing. This result shows that, by considering (and justifying) more optimistic models of leakage than recent proposals for leakage-resilient cryptosystems, one can achieve provable resistance against side-channel attacks for standard cryptographic primitives."
]
}
|
1112.4394
|
2953110594
|
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
|
Plate @cite_11 constructs a form of additive GP, but using only the first-order and @math th-order terms. This model is motivated by the desire to trade off the interpretability of first-order models against the flexibility of full-order models. Our experiments show that the intermediate degrees of interaction often contribute most of the variance.
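The following minimal sketch shows the kind of kernel structure described above: a sum of one-dimensional squared-exponential kernels (the first-order, additive part) plus a single product kernel over all D inputs (the D-th order part), with a weight trading the two off. The lengthscale and trade-off weight are illustrative assumptions, not values from @cite_11 .

import numpy as np

def se(a, b, lengthscale=1.0):
    """1-D squared-exponential kernel evaluated on vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def first_plus_full_order_kernel(X, Y, w=0.5, lengthscale=1.0):
    """First-order part: sum over dimensions of 1-D SE kernels.
    D-th order part: product over dimensions of the same 1-D SE kernels."""
    per_dim = [se(X[:, d], Y[:, d], lengthscale) for d in range(X.shape[1])]
    first_order = sum(per_dim)
    full_order = np.ones_like(first_order)
    for K in per_dim:
        full_order = full_order * K
    return (1.0 - w) * first_order + w * full_order

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 4))       # 5 points in D = 4 dimensions
K = first_plus_full_order_kernel(X, X)
print(K.shape, np.allclose(K, K.T))   # a valid, symmetric Gram matrix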
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2005695783"
],
"abstract": [
"One of the widely acknowledged drawbacks of flexible statistical models is that the fitted models are often extremely difficult to interpret. However, if flexible models are constrained to be additive the fitted models are much easier to interpret, as each input can be considered independently. The problem with additive models is that they cannot provide an accurate model if the phenomenon being modeled is not additive. This paper shows that a tradeoff between accuracy and additivity can be implemented easily in Gaussian process models, which are a type of flexible model closely related to feedforward neural networks. One can fit a series of Gaussian process models that begins with the completely flexible and are constrained to be progressively more additive, and thus progressively more interpretable. Observations of how the degree of non-additivity and the test error change as the models become more additive give insight into the importance of interactions in a particular model. Fitted models in the series can also be interpreted graphically with a technique for visualizing the effects of inputs in non-additive models that was adapted from plots for generalized additive models. This visualization technique shows the overall effects of different inputs and also shows which inputs are involved in interactions and how strong those interactions are."
]
}
|
1112.4394
|
2953110594
|
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
|
A related functional ANOVA GP model @cite_0 decomposes the function into a weighted sum of GPs. However, the effect of a particular degree of interaction cannot be quantified by that approach. In addition, the Gibbs sampling procedure used in @cite_0 is computationally expensive.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2085571932"
],
"abstract": [
"Functional analysis of variance (ANOVA) models partition a func- tional response according to the main efiects and interactions of various factors. This article develops a general framework for functional ANOVA modeling from a Bayesian viewpoint, assigning Gaussian process prior distributions to each batch of functional efiects. We discuss the choices to be made in specifying such a model, advocating the treatment of levels within a given factor as dependent but exchangeable quantities, and we suggest weakly informative prior distributions for higher level parameters that may be appropriate in many situations. We discuss computationally e-cient strategies for posterior sampling using Markov Chain Monte Carlo algorithms, and we emphasize useful graphical summaries based on the posterior distribution of model-based analogues of traditional ANOVA decom- positions of variance. We illustrate this process of model speciflcation, posterior sampling, and graphical posterior summaries in two examples. The flrst consid- ers the efiect of geographic region on the temperature proflles at weather stations in Canada. The second example examines sources of variability in the output of regional climate models from a designed experiment."
]
}
|
1112.4394
|
2953110594
|
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
|
@cite_9 previously showed how mixtures of kernels can be learnt by gradient descent in the Gaussian process framework. They call this . However, their approach learns a mixture over a small, fixed set of kernels, while our method learns a mixture over all possible products of those kernels.
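The sketch below illustrates the generic idea of learning a weighted mixture over a small, fixed set of kernels by maximizing the GP marginal likelihood with a gradient-based optimizer. It is not the specific formulation of @cite_9 ; the base kernels, data, and noise model are illustrative, and the optimizer here relies on numerical gradients.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

def se_kernel(X1, X2, ell):
    x1, x2 = X1[:, 0], X2[:, 0]
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

# A small, fixed dictionary of base kernels (two different lengthscales).
base_kernels = [se_kernel(X, X, 0.5), se_kernel(X, X, 2.0)]

def neg_log_marginal_likelihood(log_params):
    weights, noise = np.exp(log_params[:-1]), np.exp(log_params[-1])
    K = sum(w * Kb for w, Kb in zip(weights, base_kernels)) + noise * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # 0.5*y'K^{-1}y + 0.5*log|K| + (n/2)*log(2*pi), with log|K| = 2*sum(log(diag(L)))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), method="L-BFGS-B")
print("learned mixture weights:", np.exp(res.x[:-1]), "noise:", np.exp(res.x[-1]))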
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"1546257250"
],
"abstract": [
"Multiple kernel learning approaches to multi-view learning [1, 11, 7] have recently become very popular since they can easily combine information from multiple views, e.g., by adding or multiplying kernels. They are particularly effective when the views are class conditionally independent, since the errors committed by each view can be corrected by the other views. Most methods assume that a single set of kernel weights is sufficient for accurate classification, however, one can expect that the set of features important to discriminate between different examples can vary locally. As a result the performance of such global techniques can degrade in the presence of complex noise processes, e.g., heteroscedastic noise, missing data, or when the discriminative properties vary across the input space."
]
}
|
1112.4394
|
2953110594
|
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
|
Bach @cite_10 uses a regularized optimization framework to learn a weighted sum over an exponential number of kernels, which can be computed in polynomial time. The subsets of kernels considered by this method are restricted to be a hull of kernels. In the setting we are considering in this paper, a hull can be defined as a subset of all terms such that if term @math is included in the subset, then so are all terms @math , for all @math . For details, see @cite_10 . Given each dimension's kernel and a pre-defined weighting over all terms, HKL performs model selection by searching over hulls of interaction terms. In @cite_10 , Bach also fixes the relative weighting between orders of interaction with a single term @math , computing the sum over all orders with an expression whose computational complexity is @math . However, this formulation forces the weight of all @math th-order terms to be @math .
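To make this kind of fixed order-weighting concrete, the sketch below numerically checks the standard algebraic identity that such a weighting exploits: if every d-th order interaction term (the d-th elementary symmetric polynomial of the per-dimension kernel values) is weighted by the d-th power of a single scalar alpha, the sum over all orders collapses to a product over dimensions and can be evaluated in time linear in the number of dimensions. The identity is stated here as our own illustration; it is not quoted from @cite_10 .

import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
k = rng.uniform(0.1, 1.0, size=6)      # per-dimension kernel values k_1, ..., k_D
alpha = 0.3
D = len(k)

# Brute force: sum over every non-empty subset S of dimensions, weighted by alpha**|S|.
brute = sum(alpha ** d * sum(np.prod(k[list(S)]) for S in combinations(range(D), d))
            for d in range(1, D + 1))

# Product form: prod_i (1 + alpha * k_i) - 1, an O(D) computation.
product_form = np.prod(1.0 + alpha * k) - 1.0

print(brute, product_form, np.isclose(brute, product_form))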
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1599445879"
],
"abstract": [
"We consider the problem of high-dimensional non-linear variable selection for supervised learning. Our approach is based on performing linear selection among exponentially many appropriately defined positive definite kernels that characterize non-linear interactions between the original variables. To select efficiently from these many kernels, we use the natural hierarchical structure of the problem to extend the multiple kernel learning framework to kernels that can be embedded in a directed acyclic graph; we show that it is then possible to perform kernel selection through a graph-adapted sparsity-inducing norm, in polynomial time in the number of selected kernels. Moreover, we study the consistency of variable selection in high-dimensional settings, showing that under certain assumptions, our regularization framework allows a number of irrelevant variables which is exponential in the number of observations. Our simulations on synthetic datasets and datasets from the UCI repository show state-of-the-art predictive performance for non-linear regression problems."
]
}
|
1112.4394
|
2953110594
|
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
|
Figure contrasts the HKL hull-selection method with the Additive GP hyperparameter-learning method. Neither method dominates the other in flexibility. The main difficulty with the approach of @cite_10 is that hyperparameters are hard to set other than by cross-validation. In contrast, our method optimizes the hyperparameters of each dimension's base kernel, as well as the relative weighting of each order of interaction.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1599445879"
],
"abstract": [
"We consider the problem of high-dimensional non-linear variable selection for supervised learning. Our approach is based on performing linear selection among exponentially many appropriately defined positive definite kernels that characterize non-linear interactions between the original variables. To select efficiently from these many kernels, we use the natural hierarchical structure of the problem to extend the multiple kernel learning framework to kernels that can be embedded in a directed acyclic graph; we show that it is then possible to perform kernel selection through a graph-adapted sparsity-inducing norm, in polynomial time in the number of selected kernels. Moreover, we study the consistency of variable selection in high-dimensional settings, showing that under certain assumptions, our regularization framework allows a number of irrelevant variables which is exponential in the number of observations. Our simulations on synthetic datasets and datasets from the UCI repository show state-of-the-art predictive performance for non-linear regression problems."
]
}
|
1112.4394
|
2953110594
|
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models, and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
|
A closely related procedure from the statistics literature is smoothing-splines ANOVA (SS-ANOVA) @cite_7 . An SS-ANOVA model is estimated as a weighted sum of splines along each dimension, plus a sum of splines over all pairs of dimensions, all triplets, etc., with each individual interaction term having a separate weighting parameter. Because the number of terms to consider grows exponentially in the order of interaction, in practice only terms of first and second order are usually considered. Learning in SS-ANOVA is usually done via penalized maximum likelihood with a fixed sparsity hyperparameter.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2146766088"
],
"abstract": [
"Foreword 1. Background 2. More splines 3. Equivalence and perpendicularity, or, what's so special about splines? 4. Estimating the smoothing parameter 5. 'Confidence intervals' 6. Partial spline models 7. Finite dimensional approximating subspaces 8. Fredholm integral equations of the first kind 9. Further nonlinear generalizations 10. Additive and interaction splines 11. Numerical methods 12. Special topics Bibliography Author index."
]
}
|
1112.4451
|
1829909901
|
While the engineering of operating systems is well understood, their formal structure and properties are not. The latter needs a clear definition of the purpose of an OS and an identification of the core. In this paper I offer definitions of the OS, processes and files, and present a few useful principles. The principles allow us to identify work, like closure and continuation algorithms in programming languages, that is useful for the OS problem. The definitions and principles should yield a symbolic, albeit semiquantitative, framework that encompasses practice. Towards that end I specialise the definitions to describe conventional OSes and identify the core operations for a single-computer OS that can be used to express their algorithms. The assumptions underlying the algorithms offer the design space framework. The paging and segmentation algorithms for conventional OSes are extracted from the framework as a check. Among the insights that emerge is that an OS is a constructive proof of equivalence between models of computation. Clear and useful definitions and principles are the first step towards a fully quantitative structure of an OS.
|
The growing complexity of the OS problem has greatly increased the turnaround time for building experimental systems, and efforts to build such systems have declined. The decade-long K42 effort by Wisniewski et al. @cite_10 is a notable exception. They explore the facets of building a full OS, share their experience and insights, and point out the need to fill the gap between good research values (producing meaningful results beyond microbenchmarks) and actual research practice (the high cost of complete OS research against time-cost constraints). They note that some important practical questions, e.g., the useful lifetime of the GNU Linux or Windows structures, remain unanswered, and offer plausible reasons for the lack of whole-OS research efforts. Whole-OS efforts, e.g., to describe the structure of an OS, are needed in addition to incremental work.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2035144019"
],
"abstract": [
"We started the K42 project more than ten years ago with the ambitious goal of developing an operating system for next-generation hardware that would be widely valued and thus widely used. Based on the premise that current operating systems were not designed to be scalable, customizable, or maintainable, we set forth to rectify that by applying proven techniques from other disciplines to operating systems and by developing additional innovative mechanisms. Now, ten year later, K42 is used by ten or so universities and national labs for research purposes, not ten million information technology departments desiring better everyday computing platforms. As a presentation to the primary operating systems community we provide an examination from two different perspectives as to what went right and what went wrong. First, we concentrate on what technology worked well and why, and what technology failed or caused undue difficulties, and why. Second, based on that experience, we provide our thoughts on the state and direction of the OS community at large. To be clear, this paper is neither a results paper nor an overview paper; we refer to other papers for background material. Rather, it is an exploration by researchers with experience with at least six different previous operating systems of the merit of technologies investigated in K42, and an extrapolation of the implications of that experience to the wider operating system community."
]
}
|
1112.4626
|
2950238675
|
We present a new circular-arc cartogram model in which countries are drawn as polygons with circular arcs instead of straight-line segments. Given a political map and values associated with each country in the map, a cartogram is a distorted map in which the areas of the countries are proportional to the corresponding values. In the circular-arc cartogram model straight-line segments can be replaced by circular arcs in order to modify the areas of the polygons, while the corners of the polygons remain fixed. The countries in circular-arc cartograms have the aesthetically pleasing appearance of clouds or snowflakes, depending on whether their edges are bent outwards or inwards. This makes it easy to determine whether a country has grown or shrunk, just by its overall shape. We show that determining whether a given map and given area-values can be realized as a circular-arc cartogram is an NP-hard problem. Next we describe a heuristic method for constructing circular-arc cartograms, which uses a max-flow computation on the dual graph of the map, along with a computation of the straight skeleton of the underlying polygonal decomposition. Our method is implemented and produces cartograms that, while not yet perfectly accurate, achieve many of the desired areas in our real-world examples.
|
The problem of representing additional information on top of a geographic map dates back to the 19th century, and highly schematized rectangular cartograms can be found in the 1934 work of Raisz @cite_20 . With rectangular cartograms it is not always possible to preserve all country adjacencies and realize all areas accurately @cite_27 @cite_25 . Eppstein studied area-universal rectangular layouts and characterized the class of rectangular layouts for which all area assignments can be achieved with combinatorially equivalent layouts @cite_21 . If the requirement that rectangles are used is relaxed to allow rectilinear regions, then de Berg @cite_22 showed that all adjacencies can be preserved and all areas can be realized with 40-sided regions. In a series of papers the polygon complexity sufficient to realize any rectilinear cartogram was decreased from 40 first to 34 corners @cite_18 , then to 12 corners @cite_23 , to 10 corners @cite_15 , and finally to 8 corners @cite_24 , which is best possible due to the earlier lower bound of 8-sided regions @cite_11 .
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_21",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2009461451",
"2138620696",
"2952129579",
"2010712298",
"2148734466",
"1868235818",
"",
"2148987114",
"2322867406",
"2052977033"
],
"abstract": [
"We consider orthogonal drawings of a plane graph G with specified face areas. For a natural number k, a k-gonal drawing of G is an orthogonal drawing such that the boundary of G is drawn as a rectangle and each inner face is drawn as a polygon with at most k corners whose area is equal to the specified value. In this paper, we show that every slicing graph G with a slicing tree T and a set of specified face areas admits a 10-gonal drawing D such that the boundary of each slicing subgraph that appears in T is also drawn as a polygon with at most 10 corners. Such a drawing D can be found in linear time.",
"Let G=(V,E) be a plane triangulated graph where each vertex is assigned a positive weight. A rectilinear dual of G is a partition of a rectangle into |V| simple rectilinear regions, one for each vertex, such that two regions are adjacent if and only if the corresponding vertices are connected by an edge in E. A rectilinear dual is called a cartogram if the area of each region is equal to the weight of the corresponding vertex. We show that every vertex-weighted plane triangulated graph G admits a cartogram of constant complexity, that is, a cartogram where the number of vertices of each region is constant. Furthermore, such a rectilinear cartogram can be constructed in O(nlogn) time where n=|V|.",
"A rectangular layout is a partition of a rectangle into a finite set of interior-disjoint rectangles. Rectangular layouts appear in various applications: as rectangular cartograms in cartography, as floorplans in building architecture and VLSI design, and as graph drawings. Often areas are associated with the rectangles of a rectangular layout and it might hence be desirable if one rectangular layout can represent several area assignments. A layout is area-universal if any assignment of areas to rectangles can be realized by a combinatorially equivalent rectangular layout. We identify a simple necessary and sufficient condition for a rectangular layout to be area-universal: a rectangular layout is area-universal if and only if it is one-sided. More generally, given any rectangular layout L and any assignment of areas to its regions, we show that there can be at most one layout (up to horizontal and vertical scaling) which is combinatorially equivalent to L and achieves a given area assignment. We also investigate similar questions for perimeter assignments. The adjacency requirements for the rectangles of a rectangular layout can be specified in various ways, most commonly via the dual graph of the layout. We show how to find an area-universal layout for a given set of adjacency requirements whenever such a layout exists.",
"In a rectilinear dual of a planar graph vertices are represented by simple rectilinear polygons, while edges are represented by side-contact between the corresponding polygons. A rectilinear dual is called a cartogram if the area of each region is equal to a pre-specified weight. The complexity of a cartogram is determined by the maximum number of corners (or sides) required for any polygon. In a series of papers the polygonal complexity of such representations for maximal planar graphs has been reduced from the initial 40 to 34, then to 12 and very recently to the currently best known 10. Here we describe a construction with 8-sided polygons, which is optimal in terms of polygonal complexity as 8-sided polygons are sometimes necessary. Specifically, we show how to compute the combinatorial structure and how to refine it into an area-universal rectangular layout in linear time. The exact cartogram can be computed from the area-universal layout with numerical iteration, or can be approximated with a hill-climbing heuristic. We also describe an alternative construction of cartograms for Hamiltonian maximal planar graphs, which allows us to directly compute the cartograms in linear time. Moreover, we prove that even for Hamiltonian graphs 8-sided rectilinear polygons are necessary, by constructing a non-trivial lower bound example. The complexity of the cartograms can be reduced to 6 if the Hamiltonian path has the extra property that it is one-legged, as in outer-planar graphs. Thus, we have optimal representations (in terms of both polygonal complexity and running time) for Hamiltonian maximal planar and maximal outer-planar graphs.",
"In many application domains, data is collected and referenced by its geospatial location. Nowadays, different kinds of maps are used to emphasize the spatial distribution of one or more geospatial attributes. The nature of geospatial statistical data is the highly nonuniform distribution in the real world data sets. This has several impacts on the resulting map visualizations. Classical area maps tend to highlight patterns in large areas, which may, however, be of low importance. Cartographers and geographers used cartograms or value-by-area maps to address this problem long before computers were available. Although many automatic techniques have been developed, most of the value-by-area cartograms are generated manually via human interaction. In this paper, we propose a novel visualization technique for geospatial data sets called RecMap. Our technique approximates a rectangular partition of the (rectangular) display area into a number of map regions preserving important geospatial constraints. It is a fully automatic technique with explicit user control over all exploration constraints within the exploration process. Experiments show that our technique produces visualizations of geospatial data sets, which enhance the discovery of global and local correlations, and demonstrate its performance in a variety of applications",
"We give an algorithm to create orthogonal drawings of 3- connected 3-regular planar graphs such that each interior face of the graph is drawn with a prescribed area. This algorithm produces a drawing with at most 12 corners per face and 4 bends per edge, which improves the previous known result of 34 corners per face.",
"",
"A rectangular cartogram is a type of map where every region is a rectangle. The size of the rectangles is chosen such that their areas represent a geographic variable (e.g., population). Good rectangular cartograms are hard to generate: The area specifications for each rectangle may make it impossible to realize correct adjacencies between the regions and so hamper the intuitive understanding of the map. We present the first algorithms for rectangular cartogram construction. Our algorithms depend on a precise formalization of region adjacencies and build upon existing VLSI layout algorithms. Furthermore, we characterize a non-trivial class of rectangular subdivisions for which exact cartograms can be computed efficiently. An implementation of our algorithms and various tests show that in practice, visually pleasing rectangular cartograms with small cartographic error can be generated effectively.",
"T HE idea of the statistical cartogram occurred to the author when he had occasion to prepare maps of the United States showing the distribution of various economic units, such as steel factories, textile mills, power plants, banks, etc. These maps were far too crowded in the northeast to be useful, while elsewhere, for the most part, they were relatively empty. If a way could be found to increase the scale of the northeastern region and reduce that of the west, distribution could be shown more clearly. Simple distortion of the map would be misleading, but, if we go a step farther, discard altogether the outlines of the country, and give each region a rectangular form of size proportional to the value represented, we arrive at the rectangular statistical cartogram. For purposes of comparison it is essential that a definite system of construction should be followed and identical arrangement should be used whatever values are represented. The system here used starts always with the larger divisions and by \"proportionate halving\" arrives at the smaller ones. It should be emphasized that the statistical cartogram is not a map. Although it has roughly the proportions of the country and retains as far as possible the relative locations of the various regions, the cartogram is purely a geometrical design to visualize certain statistical facts and to work out certain problems of distribution. Examples of these cartograms are given in the accompanying figures. The division into regions follows the usage of the United States Census Bureau, because only from this source are data available. If natural geographic regions could be used instead, the cartograms would be still more instructive.",
"Given a planar triangulated graph (PTG) G, the problem of constructing a floor-plan F such that G is the dual of F and the boundary of F is rectangular is studied. It is shown that if only zero-concave rectilinear modules (CRM) (or rectangular modules) and 1-CRM (i.e., L-shaped) are allowed, there are PTGs that do not admit any floor-plan. However, if 2-bend modules (e.g., T-shaped and Z-shaped) are also allowed, then every biconnected PTG admits a floor-plan. Thus, the employment of 2-bend modules is necessary and sufficient for graph dualization floor-planning. A linear-time algorithm for constructing a 2-CRM floor-plan of an arbitrary PTG is proposed."
]
}
|
1112.4626
|
2950238675
|
We present a new circular-arc cartogram model in which countries are drawn as polygons with circular arcs instead of straight-line segments. Given a political map and values associated with each country in the map, a cartogram is a distorted map in which the areas of the countries are proportional to the corresponding values. In the circular-arc cartogram model straight-line segments can be replaced by circular arcs in order to modify the areas of the polygons, while the corners of the polygons remain fixed. The countries in circular-arc cartograms have the aesthetically pleasing appearance of clouds or snowflakes, depending on whether their edges are bent outwards or inwards. This makes it easy to determine whether a country has grown or shrunk, just by its overall shape. We show that determining whether a given map and given area-values can be realized as a circular-arc cartogram is an NP-hard problem. Next we describe a heuristic method for constructing circular-arc cartograms, which uses a max-flow computation on the dual graph of the map, along with a computation of the straight skeleton of the underlying polygonal decomposition. Our method is implemented and produces cartograms that, while not yet perfectly accurate, achieve many of the desired areas in our real-world examples.
|
More general cartograms, without restrictions to rectangular or rectilinear shapes, have also been studied. Dougenik et al. introduced a method based on force fields, in which the map is divided into cells and every cell exerts a force related to its data value on the other cells @cite_17 . Dorling used a cellular automaton approach, where regions exchange cells until an equilibrium is reached, i.e., each region has attained the desired number of cells @cite_8 . This technique can result in significant distortions, thereby reducing readability and recognizability. Keim defined a distance between the original map and the cartogram with a metric based on Fourier transforms, and then used a scan-line algorithm to reposition the edges so as to optimize the metric @cite_1 . Edelsbrunner and Waupotitsch generated cartograms using a sequence of homeomorphic deformations and measured the quality with local distance distortion metrics @cite_13 . Kocmoud and House @cite_0 described a technique that combines the cell-based approach of Dorling @cite_8 with the homeomorphic deformations of Edelsbrunner and Waupotitsch @cite_13 .
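To make the force-field idea of @cite_17 concrete, the following is a minimal sketch of one displacement iteration (Python; the region representation, force scaling, and toy data are illustrative assumptions rather than the published algorithm):

```python
import math

def force_field_step(regions, vertices):
    """One simplified force-field iteration: each region's centroid pushes or
    pulls every map vertex in proportion to its area error, with influence
    decaying with distance.  `regions` maps a name to ((cx, cy), current_area,
    desired_area); `vertices` is a list of (x, y) points shared by the map."""
    new_vertices = []
    for (x, y) in vertices:
        dx = dy = 0.0
        for (cx, cy), area, desired in regions.values():
            radius = math.sqrt(area / math.pi)
            error = (desired - area) / max(area, 1e-9)   # >0 inflates, <0 deflates
            dist = math.hypot(x - cx, y - cy) + 1e-9
            strength = error * radius / dist             # influence decays with distance
            dx += strength * (x - cx) / dist
            dy += strength * (y - cy) / dist
        new_vertices.append((x + dx, y + dy))
    return new_vertices

# Toy usage: two unit-square "countries"; A should double, B should halve.
regions = {"A": ((0.5, 0.5), 1.0, 2.0), "B": ((1.5, 0.5), 1.0, 0.5)}
vertices = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
print(force_field_step(regions, vertices))
```

Repeating such iterations, with the actual polygon areas recomputed between steps, is the essence of the force-based family of cartogram algorithms.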
|
{
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_13",
"@cite_17"
],
"mid": [
"1507189098",
"2151045902",
"2007701319",
"2199159668",
"2071807189"
],
"abstract": [
"",
"Cartograms are a well-known technique for showing geography-related statistical information, such as population demographics and epidemiological data. The basic idea is to distort a map by resizing its regions according to a statistical parameter, but in a way that keeps the map recognizable. We formally define a family of cartogram drawing problems. We show that even simple variants are unsolvable in the general case. Because the feasible variants are NP-complete, heuristics are needed to solve the problem. Previously proposed solutions suffer from problems with the quality of the generated drawings. For a cartogram to be recognizable, it is important to preserve the global shape or outline of the input map, a requirement that has been overlooked in the past. To address this, our objective function for cartogram drawing includes both global and local shape preservation. To measure the degree of shape preservation, we propose a shape similarity function, which is based on a Fourier transformation of the polygons' curvatures. Also, our application is visualization of dynamic data, for which we need an algorithm that recalculates a cartogram in a few seconds. None of the previous algorithms provides adequate performance with an acceptable level of quality for this application. We therefore propose an efficient iterative scanline algorithm to reposition edges while preserving local and global shapes. Scanlines may be generated automatically or entered interactively to guide the optimization process more closely. We apply our algorithm to several example data sets and provide a detailed comparison of the two variants of our algorithm and previous approaches.",
"Area cartograms are used for visualizing geographically distributed data by attaching measurements to regions of a map and scaling the regions such that their areas are proportional to the measured quantities. A continuous area cartogram is a cartogram that is constructed without changing the underlying map topology. We present a new algorithm for the construction of continuous area cartograms that was developed by viewing their construction as a constrained optimization problem. The algorithm uses a relaxation method that exploits hierarchical resolution, constrained dynamics, and a scheme that alternates goals of achieving correct region areas and adjusting region shapes. It is compared favorably to existing methods in its ability to preserve region shape recognition cues, while still achieving high accuracy.",
"Abstract A homeomorphism from R 2 to itself distorts metric quantities, such as distance and area. We describe an algorithm that constructs homeomorphisms with prescribed area distortion. Such homeomorphisms can be used to generate cartograms, which are geographic maps purposely distorted so their area distributions reflects a variable different from area, as for example population density. The algorithm generates the homeomorphism through a sequence of local piecewise linear homeomorphic changes. Sample results produced by the preliminary implementation of the method are included.",
"Continuous area cartograms distort planimetric maps to produce a desired set of areas while preserving the topology of the original map. We present a computer algorithm which achieves the result iteratively with high accuracy. The approach uses a model of forces exerted from each polygon centroid, acting on coordinates in inverse proportion to distance. This algorithm can handle more realistic descriptions of polygon boundaries than previous algorithms and manual methods, thus enhancing visual recognition."
]
}
|
1112.4626
|
2950238675
|
We present a new circular-arc cartogram model in which countries are drawn as polygons with circular arcs instead of straight-line segments. Given a political map and values associated with each country in the map, a cartogram is a distorted map in which the areas of the countries are proportional to the corresponding values. In the circular-arc cartogram model straight-line segments can be replaced by circular arcs in order to modify the areas of the polygons, while the corners of the polygons remain fixed. The countries in circular-arc cartograms have the aesthetically pleasing appearance of clouds or snowflakes, depending on whether their edges are bent outwards or inwards. This makes it easy to determine whether a country has grown or shrunk, just by its overall shape. We show that determining whether a given map and given area-values can be realized as a circular-arc cartogram is an NP-hard problem. Next we describe a heuristic method for constructing circular-arc cartograms, which uses a max-flow computation on the dual graph of the map, along with a computation of the straight skeleton of the underlying polygonal decomposition. Our method is implemented and produces cartograms that, while not yet perfectly accurate, achieve many of the desired areas in our real-world examples.
|
A popular method by Gastner and Newman @cite_6 projects the original map onto a distorted grid, calculated so that cell areas match the pre-defined values. This method relies on a physical model in which the desired areas are achieved via an iterative diffusion process. Flow moves from one country to another until a balanced distribution is reached, i.e., the density is the same everywhere. The cartograms produced this way are mostly readable and have no cartographic error. However, some countries may be deformed into shapes very different from those in the original map, and the complexity of the polygons can increase significantly.
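The diffusion mechanism of @cite_6 can be illustrated with a rough sketch (Python; the regular grid, explicit time stepping, periodic boundaries, and fixed iteration count are simplifying assumptions, not the authors' implementation):

```python
import numpy as np

def diffuse_density(density, steps=200, dt=0.1):
    """Crude density-equalizing diffusion on a 2-D grid: density flows from
    dense to sparse cells, and each cell accumulates the displacement a map
    point sitting there would experience while being carried by that flow."""
    rho = density.astype(float).copy()
    disp_x = np.zeros_like(rho)
    disp_y = np.zeros_like(rho)
    for _ in range(steps):
        gy, gx = np.gradient(rho)                 # gradients along rows (y) and columns (x)
        vx = -gx / np.maximum(rho, 1e-9)          # velocity of the diffusing "population"
        vy = -gy / np.maximum(rho, 1e-9)
        disp_x += vx * dt
        disp_y += vy * dt
        lap = (np.roll(rho, 1, 0) + np.roll(rho, -1, 0) +
               np.roll(rho, 1, 1) + np.roll(rho, -1, 1) - 4 * rho)
        rho += dt * lap                           # explicit diffusion step
    return disp_x, disp_y

# Toy usage: a dense block in one corner of an otherwise uniform grid.
density = np.ones((20, 20))
density[:5, :5] = 5.0
dx, dy = diffuse_density(density)
print(dx[2, 2], dy[2, 2])   # points near the dense block get pushed outward
```

Map boundary points are then carried along the accumulated displacement field, so dense regions expand and sparse regions contract while the overall layout stays recognizable.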
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2105795805"
],
"abstract": [
"Map makers have for many years searched for a way to construct cartograms, maps in which the sizes of geographic regions such as countries or provinces appear in proportion to their population or some other analogous property. Such maps are invaluable for the representation of census results, election returns, disease incidence, and many other kinds of human data. Unfortunately, to scale regions and still have them fit together, one is normally forced to distort the regions' shapes, potentially resulting in maps that are difficult to read. Many methods for making cartograms have been proposed, some of them are extremely complex, but all suffer either from this lack of readability or from other pathologies, like overlapping regions or strong dependence on the choice of coordinate axes. Here, we present a technique based on ideas borrowed from elementary physics that suffers none of these drawbacks. Our method is conceptually simple and produces useful, elegant, and easily readable maps. We illustrate the method with applications to the results of the 2000 U.S. presidential election, lung cancer cases in the State of New York, and the geographical distribution of stories appearing in the news."
]
}
|
1112.4090
|
1693479002
|
This paper considers a state dependent broadcast channel with one transmitter, Alice, and two receivers, Bob and Eve. The problem is to effectively convey ("amplify") the channel state sequence to Bob while "masking" it from Eve. The extent to which the state sequence cannot be masked from Eve is referred to as leakage. This can be viewed as a secrecy problem, where we desire that the channel state itself be minimally leaked to Eve while being communicated to Bob. The paper is aimed at characterizing the trade-off region between amplification and leakage rates for such a system. An achievable coding scheme is presented, wherein the transmitter transmits partial state information over the channel to facilitate the amplification process. For the case when Bob observes a stronger signal than Eve, the achievable coding scheme is enhanced with secure refinement. Outer bounds on the trade-off region are also derived, and used in characterizing some special case results. In particular, the optimal amplification-leakage rate difference, called the differential amplification capacity, is characterized for the reversely degraded discrete memoryless channel, the degraded binary, and the degraded Gaussian channels. In addition, for the degraded Gaussian model, the extremal corner points of the trade-off region are characterized, and the gap between the outer bound and achievable rate-regions is shown to be less than half a bit for a wide set of channel parameters.
|
On the other hand, the problems of state amplification and state masking have been solved individually for point-to-point channels @cite_36 @cite_10 @cite_32 . The works @cite_36 @cite_10 and @cite_32 consider the problem of reliable message transmission in addition to state amplification and state masking, respectively. In this paper, we consider the problem of amplifying the state to a desired receiver while trying to minimize the leakage (or mask the state) to the eavesdropper.
|
{
"cite_N": [
"@cite_36",
"@cite_10",
"@cite_32"
],
"mid": [
"2050804383",
"",
"2151533263"
],
"abstract": [
"We formulate a problem of state information transmission over a state-dependent channel with states known at the transmitter. In particular, we solve a problem of minimizing the mean-squared channel state estimation error E spl par S sup n - S spl circ sup n spl par for a state-dependent additive Gaussian channel Y sup n = X sup n + S sup n + Z sup n with an independent and identically distributed (i.i.d.) Gaussian state sequence S sup n = (S sub 1 , ..., S sub n ) known at the transmitter and an unknown i.i.d. additive Gaussian noise Z sup n . We show that a simple technique of direct state amplification (i.e., X sup n = spl alpha S sup n ), where the transmitter uses its entire power budget to amplify the channel state, yields the minimum mean-squared state estimation error. This same channel can also be used to send additional independent information at the expense of a higher channel state estimation error. We characterize the optimal tradeoff between the rate R of the independent information that can be reliably transmitted and the mean-squared state estimation error D. We show that any optimal (R, D) tradeoff pair can be achieved via a simple power-sharing technique, whereby the transmitter power is appropriately allocated between pure information transmission and state amplification.",
"",
"We consider the problem of rate-R, channel coding with causal noncausal side information at the transmitter, under an additional requirement of minimizing the amount of information that can be learned from the channel output about the state sequence, which is defined in terms of the mutual information between the state sequence and the channel output sequence. A single-letter characterization is provided for the achievable region of pairs (R, E) . Explicit results for the Gaussian case (Costa's dirty-paper channel) are derived in full detail."
]
}
|
1112.3880
|
2295705918
|
One of the key problems in migrating multi-component enterprise applications to Clouds is selecting the best mix of VM images and Cloud infrastructure services. A migration process has to ensure that Quality of Service (QoS) requirements are met, while satisfying conflicting selection criteria, e.g. throughput and cost. When selecting Cloud services, application engineers must consider heterogeneous sets of criteria and complex dependencies across multiple layers, which are impossible to resolve manually. To overcome this challenge, we present the generic recommender framework CloudGenius and an implementation that leverages the well-known multi-criteria decision-making technique Analytic Hierarchy Process to automate the selection process based on a model, factors, and QoS requirements related to enterprise applications. In particular, we introduce a structured migration process for multi-component enterprise applications, clearly identify the most important criteria relevant to the selection problem and present a multi-criteria-based selection algorithm. Experiments with the software prototype CumulusGenius show time complexities.
|
Multiple approaches have been introduced by the Web service community that define multi-component Web services @cite_24 @cite_0 , but they do not address the characteristics of the Cloud. Existing work in the Cloud context provides provider or service evaluation methods but lacks multi-component support @cite_20 . Several approaches for multi-component setups in the Cloud have applied optimization @cite_9 @cite_14 @cite_6 @cite_18 and performance measurement techniques @cite_1 to select hardware resources (provider side) or Cloud infrastructure services (client side) according to quantitative criteria (throughput, availability, cost, reputation, etc.). In doing so, they have largely ignored the need for VM images and for a migration process with transparent decision support and adaptability to custom criteria, and hence they lack flexibility.
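Since CloudGenius builds on the Analytic Hierarchy Process for weighting such criteria, a minimal sketch of the AHP weighting step may help (Python; the criteria, pairwise judgments, and power-iteration eigenvector approximation are illustrative assumptions, not CloudGenius itself):

```python
import numpy as np

def ahp_weights(pairwise, iterations=100):
    """Approximate AHP priority weights as the principal eigenvector of a
    pairwise-comparison matrix, computed by simple power iteration."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iterations):
        w = A @ w
        w /= w.sum()
    return w

# Toy usage: cost judged 3x as important as throughput and 5x as important as
# reputation; throughput judged 2x as important as reputation.
criteria = ["cost", "throughput", "reputation"]
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
print(dict(zip(criteria, ahp_weights(pairwise).round(3))))
```

The resulting weights can then be combined with per-criterion scores of candidate VM images and infrastructure services to rank the alternatives.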
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_20"
],
"mid": [
"2119326913",
"2082201484",
"1513336745",
"2121884932",
"2158871468",
"1962632332",
"205327835",
"2075708947"
],
"abstract": [
"In this paper, we tackle challenges in migrating enterprise services into hybrid cloud-based deployments, where enterprise operations are partly hosted on-premise and partly in the cloud. Such hybrid architectures enable enterprises to benefit from cloud-based architectures, while honoring application performance requirements, and privacy restrictions on what services may be migrated to the cloud. We make several contributions. First, we highlight the complexity inherent in enterprise applications today in terms of their multi-tiered nature, large number of application components, and interdependencies. Second, we have developed a model to explore the benefits of a hybrid migration approach. Our model takes into account enterprise-specific constraints, cost savings, and increased transaction delays and wide-area communication costs that may result from the migration. Evaluations based on real enterprise applications and Azure-based cloud deployments show the benefits of a hybrid migration approach, and the importance of planning which components to migrate. Third, we shed insight on security policies associated with enterprise applications in data centers. We articulate the importance of ensuring assurable reconfiguration of security policies as enterprise applications are migrated to the cloud. We present algorithms to achieve this goal, and demonstrate their efficacy on realistic migration scenarios.",
"This paper focuses on service deployment optimization in cloud computing environments. In a cloud, an application is assumed to consist of multiple services. Each service in an application can be deployed as one or more service instances. Different service instances operate at different quality of service (QoS) levels depending on the amount of computing resources assigned to them. In order to satisfy given performance requirements, i.e. service level agreements (SLAs), each application is required to optimize its deployment configuration such as the number of service instances, the amount of computing resources to assign and the locations of service instances. Since this problem is NP-hard and often faces trade-offs among conflicting QoS objectives in SLAs, existing optimization methods often fail to solve it. mathrmE3-R is a multiobjective genetic algorithm that seeks a set of Pareto-optimal deployment configurations that satisfy SLAs and exhibit the trade-offs among conflicting QoS objectives. By leveraging queueing theory, E3-R estimates the performance of an application and aids defining SLAs in a probabilistic manner. Moreover, E3-R automatically reduces the number of QoS objectives and improves the quality of solutions further. Experimental studies demonstrate that E3-R efficiently obtains quality deployment configurations that satisfy given SLAs. Copyright © 2011 John Wiley & Sons, Ltd.",
"Services in cloud computing can be categorized into two groups: Application services and Utility Computing Services. Compositions in the application level are similar to the Web service compositions in SOC (Service-Oriented Computing). Compositions in the utility level are similar to the task matching and scheduling in grid computing. Contributions of this paper include: 1) An extensible QoS model is proposed to calculate the QoS values of services in cloud computing. 2) A genetic-algorithm-based approach is proposed to compose services in cloud computing. 3) A comparison is presented between the proposed approach and other algorithms, i.e., exhaustive search algorithms and random selection algorithms.",
"While many public cloud providers offer pay-as-you-go computing, their varying approaches to infrastructure, virtualization, and software services lead to a problem of plenty. To help customers pick a cloud that fits their needs, we develop CloudCmp, a systematic comparator of the performance and cost of cloud providers. CloudCmp measures the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. CloudCmp strives to ensure fairness, representativeness, and compliance of these measurements while limiting measurement cost. Applying CloudCmp to four cloud providers that together account for most of the cloud customers today, we find that their offered services vary widely in performance and costs, underscoring the need for thoughtful provider selection. From case studies on three representative cloud applications, we show that CloudCmp can guide customers in selecting the best-performing provider for their applications.",
"With increasing demand for computing and memory, distributed computing systems have attracted a lot of attention. Resource allocation is one of the most important challenges in the distributed systems specially when the clients have Service Level Agreements (SLAs) and the total profit in the system depends on how the system can meet these SLAs. In this paper, an SLA-based resource allocation problem for multi-tier applications in the cloud computing is considered. An upper bound on the total profit is provided and an algorithm based on force-directed search is proposed to solve the problem. The processing, memory requirement, and communication resources are considered as three dimensions in which optimization is performed. Simulation results demonstrate the effectiveness of the proposed heuristic algorithm.",
"The Internet is going through several major changes. It has become a vehicle of Web services rather than just a repository of information. Many organizations are putting their core business competencies on the Internet as a collection of Web services. An important challenge is to integrate them to create new value-added Web services in ways that could never be foreseen forming what is known as Business-to-Business (B2B) services. Therefore, there is a need for modeling techniques and tools for reliable Web service composition. In this paper, we propose a Petri net-based algebra, used to model control flows, as a necessary constituent of reliable Web service composition process. This algebra is expressive enough to capture the semantics of complex Web service combinations.",
"In this paper, a model based colored Petri net (CPN) to provide semantic support for web service composition is proposed, and the reliability and maintainability of composite services are improved. The composite constructs in the model are sequence, concurrent, choice, loop and replace. The web service is formally defined by a CPN. A closed composing algebra is defined to obtain a framework which enables declarative composition of web services. Availability, confidentiality, and integrity of composite service are analyzed within the framework of the model based CPN.",
"Cloud computing promises to provide high performance, on-demand services in a flexible and affordable manner, it offers the benefits of fast and easy deployment, scalability and service oriented architecture. It promises substantial cost reduction together with increased flexibility than the traditional IT operation. Cloud service providers typically come with various levels of services and performance characteristics. In addition, there are different types of user applications with specific requirements such as availability, security and computational power. Currently, there are no standard ranking and classification services for the users to select the appropriate providers to fit their application requirements. Determining the best cloud computing service for a specific application is a challenge and often determines the success of the underlying business of the service consumers. In this paper, we propose a set of cloud computing specific performance and quality of service (QoS) attributes, an information collection mechanism and the analytic algorithm based on Singular Value Decomposition Technique (SVD) to determine the best service provider for a user application with a specific set of requirements. This technique provides an automatic best-fit procedure which does not require a formal knowledge model."
]
}
|
1112.3880
|
2295705918
|
One of the key problems in migrating multi-component enterprise applications to Clouds is selecting the best mix of VM images and Cloud infrastructure services. A migration process has to ensure that Quality of Service (QoS) requirements are met, while satisfying conflicting selection criteria, e.g. throughput and cost. When selecting Cloud services, application engineers must consider heterogeneous sets of criteria and complex dependencies across multiple layers, which are impossible to resolve manually. To overcome this challenge, we present the generic recommender framework CloudGenius and an implementation that leverages the well-known multi-criteria decision-making technique Analytic Hierarchy Process to automate the selection process based on a model, factors, and QoS requirements related to enterprise applications. In particular, we introduce a structured migration process for multi-component enterprise applications, clearly identify the most important criteria relevant to the selection problem and present a multi-criteria-based selection algorithm. Experiments with the software prototype CumulusGenius show time complexities.
|
Additionally, there is preliminary work that provides decision support for selecting VM images and infrastructure services. The authors of @cite_7 propose an approach that selects Cloud VM images and Cloud infrastructure services with an ontology-based requirements check, but it lacks a service evaluation. Khajeh-Hosseini et al. @cite_10 @cite_12 developed the Cloud Adoption Toolkit, which offers high-level decision support for the migration of enterprise IT systems. The focus of the decision support is on risk management and on a cost model that incorporates the expected workload on the IT system.
|
{
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_7"
],
"mid": [
"1886606525",
"2141401843",
"2138938054"
],
"abstract": [
"Cloud computing promises a radical shift in the provisioning of computing resources within the enterprise. This paper describes the challenges that decision makers face when assessing the feasibility of the adoption of cloud computing in their organizations, and describes our Cloud Adoption Toolkit, which has been developed to support this process. The toolkit provides a framework to support decision makers in identifying their concerns, and matching these concerns to appropriate tools techniques that can be used to address them. Cost Modeling is the most mature tool in the toolkit, and this paper shows its effectiveness by demonstrating how practitioners can use it to examine the costs of deploying their IT systems on the cloud. The Cost Modeling tool is evaluated using a case study of an organization that is considering the migration of some of its IT systems to the cloud. The case study shows that running systems on the cloud using a traditional ‘always on’ approach can be less cost effective, and the elastic nature of the cloud has to be used to reduce costs. Therefore, decision makers have to model the variations in resource usage and their systems' deployment options to obtain accurate cost estimates. Copyright © 2011 John Wiley & Sons, Ltd.",
"This paper describes two tools that aim to support decision making during the migration of IT systems to the cloud. The first is a modeling tool that produces cost estimates of using public IaaS clouds. The tool enables IT architects to model their applications, data and infrastructure requirements in addition to their computational resource usage patterns. The tool can be used to compare the cost of different cloud providers, deployment options and usage scenarios. The second tool is a spreadsheet that outlines the benefits and risks of using IaaS clouds from an enterprise perspective, this tool provides a starting point for risk assessment. Two case studies were used to evaluate the tools. The tools were useful as they informed decision makers about the costs, benefits and risks of using the cloud.",
"Cloud computing is a computing paradigm which allows access of computing elements and storages on-demand over the Internet. Virtual Appliances, pre-configured, ready-to-run applications are emerging as a breakthrough technology to solve the complexities of service deployment on Cloud infrastructure. However, an automated approach to deploy required appliances on the most suitable Cloud infrastructure is neglected by previous works which is the focus of this work. In this paper, we propose an effective architecture using ontology-based discovery to provide QoS aware deployment of appliances on Cloud service providers. In addition, we test our approach on a case study and the result shows the efficiency and effectiveness of the proposed work."
]
}
|
1112.3506
|
2951825064
|
We study the boundary of tractability for the Max-Cut problem in graphs. Our main result shows that Max-Cut above the Edwards-Erdős bound is fixed-parameter tractable: we give an algorithm that for any connected graph with n vertices and m edges finds a cut of size m/2 + (n-1)/4 + k in time 2^O(k) n^4, or decides that no such cut exists. This answers a long-standing open question from parameterized complexity that has been posed several times over the past 15 years. Our algorithm is asymptotically optimal, under the Exponential Time Hypothesis, and is strengthened by a polynomial-time computable kernel of polynomial size.
|
For variants of Max-Cut, the "boundary of tractability" above guaranteed values has also been investigated in the setting of parameterized complexity. For instance, in Max-Bisection we seek a cut such that the numbers of vertices on the two sides of the bipartition differ by at most one; here the tight lower bound on the bisection size is only @math . Fixed-parameter tractability of Max-Bisection above @math was recently shown by Gutin and Yeo @cite_20 and by Mnich and Zenklusen @cite_8 .
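To make the parameterization concrete, here is a minimal brute-force check of whether a graph has a bisection of size at least ceil(m/2) + k, the guarantee stated in @cite_8 (Python; exponential in the number of vertices and intended only as an illustration, not the fixed-parameter algorithms of @cite_20 @cite_8):

```python
from itertools import combinations
from math import ceil

def has_large_bisection(n, edges, k):
    """Return True if the graph on vertices 0..n-1 has a bisection (parts whose
    sizes differ by at most one) cutting at least ceil(m/2) + k edges."""
    m = len(edges)
    target = ceil(m / 2) + k
    for part in combinations(range(n), n // 2):
        side = set(part)
        cut = sum(1 for u, v in edges if (u in side) != (v in side))
        if cut >= target:
            return True
    return False

# Toy usage: a 4-cycle has m = 4 edges, and splitting it into opposite pairs
# cuts all 4 of them, i.e. ceil(4/2) + 2, so this returns True for k = 2.
print(has_large_bisection(4, [(0, 1), (1, 2), (2, 3), (3, 0)], k=2))
```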
|
{
"cite_N": [
"@cite_20",
"@cite_8"
],
"mid": [
"1999270628",
"92894981"
],
"abstract": [
"In a graph G=(V,E), a bisection (X,Y) is a partition of V into sets X and Y such that |X|=<|Y|=<|X|+1. The size of (X,Y) is the number of edges between X and Y. In the Max Bisection problem we are given a graph G=(V,E) and are required to find a bisection of maximum size. It is not hard to see that @?|E| 2@? is a tight lower bound on the maximum size of a bisection of G. We study parameterized complexity of the following parameterized problem called Max Bisection above Tight Lower Bound (Max-Bisec-ATLB): decide whether a graph G=(V,E) has a bisection of size at least @?|E| 2@?+k, where k is the parameter. We show that this parameterized problem has a kernel with O(k^2) vertices and O(k^3) edges, i.e., every instance of Max-Bisec-ATLB is equivalent to an instance of Max-Bisec-ATLB on a graph with at most O(k^2) vertices and O(k^3) edges.",
"A bisection of a graph is a bipartition of its vertex set in which the number of vertices in the two parts differ by at most one, and the size of the bisection is the number of edges which go across the two parts. Every graph with m edges has a bisection of size at least ⌈m 2 ⌉, and this bound is sharp for infinitely many graphs. Therefore, Gutin and Yeo considered the parameterized complexity of deciding whether an input graph with m edges has a bisection of size at least ⌈m 2 ⌉+k, where k is the parameter. They showed fixed-parameter tractability of this problem, and gave a kernel with O(k2) vertices. Here, we improve the kernel size to O(k) vertices. Under the Exponential Time Hypothesis, this result is best possible up to constant factors."
]
}
|
1112.3265
|
2100960788
|
The effects of social influence and homophily suggest that both network structure and node attribute information should inform the tasks of link prediction and node attribute inference. Recently, [28, 29] proposed Social-Attribute Network (SAN), an attribute-augmented social network, to integrate network structure and node attributes to perform both link prediction and attribute inference. They focused on generalizing the random walk with restart algorithm to the SAN framework and showed improved performance. In this paper, we extend the SAN framework with several leading supervised and unsupervised link prediction algorithms and demonstrate performance improvement for each algorithm on both link prediction and attribute inference. Moreover, we make the novel observation that attribute inference can help inform link prediction, i.e., link prediction accuracy is further improved by first inferring missing attributes. We comprehensively evaluate these algorithms and compare them with other existing algorithms using a novel, large-scale Google+ dataset, which we make publicly available.
|
A wide range of link prediction methods have been developed. Liben-Nowell and Kleinberg @cite_2 surveyed a set of unsupervised link prediction algorithms. Li et al. @cite_9 proposed link prediction based on the Maximal Entropy Random Walk (MERW). The authors of @cite_18 proposed the PropFlow algorithm, which is similar to random walk with restart (RWwR) but more localized. However, none of these approaches leverages node attribute information.
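For reference, the random-walk-with-restart scoring that several of these methods build on can be sketched as follows (Python; the adjacency-matrix input, restart probability, and fixed iteration count are illustrative choices, not any particular cited implementation):

```python
import numpy as np

def rwr_scores(adj, source, restart=0.15, iterations=100):
    """Random walk with restart from `source`: the stationary visiting
    probabilities serve as link-prediction scores for the source's
    non-neighbours.  Assumes every node has at least one edge."""
    A = np.asarray(adj, dtype=float)
    P = A / np.maximum(A.sum(axis=0), 1e-12)      # column-stochastic transitions
    e = np.zeros(A.shape[0])
    e[source] = 1.0
    p = e.copy()
    for _ in range(iterations):
        p = (1 - restart) * (P @ p) + restart * e
    return p

# Toy usage on the path 0-1-2-3: node 0 scores higher for the two-hop
# neighbour 2 than for the more distant node 3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
print(rwr_scores(adj, source=0))
```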
|
{
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_2"
],
"mid": [
"2136116685",
"2003707464",
"2420733993"
],
"abstract": [
"Link prediction is a fundamental problem in social network analysis. The key technique in unsupervised link prediction is to find an appropriate similarity measure between nodes of a network. A class of wildly used similarity measures are based on random walk on graph. The traditional random walk (TRW) considers the link structures by treating all nodes in a network equivalently, and ignores the centrality of nodes of a network. However, in many real networks, nodes of a network not only prefer to link to the similar node, but also prefer to link to the central nodes of the network. To address this issue, we use maximal entropy random walk (MERW) for link prediction, which incorporates the centrality of nodes of the network. First, we study certain important properties of MERW on graph @math by constructing an eigen-weighted graph G. We show that the transition matrix and stationary distribution of MERW on G are identical to the ones of TRW on G. Based on G, we further give the maximal entropy graph Laplacians, and show how to fast compute the hitting time and commute time of MERW. Second, we propose four new graph kernels and two similarity measures based on MERW for link prediction. Finally, to exhibit the power of MERW in link prediction, we compare 27 various link prediction methods over 3 synthetic and 8 real networks. The results show that our newly proposed MERW based methods outperform the state-of-the-art method on most datasets.",
"This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30 AUC.",
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the \"proximity\" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures."
]
}
|
1112.3265
|
2100960788
|
The effects of social influence and homophily suggest that both network structure and node attribute information should inform the tasks of link prediction and node attribute inference. Recently, [28, 29] proposed Social-Attribute Network (SAN), an attribute-augmented social network, to integrate network structure and node attributes to perform both link prediction and attribute inference. They focused on generalizing the random walk with restart algorithm to the SAN framework and showed improved performance. In this paper, we extend the SAN framework with several leading supervised and unsupervised link prediction algorithms and demonstrate performance improvement for each algorithm on both link prediction and attribute inference. Moreover, we make the novel observation that attribute inference can help inform link prediction, i.e., link prediction accuracy is further improved by first inferring missing attributes. We comprehensively evaluate these algorithms and compare them with other existing algorithms using a novel, large-scale Google+ dataset, which we make publicly available.
|
Previous work @cite_0 @cite_11 aims to infer node attributes (e.g., ethnicity and political orientation) using supervised learning methods with features extracted from user names and user-generated texts. Zheleva and Getoor @cite_30 map attribute inference to a relational classification problem. They find that methods using group information achieve good results. These approaches are complementary to ours since they use additional information apart from network structure and node attributes. In this paper, we transform the attribute inference problem into a link prediction problem with the SAN model. Therefore, any link prediction algorithm can be used to infer missing attributes. More importantly, we demonstrate that attribute inference can in turn help link prediction with the SAN model.
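The transformation described here, in which each attribute value becomes an extra node linked to the users holding it so that attribute inference reduces to link prediction, can be sketched as follows (Python; the toy data and the common-neighbours score are simplified stand-ins for the algorithms evaluated in the paper):

```python
def build_san(friendships, user_attrs):
    """Build a social-attribute network: users keep their social links, and each
    attribute value becomes an extra node linked to the users that have it."""
    adj = {}
    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for u, v in friendships:
        add_edge(u, v)
    for user, attrs in user_attrs.items():
        for a in attrs:
            add_edge(user, ("attr", a))
    return adj

def common_neighbour_score(adj, u, v):
    """Score a candidate link (social or user-attribute) by shared neighbours."""
    return len(adj.get(u, set()) & adj.get(v, set()))

# Toy usage: infer whether 'carol' has attribute 'school:MIT' by scoring the
# candidate user-attribute link like any other link (shared neighbour: 'bob').
friendships = [("alice", "bob"), ("bob", "carol")]
user_attrs = {"alice": ["school:MIT"], "bob": ["school:MIT"]}
san = build_san(friendships, user_attrs)
print(common_neighbour_score(san, "carol", ("attr", "school:MIT")))
```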
|
{
"cite_N": [
"@cite_0",
"@cite_30",
"@cite_11"
],
"mid": [
"2211683330",
"2103133870",
""
],
"abstract": [
"We present several novel minimally-supervised models for detecting latent attributes of social media users, with a focus on ethnicity and gender. Previouswork on ethnicity detection has used coarse-grained widely separated classes of ethnicity and assumed the existence of large amounts of training data such as the US census, simplifying the problem. Instead, we examine content generated by users in addition to name morpho-phonemics to detect ethnicity and gender. Further, weaddress this problem in a challenging setting where the ethnicity classes are more fine grained -- ethnicity classes in Nigeria -- and with very limited training data.",
"In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit an online social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and we propose practical models that use friendship and group membership information (which is often not hidden) to infer sensitive attributes. The key novel idea is that in addition to friendship links, groups can be carriers of significant information. We show that on several well-known social media sites, we can easily and accurately recover the information of private-profile users. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks with mixed public and private user profiles.",
""
]
}
|
1112.2930
|
2952307158
|
We consider some generalizations of the Asymmetric Traveling Salesman Path problem. Suppose we have an asymmetric metric G = (V,A) with two distinguished nodes s,t. We are also given a positive integer k. The goal is to find k paths of minimum total cost from s to t whose union spans all nodes. We call this the k-Person Asymmetric Traveling Salesmen Path problem (k-ATSPP). Our main result for k-ATSPP is a bicriteria approximation that, for some parameter b >= 1 we may choose, finds between k and k + k/b paths of total length O(b log |V|) times the optimum value of an LP relaxation based on the Held-Karp relaxation for the Traveling Salesman problem. On one extreme this is an O(log |V|)-approximation that uses up to 2k paths and on the other it is an O(k log |V|)-approximation that uses exactly k paths. Next, we consider the case where we have k pairs of nodes (s_1,t_1), ..., (s_k,t_k). The goal is to find an s_i-t_i path for every pair such that each node of G lies on at least one of these paths. Simple approximation algorithms are presented for the special cases where the metric is symmetric or where s_i = t_i for each i. We also show that the problem can be approximated within a factor O(log n) when k=2. On the other hand, we demonstrate that the general problem cannot be approximated within any bounded ratio unless P = NP.
|
The variant of finding Hamiltonian paths in asymmetric metrics, namely ATSPP, has only recently been studied from the perspective of approximation algorithms. The first approximation algorithm was an @math -approximation by Lam and Newman @cite_17 . Following this, Chekuri and Pal @cite_2 brought the ratio down to @math . Finally, Feige and Singh @cite_9 proved that an @math -approximation for ATSP implies a @math -approximation for ATSPP for any constant @math . Combining their result with the recent ATSP algorithm in @cite_15 yields an @math -approximation for ATSPP.
|
{
"cite_N": [
"@cite_15",
"@cite_9",
"@cite_2",
"@cite_17"
],
"mid": [
"1480355964",
"2137717103",
"330417094",
"1976231547"
],
"abstract": [
"",
"In metric asymmetric traveling salesperson problems the input is a complete directed graph in which edge weights satisfy the triangle inequality, and one is required to find a minimum weight walk that visits all vertices. In the asymmetric traveling salesperson problem (ATSP) the walk is required to be cyclic. In asymmetric traveling salesperson path problem (ATSPP), the walk is required to start at vertex sand to end at vertex t. We improve the approximation ratio for ATSP from @math to @math . This improvement is based on a modification of the algorithm of [JACM 05] that achieved the previous best approximation ratio. We also show a reduction from ATSPP to ATSP that loses a factor of at most 2 + i¾?in the approximation ratio, where i¾?> 0 can be chosen to be arbitrarily small, and the running time of the reduction is polynomial for every fixed i¾?. Combined with our improved approximation ratio for ATSP, this establishes an approximation ratio of @math for ATSPP, improving over the previous best ratio of 4log e ni¾? 2.76log 2 nof Chekuri and Pal [Approx 2006].",
"Compounds corresponding to the formula: I wherein Z represents a radical which completes a condensed aromatic ring system; R1 represents an n-valent aliphatic or aromatic radical; R2 represents H, alkyl or aryl, R3 represents one or more radicals to control the diffusion properties and the activation pH; and n represents 1 or 2, are suitable ED precursor compounds for use in color-photographic recording materials. They are preferably used in a combination with reducible dye-releasers. They are also suitable as so-called scavengers.",
"In the traveling salesman path problem, we are given a set of cities, traveling costs between city pairs and fixed source and destination cities. The objective is to find a minimum cost path from the source to destination visiting all cities exactly once. In this paper, we study polyhedral and combinatorial properties of a variant we call the traveling salesman walk problem, in which the objective is to find a minimum cost walk from the source to destination visiting all cities at least once. We first characterize traveling salesman walk perfect graphs, graphs for which the convex hull of incidence vectors of traveling salesman walks can be described by linear inequalities. We show these graphs have a description by way of forbidden minors and also characterize them constructively. We also address the asymmetric traveling salesman path problem (ATSPP) and give a factor @math -approximation algorithm for this problem."
]
}
|
1112.2930
|
2952307158
|
We consider some generalizations of the Asymmetric Traveling Salesman Path problem. Suppose we have an asymmetric metric G = (V,A) with two distinguished nodes s,t. We are also given a positive integer k. The goal is to find k paths of minimum total cost from s to t whose union spans all nodes. We call this the k-Person Asymmetric Traveling Salesmen Path problem (k-ATSPP). Our main result for k-ATSPP is a bicriteria approximation that, for some parameter b >= 1 we may choose, finds between k and k + k/b paths of total length O(b log |V|) times the optimum value of an LP relaxation based on the Held-Karp relaxation for the Traveling Salesman problem. On one extreme this is an O(log |V|)-approximation that uses up to 2k paths and on the other it is an O(k log |V|)-approximation that uses exactly k paths. Next, we consider the case where we have k pairs of nodes (s_1,t_1), ..., (s_k,t_k). The goal is to find an s_i-t_i path for every pair such that each node of G lies on at least one of these paths. Simple approximation algorithms are presented for the special cases where the metric is symmetric or where s_i = t_i for each i. We also show that the problem can be approximated within a factor O(log n) when k=2. On the other hand, we demonstrate that the general problem cannot be approximated within any bounded ratio unless P = NP.
|
There is a linear programming (LP) relaxation for each of these problems based on the Held-Karp relaxation for TSP @cite_7 . For TSP on a graph G = (V, E) with edge costs c, this relaxation is \( \min \sum_{e \in E} c_e x_e \) subject to \( x(\delta(v)) = 2 \) for every vertex v, \( x(\delta(S)) \ge 2 \) for every nonempty proper subset S of V, and \( 0 \le x_e \le 1 \) for every edge e. Many of the approximation algorithms mentioned above also bound the integrality gap of the respective Held-Karp LP relaxation. For TSP, Wolsey @cite_0 proved that the solutions found by Christofides' algorithm @cite_23 are within 3/2 of the optimal solution to the above LP relaxation. For ATSP, Williamson @cite_6 proved that the algorithm of Frieze et al. @cite_8 bounds the integrality gap of its respective LP by @math . The improved @math -approximation for ATSP in @cite_15 improved the bound on the gap to the same ratio. For TSP paths, An, Kleinberg and Shmoys @cite_14 first showed that Hoogeveen's algorithm bounds the integrality gap of a Held-Karp-type relaxation for TSP paths by @math in cases where both endpoints are fixed. In the same paper they argue that their @math -approximation for this case also bounds the integrality gap by the same factor.
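For comparison, the Held-Karp-type relaxation for s-t TSP paths analyzed in @cite_14 is commonly stated in the following standard form (the presentation below is a textbook-style rendering and may differ cosmetically from the cited paper):

\[
\begin{aligned}
\min\;& \sum_{e \in E} c_e x_e \\
\text{s.t.}\;& x(\delta(v)) = 2 && \forall v \in V \setminus \{s, t\},\\
& x(\delta(s)) = x(\delta(t)) = 1, &&\\
& x(\delta(S)) \ge 2 && \forall\, \emptyset \ne S \subsetneq V \text{ with } |S \cap \{s, t\}| \ne 1,\\
& x(\delta(S)) \ge 1 && \forall\, S \subsetneq V \text{ with } |S \cap \{s, t\}| = 1,\\
& x_e \ge 0 && \forall e \in E.
\end{aligned}
\]

The degree constraints force s and t to be path endpoints, and the weaker cut constraints on sets separating s from t reflect that a Hamiltonian s-t path crosses such cuts only once.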
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_15"
],
"mid": [
"",
"2295080441",
"2035610952",
"1877952604",
"1714357736",
"2117226423",
"1480355964"
],
"abstract": [
"",
"Electrophotographic recording material composed of a layer of selenium, selenium alloys, or selenium compounds, with arsenic as a additive, disposed on a conductive carrier is given improved properties by forming the layer to have a total arsenic content of 1 to 20 , by weight, and a concentration gradient such that the arsenic concentration decreases from the exposed surface of the layer in the direction toward the carrier and has a concentration of at least 13 at the exposed surface of the layer.",
"We consider the asymmetric traveling salesman problem for which the triangular inequality is satisfied. For various heuristics we construct examples to show that the worst-case ratio of length of tour found to minimum length tour is (n) for n city problems. We also provide a new O([log2n]) heuristic.",
"The Held-Karp heuristic for the Traveling Salesman Problem (TSP) has in practice provided near-optimal lower bounds on the cost of solutions to the TSP. We analyze the structure of Held-Karp solutions in order to shed light on their quality. In the symmetric case with triangle inequality, we show that a class of instances has planar solutions. We also show that Held-Karp solutions have a certain monotonicity property. This leads to an alternate proof of a result of Wolsey, which shows that the value of Held-Karp heuristic is always at least 2 3 OPT, where OPT is the cost of the optimum TSP tour. Additionally, we show that the value of the Held-Karp heuristic is equal to that of the linear relaxation of the biconnected-graph problem when edge costs are non-negative. In the asymmetric case with triangle inequality, we show that there are many equivalent definitions of the Held-Karp heuristic, which include finding optimally weighted 1-arborescences, 1-antiarborescences, asymmetric 1-trees, and assignment problems. We prove that monotonicity holds in the asymmetric case as well. These theorems imply that the value of the Held-Karp heuristic is no less than OPT and no less than the value of the Balas-Christofides heuristic for the asymmetric TSP. For the 1,2-TSP, we show that the Held-Karp heuristic cannot do any better than 9 10 OPT, even as the number of nodes tends to infinity. Portions of this thesis are joint work with David Shmoys.",
"We consider two questions arising in the analysis of heuristic algorithms. (i) Is there a general procedure involved when analysing a particular problem heuristic? (ii) How can heuristic procedures be incorporated into optimising algorithms such as branch and bound?",
"Abstract : An O(n sup 3) heuristic algorithm is described for solving n-city travelling salesman problems (TSP) whose cost matrix satisfies the triangularity condition. The algorithm involves as substeps the computation of a shortest spanning tree of the graph G defining the TSP, and the finding of a minimum cost perfect matching of a certain induced subgraph of G. A worst-case analysis of this heuristic shows that the ratio of the answer obtained to the optimum TSP solution is strictly less than 3 2. This represents a 50 reduction over the value 2 which was the previously best known such ratio for the performance of other polynomial-growth algorithms for the TSP.",
""
]
}
|
1112.2930
|
2952307158
|
We consider some generalizations of the Asymmetric Traveling Salesman Path problem. Suppose we have an asymmetric metric G = (V,A) with two distinguished nodes s,t. We are also given a positive integer k. The goal is to find k paths of minimum total cost from s to t whose union spans all nodes. We call this the k-Person Asymmetric Traveling Salesmen Path problem (k-ATSPP). Our main result for k-ATSPP is a bicriteria approximation that, for some parameter b >= 1 we may choose, finds between k and k + k/b paths of total length O(b log |V|) times the optimum value of an LP relaxation based on the Held-Karp relaxation for the Traveling Salesman problem. On one extreme this is an O(log |V|)-approximation that uses up to 2k paths and on the other it is an O(k log |V|)-approximation that uses exactly k paths. Next, we consider the case where we have k pairs of nodes (s_1,t_1), ..., (s_k,t_k). The goal is to find an s_i-t_i path for every pair such that each node of G lies on at least one of these paths. Simple approximation algorithms are presented for the special cases where the metric is symmetric or where s_i = t_i for each i. We also show that the problem can be approximated within a factor O(log n) when k=2. On the other hand, we demonstrate that the general problem cannot be approximated within any bounded ratio unless P = NP.
|
Nagarajan and Ravi @cite_16 first showed that the integrality gap of an LP relaxation for ATSPP, which coincides with LP ) in this paper when @math , is @math . Later, Friggstad, Salavatipour, and Svitkina @cite_3 showed a bound of @math on the integrality gap of this LP relaxation, which is currently the best known bound. We note that the result of Feige and Singh @cite_9 relating the approximability of ATSP and ATSPP does not extend to their integrality gaps in any obvious way.
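The ATSPP relaxation referred to here is not reproduced in this excerpt. The following is a common Held-Karp-style LP for the single-path case (k = 1) that the cited relaxation presumably resembles; the exact formulation in the cited papers may differ, for instance in how the degree constraints at s and t are written. Here x_a is the fractional usage of arc a, and \delta^+ / \delta^- denote sets of outgoing / incoming arcs:

\begin{align*}
\min\;& \sum_{a \in A} c_a x_a \\
\text{s.t.}\;& x(\delta^+(v)) = x(\delta^-(v)) = 1 && \forall\, v \in V \setminus \{s,t\},\\
& x(\delta^+(s)) = x(\delta^-(t)) = 1,\quad x(\delta^-(s)) = x(\delta^+(t)) = 0,\\
& x(\delta^+(S)) \ge 1 && \forall\, S \subseteq V \setminus \{t\} \text{ with } s \in S,\\
& x_a \ge 0 && \forall\, a \in A.
\end{align*}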
|
{
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_3"
],
"mid": [
"2137717103",
"1492330513",
"2950391451"
],
"abstract": [
"In metric asymmetric traveling salesperson problems the input is a complete directed graph in which edge weights satisfy the triangle inequality, and one is required to find a minimum weight walk that visits all vertices. In the asymmetric traveling salesperson problem (ATSP) the walk is required to be cyclic. In asymmetric traveling salesperson path problem (ATSPP), the walk is required to start at vertex sand to end at vertex t. We improve the approximation ratio for ATSP from @math to @math . This improvement is based on a modification of the algorithm of [JACM 05] that achieved the previous best approximation ratio. We also show a reduction from ATSPP to ATSP that loses a factor of at most 2 + i¾?in the approximation ratio, where i¾?> 0 can be chosen to be arbitrarily small, and the running time of the reduction is polynomial for every fixed i¾?. Combined with our improved approximation ratio for ATSP, this establishes an approximation ratio of @math for ATSPP, improving over the previous best ratio of 4log e ni¾? 2.76log 2 nof Chekuri and Pal [Approx 2006].",
"We study the directed minimum latency problem: given an n-vertex asymmetric metric (V,d) with a root vertex ri¾? V, find a spanning path originating at rthat minimizes the sum of latencies at all vertices (the latency of any vertex vi¾? Vis the distance from rto valong the path). This problem has been well-studied on symmetric metrics, and the best known approximation guarantee is 3.59 [3]. For any @math O( n^ ^3 ) @math =O( n )$, which implies (for any fixed i¾?> 0) a polynomial time O(n1 2 + i¾?)-approximation algorithm for directed latency. In the special case of metrics induced by shortest-paths in an unweighted directed graph, we give an O(log2n) approximation algorithm. As a consequence, we also obtain an O(log2n) approximation algorithm for minimizing the weighted completion time in no-wait permutation flowshop scheduling. We note that even in unweighted directed graphs, the directed latency problem is at least as hard to approximate as the well-studied asymmetric traveling salesman problem, for which the best known approximation guarantee is O(logn).",
"We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n log log n). However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1 2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem."
]
}
|
1112.2930
|
2952307158
|
We consider some generalizations of the Asymmetric Traveling Salesman Path problem. Suppose we have an asymmetric metric G = (V,A) with two distinguished nodes s,t. We are also given a positive integer k. The goal is to find k paths of minimum total cost from s to t whose union spans all nodes. We call this the k-Person Asymmetric Traveling Salesmen Path problem (k-ATSPP). Our main result for k-ATSPP is a bicriteria approximation that, for some parameter b >= 1 we may choose, finds between k and k + k/b paths of total length O(b log |V|) times the optimum value of an LP relaxation based on the Held-Karp relaxation for the Traveling Salesman problem. On one extreme this is an O(log |V|)-approximation that uses up to 2k paths and on the other it is an O(k log |V|)-approximation that uses exactly k paths. Next, we consider the case where we have k pairs of nodes (s_1,t_1), ..., (s_k,t_k). The goal is to find an s_i-t_i path for every pair such that each node of G lies on at least one of these paths. Simple approximation algorithms are presented for the special cases where the metric is symmetric or where s_i = t_i for each i. We also show that the problem can be approximated within a factor O(log n) when k=2. On the other hand, we demonstrate that the general problem cannot be approximated within any bounded ratio unless P = NP.
|
In the full version of @cite_3 , the authors studied extensions of their @math -approximation for ATSPP to @math -ATSPP. They demonstrated that @math -ATSPP can be approximated within @math and that this bounds the integrality gap of LP ) by the same factor. Though not stated explicitly, their techniques can also be used to devise a bicriteria approximation for @math -ATSPP that uses @math paths of total cost at most @math times the value of LP ) in a manner similar to the algorithm in the proof of Theorem 1.3 in @cite_3 . As far as we know, no results are known for General @math -ATSPP even for the case @math .
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2950391451"
],
"abstract": [
"We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n log log n). However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1 2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem."
]
}
|
1112.2414
|
2024356620
|
Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP--PARAFAC alternating Poisson regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee--Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mil...
|
Much of the past work in nonnegative matrix and tensor analysis has focused on the LS error @cite_39 @cite_46 @cite_27 @cite_42 @cite_37 @cite_13 @cite_9 , which corresponds to an assumption of normal, independent and identically distributed (i.i.d.) noise. The focus of this paper is KL divergence, which corresponds to maximum likelihood estimation under an independent Poisson assumption; see Poisson . The seminal works in this domain are the papers of Lee and Seung @cite_18 @cite_19 , which propose very simple update formulas for both LS and KL divergence, resulting in a very low cost per iteration. Welling and Weber @cite_43 were the first to generalize the Lee-Seung algorithms to nonnegative tensor factorization (NTF). Applications of NTF based on KL divergence include EEG analysis @cite_5 and sound source separation @cite_49 . We note that generalizations of KL divergence have also been proposed in the literature, including Bregman divergence @cite_54 @cite_38 @cite_7 and beta divergence @cite_4 @cite_29 .
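As a concrete illustration of the multiplicative updates mentioned above, the following is a minimal numpy sketch of Lee-Seung-style updates for NMF under the (generalized) KL divergence. It covers only the matrix case, not the tensor (NTF) or CP-APR variants discussed in this row, and the function name, random initialization, and small eps guard are our own illustrative choices.

```python
import numpy as np

def nmf_kl_multiplicative(V, rank, n_iter=200, eps=1e-12, seed=0):
    """Sketch of Lee-Seung-style multiplicative updates for NMF under the
    (generalized) KL divergence: V ~ W @ H with W, H >= 0. Illustrative only."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + eps                      # guard against division by zero
        H *= (W.T @ (V / WH)) / (W.T @ ones)  # multiplicative update for H
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T)  # multiplicative update for W
    return W, H
```

Each update multiplies the current factor entrywise by a nonnegative ratio, so nonnegativity is preserved automatically and the cost per iteration is a handful of matrix products.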
|
{
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_42",
"@cite_54",
"@cite_29",
"@cite_39",
"@cite_19",
"@cite_27",
"@cite_43",
"@cite_49",
"@cite_5",
"@cite_46",
"@cite_13"
],
"mid": [
"2132267493",
"",
"1902027874",
"1984035260",
"2172195418",
"",
"",
"2168446235",
"",
"2059745395",
"",
"",
"2073502026",
"1965586400",
"2093492509",
"",
""
],
"abstract": [
"There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank- @math approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Bregman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step toward more general results. To this end, we present a detailed analysis of equivalence classes of @math tensors, and we develop methods for extending results upward to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant @math on @math .",
"",
"Is perception of the whole based on perception of its parts? There is psychological1 and physiological2,3 evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations4,5. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.",
"In this paper we propose new algorithms for 3D tensor decomposition factorization with many potential applications, especially in multi-way blind source separation (BSS), multidimensional data analysis, and sparse signal image representations. We derive and compare three classes of algorithms: multiplicative, fixed-point alternating least squares (FPALS) and alternating interior-point gradient (AIPG) algorithms. Some of the proposed algorithms are characterized by improved robustness, efficiency and convergence rates and can be applied for various distributions of data and additive noise.",
"We study the decomposition of a nonnegative tensor into a minimal sum of outer product of nonnegative vectors and the associated parsimonious naive Bayes probabilistic model. We show that the corresponding approximation problem, which is central to nonnegative PARAFAC, will always have optimal solutions. The result holds for any choice of norms and, under a mild assumption, even Bregman divergences. Copyright © 2009 John Wiley & Sons, Ltd.",
"",
"",
"Nonnegative matrix approximation (NNMA) is a recent technique for dimensionality reduction and data analysis that yields a parts based, sparse nonnegative representation for nonnegative input data. NNMA has found a wide variety of applications, including text analysis, document clustering, face image recognition, language modeling, speech processing and many others. Despite these numerous applications, the algorithmic development for computing the NNMA factors has been relatively deficient. This paper makes algorithmic progress by modeling and solving (using multiplicative updates) new generalized NNMA problems that minimize Bregman divergences between the input matrix and its low-rank approximation. The multiplicative update formulae in the pioneering work by Lee and Seung [11] arise as a special case of our algorithms. In addition, the paper shows how to use penalty functions for incorporating constraints other than nonnegativity into the problem. Further, some interesting extensions to the use of \"link\" functions for modeling nonlinear relationships are also discussed.",
"",
"A new variant ‘PMF’ of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of dimensions n × m. The method solves the bilinear matrix problem X = GF + E where G is the unknown left hand factor matrix (scores) of dimensions n × p, F is the unknown right hand factor matrix (loadings) of dimensions p × m, and E is the matrix of residuals. The problem is solved in the weighted least squares sense: G and F are determined so that the Frobenius norm of E divided (element-by-element) by σ is minimized. Furthermore, the solution is constrained so that all the elements of G and F are required to be non-negative. It is shown that the solutions by PMF are usually different from any solutions produced by the customary factor analysis (FA, i.e. principal component analysis (PCA) followed by rotations). Usually PMF produces a better fit to the data than FA. Also, the result of PF is guaranteed to be non-negative, while the result of FA often cannot be rotated so that all negative entries would be eliminated. Different possible application areas of the new method are briefly discussed. In environmental data, the error estimates of data can be widely varying and non-negativity is often an essential feature of the underlying models. Thus it is concluded that PMF is better suited than FA or PCA in many environmental applications. Examples of successful applications of PMF are shown in companion papers.",
"",
"",
"Abstract A novel fixed point algorithm for positive tensor factorization (PTF) is introduced. The update rules efficiently minimize the reconstruction error of a positive tensor over positive factors. Tensors of arbitrary order can be factorized, which extends earlier results in the literature. Experiments show that the factors of PTF are easier to interpret than those produced by methods based on the singular value decomposition, which might contain negative values. We also illustrate the tendency of PTF to generate sparsely distributed codes.",
"An algorithm for Non-negative Tensor Factorisation is introduced which extends current matrix factorisation techniques to deal with tensors. The effectiveness of the algorithm is then demonstrated through tests on synthetic data. The algorithm is then employed as a means of performing sound source separation on two channel mixtures, and the separation capabilities of the algorithm demonstrated on a two channel mixture containing saxophone, strings and bass guitar. Keywords - Non-negative tensor factorisation, sound source separation.",
"Nonnegative matrix factorization (NMF) is a dimension reduction method that has been widely used for numerous applications, including text mining, computer vision, pattern discovery, and bioinformatics. A mathematical formulation for NMF appears as a nonconvex optimization problem, and various types of algorithms have been devised to solve the problem. The alternating nonnegative least squares (ANLS) framework is a block coordinate descent approach for solving NMF, which was recently shown to be theoretically sound and empirically efficient. In this paper, we present a novel algorithm for NMF based on the ANLS framework. Our new algorithm builds upon the block principal pivoting method for the nonnegativity-constrained least squares problem that overcomes a limitation of the active set method. We introduce ideas that efficiently extend the block principal pivoting method within the context of NMF computation. Our algorithm inherits the convergence property of the ANLS framework and can easily be extended to other constrained NMF formulations. Extensive computational comparisons using data sets that are from real life applications as well as those artificially generated show that the proposed algorithm provides state-of-the-art performance in terms of computational speed.",
"",
""
]
}
|
1112.2414
|
2024356620
|
Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP--PARAFAC alternating Poisson regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee--Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mil...
|
In terms of convergence, Lin @cite_8 and Gillis and Glineur @cite_28 have shown convergence of two different modified versions of the Lee-Seung method for LS. Finesso and Spreij @cite_21 (with a tensor extension in @cite_2 ) have shown convergence of the Lee-Seung method for KL divergence; however, we show later that numerical issues arise if the iterates approach the boundary. This is related to the problems demonstrated by Gonzalez and Zhang @cite_20 , who show that, in the case of LS loss, the Lee-Seung method can converge to non-KKT points; we show a similar example for KL divergence in misconvergence .
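To make the boundary issue concrete: under a multiplicative update, a factor entry that reaches exactly zero is only ever multiplied by a nonnegative ratio and therefore stays at zero, and ratios such as V / (W H) become numerically ill-defined when entries of W H vanish. A tiny guard in this spirit (our own sketch, not the specific modifications proposed in the papers cited above) clips the factors away from zero after each update:

```python
import numpy as np

def clip_factors(W, H, delta=1e-10):
    """Keep factor entries at least `delta` so multiplicative updates cannot
    get pinned at zero and V / (W @ H) stays well defined. Illustrative only;
    the cited modified algorithms differ in their details."""
    return np.maximum(W, delta), np.maximum(H, delta)
```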
|
{
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_2",
"@cite_20"
],
"mid": [
"2132538571",
"2117513548",
"2012642013",
"2129455341",
""
],
"abstract": [
"Nonnegative matrix factorization (NMF) is useful to find basis information of nonnegative data. Currently, multiplicative updates are a simple and popular way to find the factorization. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, no proof has shown that multiplicative updates converge to a stationary point of the NMF optimization problem. Stationarity is important as it is a necessary condition of a local minimum. This paper discusses the difficulty of proving the convergence. We propose slight modifications of existing updates and prove their convergence. Techniques invented in this paper may be applied to prove the convergence for other bound-constrained optimization problems.",
"Nonnegative Matrix Factorization (NMF) is a data analysis technique which allows compression and interpretation of nonnegative data. NMF became widely studied after the publication of the seminal paper by Lee and Seung (Learning the Parts of Objects by Nonnegative Matrix Factorization, Nature, 1999, vol. 401, pp. 788--791), which introduced an algorithm based on Multiplicative Updates (MU). More recently, another class of methods called Hierarchical Alternating Least Squares (HALS) was introduced that seems to be much more efficient in practice. In this paper, we consider the problem of approximating a not necessarily nonnegative matrix with the product of two nonnegative matrices, which we refer to as Nonnegative Factorization (NF) ; this is the subproblem that HALS methods implicitly try to solve at each iteration. We prove that NF is NP-hard for any fixed factorization rank, using a reduction to the maximum edge biclique problem. We also generalize the multiplicative updates to NF, which allows us to shed some light on the differences between the MU and HALS algorithms for NMF and give an explanation for the better performance of HALS. Finally, we link stationary points of NF with feasible solutions of the biclique problem to obtain a new type of biclique finding algorithm (based on MU) whose iterations have an algorithmic complexity proportional to the number of edges in the graph, and show that it performs better than comparable existing methods.",
"Abstract In this paper we consider the Nonnegative Matrix Factorization (NMF) problem: given an (elementwise) nonnegative matrix V ∈ R + m × n find, for assigned k, nonnegative matrices W ∈ R + m × k and H ∈ R + k × n such that V = WH. Exact, nontrivial, nonnegative factorizations do not always exist, hence it is interesting to pose the approximate NMF problem. The criterion which is commonly employed is I-divergence between nonnegative matrices. The problem becomes that of finding, for assigned k, the factorization WH closest to V in I-divergence. An iterative algorithm, EM like, for the construction of the best pair (W, H) has been proposed in the literature. In this paper we interpret the algorithm as an alternating minimization procedure a la Csiszar–Tusnady and investigate some of its stability properties. NMF is widespreading as a data analysis method in applications for which the positivity constraint is relevant. There are other data analysis methods which impose some form of nonnegativity: we discuss here the connections between NMF and Archetypal Analysis.",
"In this paper we study Nonnegative Tensor Factorization (NTF) based on the Kullback---Leibler (KL) divergence as an alternative Csiszar---Tusnady procedure. We propose new update rules for the aforementioned divergence that are based on multiplicative update rules. The proposed algorithms are built on solid theoretical foundations that guarantee that the limit point of the iterative algorithm corresponds to a stationary solution of the optimization procedure. Moreover, we study the convergence properties of the optimization procedure and we present generalized pythagorean rules. Furthermore, we provide clear probabilistic interpretations of these algorithms. Finally, we discuss the connections between generalized Probabilistic Tensor Latent Variable Models (PTLVM) and NTF, proposing in that way algorithms for PTLVM for arbitrary multivariate probabilistic mass functions.",
""
]
}
|
1112.2254
|
1906722996
|
In this paper we investigate a new computing paradigm, called SocialCloud, in which computing nodes are governed by social ties driven from a bootstrapping trust-possessing social graph. We investigate how this paradigm differs from existing computing paradigms, such as grid computing and the conventional cloud computing paradigms. We show that incentives to adopt this paradigm are intuitive and natural, and security and trust guarantees provided by it are solid. We propose metrics for measuring the utility and advantage of this computing paradigm, and using real-world social graphs and structures of social traces; we investigate the potential of this paradigm for ordinary users. We study several design options and trade-offs, such as scheduling algorithms, centralization, and straggler handling, and show how they affect the utility of the paradigm. Interestingly, we conclude that whereas graphs known in the literature for high trust properties do not serve distributed trusted computing algorithms, such as Sybil defenses---for their weak algorithmic properties, such graphs are good candidates for our paradigm for their self-load-balancing features.
|
Systems built on top of social networks include file sharing systems @cite_9 , anonymous communication systems @cite_16 @cite_56 , Sybil defenses @cite_15 @cite_41 @cite_48 @cite_24 , referral and filtering systems @cite_52 @cite_6 , and live streaming @cite_2 . Most of these applications rely on the trust embedded in the social graph, together with an algorithmic property that makes their operation on top of a social network effective. Another set of applications that exploits the trust in social networks is routing @cite_1 @cite_7 @cite_53 @cite_21 , in several settings where connectivity in social graphs has been shown to be of benefit in disconnected networks. Finally, the assumptions of social network-based systems have recently been examined: Sybil defenses and their assumptions are studied in @cite_50 , and trust is challenged in @cite_14 .
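Many of the Sybil defenses and trust-aware designs cited here boil down to random walks over the social graph, possibly biased by per-edge trust values as in @cite_14 . The sketch below is our own illustration of a trust-weighted walk and is not the specification of any cited system; the graph representation and weight semantics are assumptions made for the example.

```python
import random
from typing import Dict, List, Optional, Tuple

# Assumed representation: for each node, a list of (neighbor, trust_weight) pairs.
Graph = Dict[str, List[Tuple[str, float]]]

def trust_biased_walk(graph: Graph, start: str, length: int,
                      rng: Optional[random.Random] = None) -> List[str]:
    """Random walk whose next hop is chosen proportionally to edge trust.
    With all trust weights equal, this reduces to the uniform random walk
    used by classical social-network Sybil defenses."""
    rng = rng or random.Random(0)
    walk = [start]
    node = start
    for _ in range(length):
        nbrs = graph.get(node, [])
        if not nbrs:
            break
        neighbors, weights = zip(*nbrs)
        node = rng.choices(neighbors, weights=weights, k=1)[0]
        walk.append(node)
    return walk
```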
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_41",
"@cite_48",
"@cite_9",
"@cite_53",
"@cite_1",
"@cite_52",
"@cite_6",
"@cite_56",
"@cite_24",
"@cite_21",
"@cite_50",
"@cite_2",
"@cite_15",
"@cite_16"
],
"mid": [
"2142772221",
"2082674813",
"2134320193",
"2138454540",
"2136347453",
"2039646365",
"2078129565",
"2155048531",
"2155106456",
"1563865627",
"2101890615",
"1532920824",
"2127503167",
"2134570253",
"1551760018",
""
],
"abstract": [
"Social network-based Sybil defenses exploit the algorithmic properties of social graphs to infer the extent to which an arbitrary node in such a graph should be trusted. However, these systems do not consider the different amounts of trust represented by different graphs, and different levels of trust between nodes, though trust is being a crucial requirement in these systems. For instance, co-authors in an academic collaboration graph are trusted in a different manner than social friends. Furthermore, some social friends are more trusted than others. However, previous designs for social network-based Sybil defenses have not considered the inherent trust properties of the graphs they use. In this paper we introduce several designs to tune the performance of Sybil defenses by accounting for differential trust in social graphs and modeling these trust values by biasing random walks performed on these graphs. Surprisingly, we find that the cost function, the required length of random walks to accept all honest nodes with overwhelming probability, is much greater in graphs with high trust values, such as co-author graphs, than in graphs with low trust values such as online social networks. We show that this behavior is due to the community structure in high-trust graphs, requiring longer walk to traverse multiple communities. Furthermore, we show that our proposed designs to account for trust, while increase the cost function of graphs with low trust value, decrease the advantage of attacker.",
"Message delivery in sparse Mobile Ad hoc Networks (MANETs) is difficult due to the fact that the network graph is rarely (if ever) connected. A key challenge is to find a route that can provide good delivery performance and low end-to-end delay in a disconnected network graph where nodes may move freely. This paper presents a multidisciplinary solution based on the consideration of the so-called small world dynamics which have been proposed for economy and social studies and have recently revealed to be a successful approach to be exploited for characterising information propagation in wireless networks. To this purpose, some bridge nodes are identified based on their centrality characteristics, i.e., on their capability to broker information exchange among otherwise disconnected nodes. Due to the complexity of the centrality metrics in populated networks the concept of ego networks is exploited where nodes are not required to exchange information about the entire network topology, but only locally available information is considered. Then SimBet Routing is proposed which exploits the exchange of pre-estimated \"betweenness' centrality metrics and locally determined social \"similarity' to the destination node. We present simulations using real trace data to demonstrate that SimBet Routing results in delivery performance close to Epidemic Routing but with significantly reduced overhead. Additionally, we show that SimBet Routing outperforms PRoPHET Routing, particularly when the sending and receiving nodes have low connectivity.",
"Decentralized systems, such as structured overlays, are subject to the Sybil attack, in which an adversary creates many false identities to increase its influence. This paper describes a one-hop distributed hash table which uses the social links between users to strongly resist the Sybil attack. The social network is assumed to be fast mixing, meaning that a random walk in the honest part of the network quickly approaches the uniform distribution. As in the related SybilLimit system [25], with a social network of n honest nodes and m honest edges, the protocol can tolerate up to o(n log n) attack edges (social links from honest nodes to compromised nodes). The routing tables contain O(√m log m) entries per node and are constructed efficiently by a distributed protocol. This is the first sublinear solution to this problem. Preliminary simulation results are presented to demonstrate the approach's effectiveness.",
"A wear-resistant liner for center plate structure of a railway vehicle is provided and such liner is defined by an ultra high molecular weight polymeric material having met al reinforcing means embedded in and surrounded by the polymeric material which serves as a matrix for the reinforcing means and the met al reinforcing means comprises a met al structure having openings therein for receiving the polymeric material completely therethrough and enabling better embedment of the met al structure with the met al structure providing reinforcement and preventing cold flow of the polymeric material, and with the met al structure being a grid-like structure.",
"Privacy -- the protection of information from unauthorized disclosure -- is increasingly scarce on the Internet. The lack of privacy is particularly true for popular peer-to-peer data sharing applications such as BitTorrent where user behavior is easily monitored by third parties. Anonymizing overlays such as Tor and Freenet can improve user privacy, but only at a cost of substantially reduced performance. Most users are caught in the middle, unwilling to sacrifice either privacy or performance. In this paper, we explore a new design point in this tradeoff between privacy and performance. We describe the design and implementation of a new P2P data sharing protocol, called OneSwarm, that provides users much better privacy than BitTorrent and much better performance than Tor or Freenet. A key aspect of the OneSwarm design is that users have explicit configurable control over the amount of trust they place in peers and in the sharing model for their data: the same data can be shared publicly, anonymously, or with access control, with both trusted and untrusted peers. OneSwarm's novel lookup and transfer techniques yield a median factor of 3.4 improvement in download times relative to Tor and a factor of 6.9 improvement relative to Freenet. OneSwarm is publicly available and has been downloaded by hundreds of thousands of users since its release.",
"The growth of Web 2.0 and fundamental theoretical breakthroughs have led to an avalanche of interest in social networks. This paper focuses on the problem of modeling how social networks accomplish tasks through peer production style collaboration. We propose a general interaction model for the underlying social networks and then a specific model ( i L ink for social search and message routing. A key contribution here is the development of a general learning framework for making such online peer production systems work at scale. The i L ink model has been used to develop a system for FAQ generation in a social network (FAQ tory ), and experience with its application in the context of a full-scale learning-driven workflow application (CALO) is reported. We also discuss methods of adapting i L ink technology for use in military knowledge sharing portals and other message routing systems. Finally, the paper shows the connection of i L ink to SQM, a theoretical model for social search that is a generalization of Markov Decision Processes and the popular Pagerank model.",
"Delay-tolerant network architectures exploit mobile devices carried by users to enable new networked applications. Efficiently routing information through these DTNs faces new challenges such as mobility and the dynamic nature of the network. Previous work has looked at using encountered nodes to build a social network for routing. In this work we construct routing tables from users' self-reported social networks. Initial experiments indicate that this significantly reduces the delivery cost of transmitting messages through a DTN.",
"",
"Collaborative filters help people make choices based on the opinions of other people. GroupLens is a system for collaborative filtering of netnews, to help people find articles they will like in the huge stream of available articles. News reader clients display predicted scores and make it easy for users to rate articles after they read them. Rating servers, called Better Bit Bureaus, gather and disseminate the ratings. The rating servers predict scores based on the heuristic that people who agreed in the past will probably agree again. Users can protect their privacy by entering ratings under pseudonyms, without reducing the effectiveness of the score prediction. The entire architecture is open: alternative software for news clients and Better Bit Bureaus can be developed independently and can interoperate with the components we have developed.",
"As decentralized computing scenarios get ever more popular, unstructured topologies are natural candidates to consider running mix networks upon. We consider mix network topologies where mixes are placed on the nodes of an unstructured network, such as social networks and scale-free random networks. We explore the efficiency and traffic analysis resistance properties of mix networks based on unstructured topologies as opposed to theoretically optimal structured topologies, under high latency conditions. We consider a mix of directed and undirected network models, as well as one real world case study - the LiveJournal friendship network topology. Our analysis indicates that mix-networks based on scale-free and small-world topologies have, firstly, mix-route lengths that are roughly comparable to those in expander graphs; second, that compromise of the most central nodes has little effect on anonymization properties, and third, batch sizes required for warding off intersection attacks need to be an order of magnitude higher in unstructured networks in comparison with expander graph topologies.",
"Peer-to-peer and other decentralized,distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack,a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system,the malicious user is able to \"out vote\" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks.Our protocol is based on the \"social network \"among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately-small \"cut\" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create.We show the effectiveness of SybilGuard both analytically and experimentally.",
"The equality and anonymity of peer-to-peer networks makes them vulnerable to routing denial of service attacks from misbehaving nodes. In this paper, we investigate how existing social networks can benefit P2P networks by leveraging the inherent trust associated with social links. We present a trust model that lets us compare routing algorithms for P2P networks overlaying social networks. We propose SPROUT, a DHT routing algorithm that significantly increases the probability of successful routing by using social links. Finally, we discuss further optimization and design choices for both the model and the routing algorithm.",
"Social networks provide interesting algorithmic properties that can be used to bootstrap the security of distributed systems. For example, it is widely believed that social networks are fast mixing, and many recently proposed designs of such systems make crucial use of this property. However, whether real-world social networks are really fast mixing is not verified before, and this could potentially affect the performance of such systems based on the fast mixing property. To address this problem, we measure the mixing time of several social graphs, the time that it takes a random walk on the graph to approach the stationary distribution of that graph, using two techniques. First, we use the second largest eigenvalue modulus which bounds the mixing time. Second, we sample initial distributions and compute the random walk length required to achieve probability distributions close to the stationary distribution. Our findings show that the mixing time of social graphs is much larger than anticipated, and being used in literature, and this implies that either the current security systems based on fast mixing have weaker utility guarantees or have to be less efficient, with less security guarantees, in order to compensate for the slower mixing.",
"Multimedia social networks have become an emerging research area, in which analysis and modeling of the behavior of users who share multimedia are of ample importance in understanding the impact of human dynamics on multimedia systems. In peer-to-peer live-streaming social networks, users cooperate with each other to provide a distributed, highly scalable and robust platform for live streaming applications. However, every user wishes to use as much bandwidth as possible to receive a high-quality video, while full cooperation cannot be guaranteed. This paper proposes a game-theoretic framework to model user behavior and designs incentive-based strategies to stimulate user cooperation in peer-to-peer live streaming. We first analyze the Nash equilibrium and the Pareto optimality of two-person game and then extend to multiuser case. We also take into consideration selfish users' cheating behavior and malicious users' attacking behavior. Both our analytical and simulation results show that the proposed strategies can effectively stimulate user cooperation, achieve cheat free, attack resistance and help to provide reliable services.",
"SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference approach to Sybil detection comes with the advantage label has an assigned probability, indicating its degree of certainty. We prove through analytical results as well as experiments on simulated and real-world network topologies that, given standard constraints on the adversary, SybilInfer is secure, in that it successfully distinguishes between honest and dishonest nodes and is not susceptible to manipulation by the adversary. Furthermore, our results show that SybilInfer outperforms state of the art algorithms, both in being more widely applicable, as well as providing vastly more accurate results.",
""
]
}
|
1112.2254
|
1906722996
|
In this paper we investigate a new computing paradigm, called SocialCloud, in which computing nodes are governed by social ties driven from a bootstrapping trust-possessing social graph. We investigate how this paradigm differs from existing computing paradigms, such as grid computing and the conventional cloud computing paradigms. We show that incentives to adopt this paradigm are intuitive and natural, and security and trust guarantees provided by it are solid. We propose metrics for measuring the utility and advantage of this computing paradigm, and using real-world social graphs and structures of social traces; we investigate the potential of this paradigm for ordinary users. We study several design options and trade-offs, such as scheduling algorithms, centralization, and straggler handling, and show how they affect the utility of the paradigm. Interestingly, we conclude that whereas graphs known in the literature for high trust properties do not serve distributed trusted computing algorithms, such as Sybil defenses---for their weak algorithmic properties, such graphs are good candidates for our paradigm for their self-load-balancing features.
|
Perhaps the closest vein of related work in the literature to ours is the use of social networks for building computing services. Up to the time of writing, most prior research has focused solely on providing storage services rather than a platform for computation. Such storage services use a slightly different economic model from 's model, in which payment rates per megabyte per month are used, as opposed to our ecosystem. Examples of such efforts are reported by Sato @cite_29 and in @cite_3 . A first step toward building cloud computing platforms on top of social networks is explored in @cite_19 , which considers the access control model in this domain and the desired access control guarantees. The results of that work can be used as a building block in ours to improve the quality of access control and authorization.
|
{
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_3"
],
"mid": [
"2148471161",
"2157437046",
"2109572691"
],
"abstract": [
"Digital signatures are an important security mechanism, especially when non-repudiation is desired. However, non-repudiation is meaningful only when the private signing keys and functions are adequately protected --- an assumption that is very difficult to accommodate in the real world because computers (and thus cryptographic keys and functions) could be relatively easily compromised. One approach to resolving, or at least alleviating, this problem is to use threshold cryptography. But how should such techniques be employed in the real world? In this paper we propose exploiting social networks whereby average users take advantage of their trusted ones to help secure their cryptographic keys. While the idea is simple from an individual user's perspective, we aim to understand the resulting systems from a whole-system perspective. Specifically, we propose and investigate two measures of the resulting systems: attack-resilience, which captures the security consequences due to the compromise of some computers and thus the compromise of the cryptographic key shares stored on them; availability, which captures the effect when computers are not always responsive (due to the peer-to-peer nature of social networks).",
"Emerging virtualization technologies are making ubiquitous access to on-demand computing, network and storage resources to deliver various applications over public Internet. In this paper we present how the telecom operation support systems (OSS) that provide Enterprise to Enterprise (E2E) transactions, switching management, on-demand service management and scalability have evolved to provide next generation cloud management. Fujitsu’s Social Cloud OSS provides multi-vendor, multi-network management, multi-layer Service Level Agreement (SLA) assurance, on-demand service management and impact analysis to businesses. The Social Cloud OSS service management solution for cloud computing will be the next killer application that will facilitate easy access to cloud services with appropriate SLAs and enable the society to use social networking applications that are currently being delivered using clouds.",
"Today, it is common for users to own more than tens of gigabytes of digital pictures, videos, experimental traces, etc. Although many users already back up such data on a cheap second disk, it is desirable to also seek off-site redundancies so that important data can survive threats such as natural disasters and operator mistakes. Commercial online backup service is expensive [1, 11]. An alternative solution is to use a peer-to-peer storage system. However, existing cooperative backup systems are plagued by two long-standing problems [3, 4, 9, 19, 27]: enforcing minimal availability from participating nodes, and ensuring that nodes storing others' backup data will not deny restore service in times of need."
]
}
|
1112.2254
|
1906722996
|
In this paper we investigate a new computing paradigm, called SocialCloud, in which computing nodes are governed by social ties driven from a bootstrapping trust-possessing social graph. We investigate how this paradigm differs from existing computing paradigms, such as grid computing and the conventional cloud computing paradigms. We show that incentives to adopt this paradigm are intuitive and natural, and security and trust guarantees provided by it are solid. We propose metrics for measuring the utility and advantage of this computing paradigm, and using real-world social graphs and structures of social traces; we investigate the potential of this paradigm for ordinary users. We study several design options and trade-offs, such as scheduling algorithms, centralization, and straggler handling, and show how they affect the utility of the paradigm. Interestingly, we conclude that whereas graphs known in the literature for high trust properties do not serve distributed trusted computing algorithms, such as Sybil defenses---for their weak algorithmic properties, such graphs are good candidates for our paradigm for their self-load-balancing features.
|
In a similar flavor of distributed computing service design, there has been prior work in the literature on using volunteers' resources for computation while exploiting data locality @cite_38 @cite_57 , and on examining programming paradigms such as MapReduce @cite_18 on top of such platforms @cite_44 @cite_26 . Finally, our work shares several commonalities with grid and volunteer computing systems @cite_0 @cite_44 @cite_38 @cite_57 @cite_54 , many aspects of which are explored in the literature. Trust in grid computing and volunteer-based systems is explored in @cite_10 @cite_11 @cite_25 @cite_12 @cite_51 . Applications built on top of these systems that would fit our usage model are reported in @cite_57 @cite_26 @cite_33 , among others.
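For readers unfamiliar with the MapReduce paradigm mentioned above, here is a minimal single-process word-count sketch of the map and reduce phases. It is purely illustrative (our own toy example, not Hadoop, MOON, or any system cited in this paragraph) and ignores the scheduling, replication, and straggler handling that the volunteer-computing variants are actually concerned with.

```python
from collections import defaultdict
from typing import Dict, Iterable, Iterator, Tuple

def map_phase(documents: Iterable[str]) -> Iterator[Tuple[str, int]]:
    """Map: emit a (word, 1) pair for every word occurrence."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs: Iterable[Tuple[str, int]]) -> Dict[str, int]:
    """Reduce: group by key and sum the values (shuffle folded into the loop)."""
    counts: Dict[str, int] = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    docs = ["social cloud computing", "volunteer cloud computing"]
    print(reduce_phase(map_phase(docs)))  # {'social': 1, 'cloud': 2, 'computing': 2, 'volunteer': 1}
```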
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_26",
"@cite_33",
"@cite_54",
"@cite_57",
"@cite_44",
"@cite_0",
"@cite_51",
"@cite_10",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"67629752",
"2157355837",
"2151763581",
"2104560855",
"2103363198",
"2090355322",
"2070275167",
"2140332639",
"2014301909",
"2121743635",
"",
"2156523427",
"2104538152"
],
"abstract": [
"Current cloud services are deployed on well-provisioned and centrally controlled infrastructures. However, there are several classes of services for which the current cloud model may not fit well: some do not need strong performance guarantees, the pricing may be too expensive for some, and some may be constrained by the data movement costs to the cloud. To satisfy the requirements of such services, we propose the idea of using distributed voluntary resources--those donated by end-user hosts--to form nebulas: more dispersed, less-managed clouds. We first discuss the requirements of cloud services and the challenges in meeting these requirements in such voluntary clouds. We then present some possible solutions to these challenges and also discuss opportunities for further improvements to make nebulas a viable cloud paradigm.",
"MapReduce advantages over parallel databases include storage-system independence and fine-grain fault tolerance for large jobs.",
"MapReduce is a highly-popular paradigm for high-performance computing over large data sets in large-scale platforms. However, when the source data is widely distributed and the computing platform is also distributed, e.g. data is collected in separate data center locations, the most efficient architecture for running Hadoop jobs over the entire data set becomes non-trivial. In this paper, we show the traditional single-cluster MapReduce setup may not be suitable for situations when data and compute resources are widely distributed. Further, we provide recommendations for alternative (and even hierarchical) distributed MapReduce setup configurations, depending on the workload and data set.",
"Cloud computing emerges as a new computing paradigm which aims to provide reliable, customized and QoS guaranteed computing dynamic environments for end-users. This paper reviews recent advances of Cloud computing, identifies the concepts and characters of scientific Clouds, and finally presents an example of scientific Cloud for data centers",
"Millions of computer owners worldwide contribute computer time to the search for extraterrestrial intelligence, performing the largest computation ever.",
"Current cloud infrastructures are important for their ease of use and performance. However, they suffer from several shortcomings. The main problem is inefficient data mobility due to the centralization of cloud resources. We believe such clouds are highly unsuited for dispersed-data-intensive applications, where the data may be spread at multiple geographical locations (e.g., distributed user blogs). Instead, we propose a new cloud model called Nebula: a dispersed, context-aware, and cost-effective cloud. We provide experimental evidence for the need for Nebulas using a distributed blog analysis application followed by the system architecture and components of our system.",
"MapReduce offers an ease-of-use programming paradigm for processing large data sets, making it an attractive model for distributed volunteer computing systems. However, unlike on dedicated resources, where MapReduce has mostly been deployed, such volunteer computing systems have significantly higher rates of node unavailability. Furthermore, nodes are not fully controlled by the MapReduce framework. Consequently, we found the data and task replication scheme adopted by existing MapReduce implementations woefully inadequate for resources with high unavailability. To address this, we propose MOON, short for MapReduce On Opportunistic eNvironments. MOON extends Hadoop, an open-source implementation of MapReduce, with adaptive task and data scheduling algorithms in order to offer reliable MapReduce services on a hybrid resource architecture, where volunteer computing systems are supplemented by a small set of dedicated nodes. Our tests on an emulated volunteer computing system, which uses a 60-node cluster where each node possesses a similar hardware configuration to a typical computer in a student lab, demonstrate that MOON can deliver a three-fold performance improvement to Hadoop in volatile, volunteer computing environments.",
"The design, implementation, and performance of the Condor scheduling system, which operates in a workstation environment, are presented. The system aims to maximize the utilization of workstations with as little interference as possible between the jobs it schedules and the activities of the people who own workstations. It identifies idle workstations and schedules background jobs on them. When the owner of a workstation resumes activity at a station, Condor checkpoints the remote job running on the station and transfers it to another workstation. The system guarantees that the job will eventually complete, and that very little, if any, work will be performed more than once. A performance profile of the system is presented that is based on data accumulated from 23 stations during one month. >",
"The success of grid computing in open environments like the Internet is highly dependent on the adoption of mechanisms to detect failures and malicious sabotage attempts. It is also required to maintain a trust management system that permits one to distinguish the trustable from the non-trustable participants in a global computation. Without these mechanisms, users with data-critical applications will never rely on desktop grids, and will rather prefer to support higher costs to run their computations in closed and secure computing systems. This paper discusses the topics of sabotage-tolerance and trust management. After reviewing the state-of-the-art, we present two novel techniques: a mechanism for sabotage detection and a protocol for distributed trust management. The proposed techniques are targeted at the paradigm of volunteer-based computing commonly used on desktop grids.",
"A grid computing system is a geographically distributed environment with autonomous domains that share resources amongst themselves. One primary goal of such a grid environment is to encourage domain-to-domain interactions and increase the confidence of domains to use or share resources: (a) without losing control over their own resources; and (b) ensuring confidentiality for others. To achieve this, the \"trust\" notion needs to be addressed so that trustworthiness makes such geographically distributed systems become more attractive and reliable for day-to-day use. In this paper we view trust in two steps: (a) verifying the identity of an entity and what that identity is authorized to do; and (b) monitoring and managing the behavior of the entity and building a trust level based on that behavior The identity trust has been the focus of many researchers, but unfortunately the behavior trust has not attracted much attention. We present a formal definition of behavior trust and reputation and discuss a behavior trust management architecture that models the process of evolving and managing of behavior trust in grid computing systems.",
"",
"Peer-to-peer file-sharing networks are currently receiving much attention as a means of sharing and distributing information. However, as recent experience shows, the anonymous, open nature of these networks offers an almost ideal environment for the spread of self-replicating inauthentic files.We describe an algorithm to decrease the number of downloads of inauthentic files in a peer-to-peer file-sharing network that assigns each peer a unique global trust value, based on the peer's history of uploads. We present a distributed and secure method to compute global trust values, based on Power iteration. By having peers use these global trust values to choose the peers from whom they download, the network effectively identifies malicious peers and isolates them from the network.In simulations, this reputation system, called EigenTrust, has been shown to significantly decrease the number of inauthentic files on the network, even under a variety of conditions where malicious peers cooperate in an attempt to deliberately subvert the system.",
"Resource management is a central part of a Grid computing system. In a large-scale wide-area system such as the Grid, security is a prime concern. One approach is to be conservative and implement techniques such as sandboxing, encryption, and other access control mechanisms on all elements of the Grid. However, the overhead caused by such a design may negate the advantages of Grid computing. This study examines the integration of the notion of \"trust\" into resource management such that the allocation process is aware of the security implications. We present a formal definition of trust and discuss a model for incorporating trust into Grid systems. As an example application of the ideas proposed, a resource management algorithm that incorporates trust is presented. The performance of the algorithm is examined via simulations."
]
}
|
1112.2020
|
1593517648
|
With the increasing prevalence of location-aware devices, trajectory data has been generated and collected in various application domains. Trajectory data carries rich information that is useful for many data analysis tasks. Yet, improper publishing and use of trajectory data could jeopardize individual privacy. However, it has been shown that existing privacy-preserving trajectory data publishing methods derived from partition-based privacy models, for example k-anonymity, are unable to provide sufficient privacy protection. In this paper, motivated by the data publishing scenario at the Societe de transport de Montreal (STM), the public transit agency in Montreal area, we study the problem of publishing trajectory data under the rigorous differential privacy model. We propose an efficient data-dependent yet differentially private sanitization algorithm, which is applicable to different types of trajectory data. The efficiency of our approach comes from adaptively narrowing down the output domain by building a noisy prefix tree based on the underlying data. Moreover, as a post-processing step, we make use of the inherent constraints of a prefix tree to conduct constrained inferences, which lead to better utility. This is the first paper to introduce a practical solution for publishing large volume of trajectory data under differential privacy. We examine the utility of sanitized data in terms of count queries and frequent sequential pattern mining. Extensive experiments on real-life trajectory data from the STM demonstrate that our approach maintains high utility and is scalable to large trajectory datasets.
|
@cite_4 present the problem of @math -anonymizing a trajectory database with respect to a sensitive event database. The goal is to ensure that every event is shared by at least @math users. Specifically, they develop a new generalization mechanism known as local enlargement, which achieves better utility than conventional hierarchy- or partition-based generalization. @cite_35 consider the emerging trajectory data publishing scenario in which users' sensitive attributes are published together with trajectory data, and consequently propose the @math -privacy model that thwarts both identity linkages on trajectory data and attribute linkages via trajectory data. They develop a generic solution for various data utility metrics by use of . All these approaches @cite_9 , @cite_23 , @cite_15 , @cite_4 , @cite_35 , @cite_24 are built on partition-based privacy models, and therefore are not able to provide sufficient privacy protection for trajectory data. The major contribution of our paper is the use of differential privacy, which provides significantly stronger privacy guarantees.
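For intuition, the coverage requirement in @cite_4 can be stated as a simple predicate: every sensitive event must be covered by at least k (possibly generalized) user locations. The sketch below is only an illustration of that condition; the data layout and the covers() test are assumptions, not the authors' algorithm, which instead searches for a minimal local enlargement that satisfies the condition.

def satisfies_k_coverage(events, users, covers, k):
    # events: iterable of sensitive event records (e.g., (location, time) pairs)
    # users:  iterable of (generalized) user location records
    # covers: assumed predicate covers(user, event) -> bool, testing whether the
    #         user's enlarged location region contains the event
    # k:      anonymity parameter
    for event in events:
        if sum(1 for user in users if covers(user, event)) < k:
            return False
    return True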
|
{
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_9",
"@cite_24",
"@cite_23",
"@cite_15"
],
"mid": [
"",
"2025483242",
"2010362518",
"2133819144",
"2096558825",
"2100921307"
],
"abstract": [
"",
"This article examines a new problem of k-anonymity with respect to a reference dataset in privacy-aware location data publishing: given a user dataset and a sensitive event dataset, we want to generalize the user dataset such that by joining it with the event dataset through location, each event is covered by at least k users. Existing k-anonymity algorithms generalize every k user locations to the same vague value, regardless of the events. Therefore, they tend to overprotect against the privacy compromise and make the published data less useful. In this article, we propose a new generalization paradigm called local enlargement, as opposed to conventional hierarchy- or partition-based generalization. Local enlargement guarantees that user locations are enlarged just enough to cover all events k times, and thus maximize the usefulness of the published data. We develop an O(Hn)-approximate algorithm under the local enlargement paradigm, where n is the maximum number of events a user could possibly cover and Hn is the Harmonic number of n. With strong pruning techniques and mathematical analysis, we show that it runs efficiently and that the generalized user locations are up to several orders of magnitude smaller than those by the existing algorithms. In addition, it is robust enough to protect against various privacy attacks.",
"Preserving individual privacy when publishing data is a problem that is receiving increasing attention. According to the fc-anonymity principle, each release of data must be such that each individual is indistinguishable from at least k - 1 other individuals. In this paper we study the problem of anonymity preserving data publishing in moving objects databases. We propose a novel concept of k-anonymity based on co-localization that exploits the inherent uncertainty of the moving object's whereabouts. Due to sampling and positioning systems (e.g., GPS) imprecision, the trajectory of a moving object is no longer a polyline in a three-dimensional space, instead it is a cylindrical volume, where its radius delta represents the possible location imprecision: we know that the trajectory of the moving object is within this cylinder, but we do not know exactly where. If another object moves within the same cylinder they are indistinguishable from each other. This leads to the definition of (k,delta) -anonymity for moving objects databases. We first characterize the (k, delta)-anonymity problem and discuss techniques to solve it. Then we focus on the most promising technique by the point of view of information preservation, namely space translation. We develop a suitable measure of the information distortion introduced by space translation, and we prove that the problem of achieving (k,delta) -anonymity by space translation with minimum distortion is NP-hard. Faced with the hardness of our problem we propose a greedy algorithm based on clustering and enhanced with ad hoc pre-processing and outlier removal techniques. The resulting method, named NWA (Never Walk .Alone), is empirically evaluated in terms of data quality and efficiency. Data quality is assessed both by means of objective measures of information distortion, and by comparing the results of the same spatio-temporal range queries executed on the original database and on the (k, delta)-anonymized one. Experimental results show that for a wide range of values of delta and k, the relative error introduced is kept low, confirming that NWA produces high quality (k, delta)-anonymized data.",
"In recent years, spatio-temporal and moving objects databases have gained considerable interest, due to the diffusion of mobile devices (e.g., mobile phones, RFID devices and GPS devices) and of new applications, where the discovery of consumable, concise, and applicable knowledge is the key step. Clearly, in these applications privacy is a concern, since models extracted from this kind of data can reveal the behavior of group of individuals, thus compromising their privacy. Movement data present a new challenge for the privacy-preserving data mining community because of their spatial and temporal characteristics. In this position paper we briefly present an approach for the generalization of movement data that can be adopted for obtaining k-anonymity in spatio-temporal datasets; specifically, it can be used to realize a framework for publishing of spatio-temporal data while preserving privacy. We ran a preliminary set of experiments on a real-world trajectory dataset, demonstrating that this method of generalization of trajectories preserves the clustering analysis results.",
"We study the problem of protecting privacy in the publication of location sequences. Consider a database of trajectories, corresponding to movements of people, captured by their transactions when they use credit or RFID debit cards. We show that, if such trajectories are published exactly (by only hiding the identities of persons that followed them), there is a high risk of privacy breach by adversaries who hold partial information about them (e.g., shop owners). In particular, we show that one can use partial trajectory knowledge as a quasi-identifier for the remaining locations in the sequence. We device a data suppression technique, which prevents this type of breach, while keeping the posted data as accurate as possible.",
"Moving object databases (MOD) have gained much interest in recent years due to the advances in mobile communications and positioning technologies. Study of MOD can reveal useful information (e.g., traffic patterns and congestion trends) that can be used in applications for the common benefit. In order to mine and or analyze the data, MOD must be published, which can pose a threat to the location privacy of a user. Indeed, based on prior knowledge of a user's location at several time points, an attacker can potentially associate that user to a specific moving object (MOB) in the published database and learn her position information at other time points. In this paper, we study the problem of privacy-preserving publishing of moving object database. Unlike in microdata, we argue that in MOD, there does not exist a fixed set of quasi-identifier (QID) attributes for all the MOBs. Consequently the anonymization groups of MOBs (i.e., the sets of other MOBs within which to hide) may not be disjoint. Thus, there may exist MOBs that can be identified explicitly by combining different anonymization groups. We illustrate the pitfalls of simple adaptations of classical k-anonymity and develop a notion which we prove is robust against privacy attacks. We propose two approaches, namely extreme-union and symmetric anonymization, to build anonymization groups that provably satisfy our proposed k-anonymity requirement, as well as yield low information loss. We ran an extensive set of experiments on large real-world and synthetic datasets of vehicular traffic. Our results demonstrate the effectiveness of our approach."
]
}
|
1112.2020
|
1593517648
|
With the increasing prevalence of location-aware devices, trajectory data has been generated and collected in various application domains. Trajectory data carries rich information that is useful for many data analysis tasks. Yet, improper publishing and use of trajectory data could jeopardize individual privacy. However, it has been shown that existing privacy-preserving trajectory data publishing methods derived from partition-based privacy models, for example k-anonymity, are unable to provide sufficient privacy protection. In this paper, motivated by the data publishing scenario at the Societe de transport de Montreal (STM), the public transit agency in Montreal area, we study the problem of publishing trajectory data under the rigorous differential privacy model. We propose an efficient data-dependent yet differentially private sanitization algorithm, which is applicable to different types of trajectory data. The efficiency of our approach comes from adaptively narrowing down the output domain by building a noisy prefix tree based on the underlying data. Moreover, as a post-processing step, we make use of the inherent constraints of a prefix tree to conduct constrained inferences, which lead to better utility. This is the first paper to introduce a practical solution for publishing large volume of trajectory data under differential privacy. We examine the utility of sanitized data in terms of count queries and frequent sequential pattern mining. Extensive experiments on real-life trajectory data from the STM demonstrate that our approach maintains high utility and is scalable to large trajectory datasets.
|
In the last few years, differential privacy has been employed in various applications. Currently most of the research on differential privacy concentrates on the with the goal of either reducing the magnitude of added noise @cite_5 , @cite_20 , @cite_13 , @cite_17 or releasing certain data mining results @cite_14 @cite_38 @cite_27 @cite_18 @cite_7 . Dwork @cite_2 provides an overview of recent works on differential privacy.
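For reference, the baseline that these noise-reduction works improve upon is the standard Laplace mechanism: a numeric query f over a database D is answered as

$$\tilde{f}(D) = f(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\epsilon}\right), \qquad \Delta f = \max_{D \simeq D'} \lVert f(D) - f(D') \rVert_1,$$

where D ≃ D' ranges over neighboring databases and ε is the privacy budget. This is the textbook formulation, included here only for context; it is not a description of any specific cited mechanism.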
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"",
"2123733729",
"",
"",
"2077217970",
"2057576485",
"2129428954",
"2099259603",
"2132768130"
],
"abstract": [
"",
"",
"The contingency table is a work horse of official statistics, the format of reported data for the US Census, Bureau of Labor Statistics, and the Internal Revenue Service. In many settings such as these privacy is not only ethically mandated, but frequently legally as well. Consequently there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release. However, all current techniques for reporting contingency tables fall short on at leas one of privacy, accuracy, and consistency (among multiple released tables). We propose a solution that provides strong guarantees for all three desiderata simultaneously. Our approach can be viewed as a special case of a more general approach for producing synthetic data: Any privacy-preserving mechanism for contingency table release begins with raw data and produces a (possibly inconsistent) privacy-preserving set of marginals. From these tables alone-and hence without weakening privacy--we will find and output the \"nearest\" consistent set of marginals. Interestingly, this set is no farther than the tables of the raw data, and consequently the additional error introduced by the imposition of consistency is no more than the error introduced by the privacy mechanism itself. The privacy mechanism of [20] gives the strongest known privacy guarantees, with very little error. Combined with the techniques of the current paper, we therefore obtain excellent privacy, accuracy, and consistency among the tables. Moreover, our techniques are surprisingly efficient. Our techniques apply equally well to the logical cousin of the contingency table, the OLAP cube.",
"",
"",
"In the information realm, loss of privacy is usually associated with failure to control access to information, to control the flow of information, or to control the purposes for which information is employed. Differential privacy arose in a context in which ensuring privacy is a challenge even if all these control problems are solved: privacy-preserving statistical analysis of data. The problem of statistical disclosure control – revealing accurate statistics about a set of respondents while preserving the privacy of individuals – has a venerable history, with an extensive literature spanning statistics, theoretical computer science, security, databases, and cryptography (see, for example, the excellent survey [1], the discussion of related work in [2] and the Journal of Official Statistics 9 (2), dedicated to confidentiality and disclosure control). This long history is a testament the importance of the problem. Statistical databases can be of enormous social value; they are used for apportioning resources, evaluating medical therapies, understanding the spread of disease, improving economic utility, and informing us about ourselves as a species. The data may be obtained in diverse ways. Some data, such as census, tax, and other sorts of official data, are compelled; others are collected opportunistically, for example, from traffic on the internet, transactions on Amazon, and search engine query logs; other data are provided altruistically, by respondents who hope that sharing their information will help others to avoid a specific misfortune, or more generally, to increase the public good. Altruistic data donors are typically promised their individual data will be kept confidential – in short, they are promised “privacy.” Similarly, medical data and legally compelled data, such as census data, tax return data, have legal privacy mandates. In our view, ethics demand that opportunistically obtained data should be treated no differently, especially when there is no reasonable alternative to engaging in the actions that generate the data in question. The problems remain: even if data encryption, key management, access control, and the motives of the data curator",
"We show that it is possible to significantly improve the accuracy of a general class of histogram queries while satisfying differential privacy. Our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. In a post-processing phase, we compute the consistent input most likely to have produced the noisy output. The final output is differentially-private and consistent, but in addition, it is often much more accurate. We show, both theoretically and experimentally, that these techniques can be used for estimating the degree sequence of a graph very precisely, and for computing a histogram that can support arbitrary range queries accurately.",
"We define a new interactive differentially private mechanism --- the median mechanism --- for answering arbitrary predicate queries that arrive online. Given fixed accuracy and privacy constraints, this mechanism can answer exponentially more queries than the previously best known interactive privacy mechanism (the Laplace mechanism, which independently perturbs each query result). With respect to the number of queries, our guarantee is close to the best possible, even for non-interactive privacy mechanisms. Conceptually, the median mechanism is the first privacy mechanism capable of identifying and exploiting correlations among queries in an interactive setting. We also give an efficient implementation of the median mechanism, with running time polynomial in the number of queries, the database size, and the domain size. This efficient implementation guarantees privacy for all input databases, and accurate query results for almost all input distributions. The dependence of the privacy on the number of queries in this mechanism improves over that of the best previously known efficient mechanism by a super-polynomial factor, even in the non-interactive setting.",
"Differential privacy is a robust privacy standard that has been successfully applied to a range of data analysis tasks. But despite much recent work, optimal strategies for answering a collection of related queries are not known. We propose the matrix mechanism, a new algorithm for answering a workload of predicate counting queries. Given a workload, the mechanism requests answers to a different set of queries, called a query strategy, which are answered using the standard Laplace mechanism. Noisy answers to the workload queries are then derived from the noisy answers to the strategy queries. This two stage process can result in a more complex correlated noise distribution that preserves differential privacy but increases accuracy. We provide a formal analysis of the error of query answers produced by the mechanism and investigate the problem of computing the optimal query strategy in support of a given workload. We show this problem can be formulated as a rank-constrained semidefinite program. Finally, we analyze two seemingly distinct techniques, whose similar behavior is explained by viewing them as instances of the matrix mechanism.",
"Prior work in differential privacy has produced techniques for answering aggregate queries over sensitive data in a privacy-preserving way. These techniques achieve privacy by adding noise to the query answers. Their objective is typically to minimize absolute errors while satisfying differential privacy. Thus, query answers are injected with noise whose scale is independent of whether the answers are large or small. The noisy results for queries whose true answers are small therefore tend to be dominated by noise, which leads to inferior data utility. This paper introduces iReduct, a differentially private algorithm for computing answers with reduced relative error. The basic idea of iReduct is to inject different amounts of noise to different query results, so that smaller (larger) values are more likely to be injected with less (more) noise. The algorithm is based on a novel resampling technique that employs correlated noise to improve data utility. Performance is evaluated on an instantiation of iReduct that generates marginals, i.e., projections of multi-dimensional histograms onto subsets of their attributes. Experiments on real data demonstrate the effectiveness of our solution."
]
}
|
1112.2020
|
1593517648
|
With the increasing prevalence of location-aware devices, trajectory data has been generated and collected in various application domains. Trajectory data carries rich information that is useful for many data analysis tasks. Yet, improper publishing and use of trajectory data could jeopardize individual privacy. However, it has been shown that existing privacy-preserving trajectory data publishing methods derived from partition-based privacy models, for example k-anonymity, are unable to provide sufficient privacy protection. In this paper, motivated by the data publishing scenario at the Societe de transport de Montreal (STM), the public transit agency in Montreal area, we study the problem of publishing trajectory data under the rigorous differential privacy model. We propose an efficient data-dependent yet differentially private sanitization algorithm, which is applicable to different types of trajectory data. The efficiency of our approach comes from adaptively narrowing down the output domain by building a noisy prefix tree based on the underlying data. Moreover, as a post-processing step, we make use of the inherent constraints of a prefix tree to conduct constrained inferences, which lead to better utility. This is the first paper to introduce a practical solution for publishing large volume of trajectory data under differential privacy. We examine the utility of sanitized data in terms of count queries and frequent sequential pattern mining. Extensive experiments on real-life trajectory data from the STM demonstrate that our approach maintains high utility and is scalable to large trajectory datasets.
|
Two very recent papers @cite_29 , @cite_39 point out that data-dependent approaches are more efficient and more effective for generating a differentially private release. @cite_29 propose a generalization-based sanitization algorithm for relational data with the goal of classification analysis. @cite_39 propose a probabilistic top-down partitioning algorithm for set-valued data. Both approaches @cite_29 , @cite_39 make use of taxonomy trees to adaptively narrow down the output domain. However, due to the reasons mentioned in , they cannot be applied to trajectory data, in which is a major concern.
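To make the data-dependent idea concrete, the sketch below expands a prefix tree over location sequences level by level, perturbs each candidate count with Laplace noise, and prunes nodes whose noisy count falls below a threshold, thereby adaptively narrowing the output domain. All names, the per-level budget split, and the threshold rule are illustrative assumptions; this is not the algorithm specified in @cite_29 , @cite_39 , or in this paper.

import numpy as np

def noisy_prefix_tree(sequences, alphabet, eps_per_level, max_depth, threshold):
    # sequences: list of location sequences, each a tuple over `alphabet`
    # eps_per_level: portion of the privacy budget spent at each tree level
    # threshold: noisy-count cutoff below which a candidate node is pruned
    tree = {(): float(len(sequences))}   # root stores the total number of sequences
    frontier = [()]
    for depth in range(max_depth):
        next_frontier = []
        for prefix in frontier:
            for loc in alphabet:
                child = prefix + (loc,)
                true_count = sum(1 for s in sequences if tuple(s[:depth + 1]) == child)
                noisy = true_count + np.random.laplace(scale=1.0 / eps_per_level)
                if noisy >= threshold:   # keep and expand only promising prefixes
                    tree[child] = noisy
                    next_frontier.append(child)
        frontier = next_frontier
    return tree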
|
{
"cite_N": [
"@cite_29",
"@cite_39"
],
"mid": [
"2005107218",
"2293703278"
],
"abstract": [
"Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among the existing privacy models, ∈-differential privacy provides one of the strongest privacy guarantees and has no assumptions about an adversary's background knowledge. Most of the existing solutions that ensure ∈-differential privacy are based on an interactive model, where the data miner is only allowed to pose aggregate queries to the database. In this paper, we propose the first anonymization algorithm for the non-interactive setting based on the generalization technique. The proposed solution first probabilistically generalizes the raw data and then adds noise to guarantee ∈-differential privacy. As a sample application, we show that the anonymized data can be used effectively to build a decision tree induction classifier. Experimental results demonstrate that the proposed non-interactive anonymization algorithm is scalable and performs better than the existing solutions for classification analysis.",
"Set-valued data provides enormous opportunities for various data mining tasks. In this paper, we study the problem of publishing set-valued data for data mining tasks under the rigorous differential privacy model. All existing data publishing methods for set-valued data are based on partitionbased privacy models, for example k-anonymity, which are vulnerable to privacy attacks based on background knowledge. In contrast, differential privacy provides strong privacy guarantees independent of an adversary’s background knowledge, computational power or subsequent behavior. Existing data publishing approaches for differential privacy, however, are not adequate in terms of both utility and scalability in the context of set-valued data due to its high dimensionality. We demonstrate that set-valued data could be efficiently released under differential privacy with guaranteed utility with the help of context-free taxonomy trees. We propose a probabilistic top-down partitioning algorithm to generate a differentially private release, which scales linearly with the input data size. We also discuss the applicability of our idea to the context of relational data. We prove that our result is (ǫ,δ)-useful for the class of counting queries, the foundation of many data mining tasks. We show that our approach maintains high utility for counting queries and frequent itemset mining and scales to large datasets through extensive experiments on real-life set-valued datasets."
]
}
|
1112.2188
|
1934586604
|
In a social network, agents are intelligent and have the capability to make decisions to maximize their utilities. They can either make wise decisions by taking advantages of other agents’ experiences through learning, or make decisions earlier to avoid competitions from huge crowds. Both these two effects, social learning and negative network externality, play important roles in the decision process of an agent. While there are existing works on either social learning or negative network externality, a general study on considering both these two contradictory effects is still limited. We find that the Chinese restaurant process, a popular random process, provides a well-defined structure to model the decision process of an agent under these two effects. By introducing the strategic behavior into the non-strategic Chinese restaurant process, in Part I of this two-part paper, we propose a new game, called Chinese Restaurant Game, to formulate the social learning problem with negative network externality. Through analyzing the proposed Chinese restaurant game, we derive the optimal strategy of each agent and provide a recursive method to achieve the optimal strategy. How social learning and negative network externality influence each other under various settings is also studied through simulations.
|
A strategic game model closely related to our work is the global game @cite_19 @cite_22 . In the global game, all agents, with limited knowledge of the system state and of the information held by other agents, make decisions simultaneously. An agent's reward in the game is determined by the system state and the number of agents making the same decision as him. The influence may be positive or negative depending on the type of network externality. An important characteristic of the global game is that the equilibrium is unique, which simplifies the discussion of the possible outcomes of the game. It has drawn great attention in various research fields, such as financial crises @cite_14 , sensor networks @cite_3 and cognitive radio networks @cite_18 . Since all players in the global game make decisions simultaneously, there is no learning involved in the global game.
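A standard way to write down the structure just described (the notation here is generic and not taken from the cited papers): the unknown state is θ, and each agent i observes a noisy private signal

$$x_i = \theta + \sigma \varepsilon_i,$$

before simultaneously choosing a binary action a_i ∈ {0, 1}. The payoff from a_i = 1 is π(θ, l), where l is the fraction of agents choosing the same action; ∂π/∂l > 0 corresponds to positive network externality and ∂π/∂l < 0 to negative. Under standard monotonicity (strategic complementarity) conditions and vanishing noise σ, iterated elimination of dominated strategies typically selects a unique threshold equilibrium in which a_i = 1 exactly when x_i exceeds a cutoff x*.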
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_3",
"@cite_19"
],
"mid": [
"2161534819",
"2106799897",
"2156625082",
"2091845511",
"2107031016"
],
"abstract": [
"This paper studies decentralized dynamic spectrum access using the theory of multivariate global games. We consider a network of cognitive radios (CRs) where each CR obtains noisy multivariate measurements of the quality of several logical channels and needs to decide which channel to access. Assuming the CRs are rational devices, each CR determines which channel to access, based on its expected throughput and Bayesian estimate of the intention of other CRs. We formulate conditions for which the Bayesian Nash equilibrium (BNE) of the resulting global game is monotonically increasing in the quality of the logical channel. This leads to a simple characterization of the competitive optimal behavior of the system as a function of the prior probability distribution of spectrum hole occupancy, channel quality and observation noise. In obtaining the characterization of the BNE, we extend recent results in univariate global games to the multivariate case.",
"Many argue that crises -- such as currency attacks, bank runs and riots -- can be described as times of non-fundamental volatility. We argue that crises are also times when endogenous sources of information are closely monitored and thus an important part of the phenomena. We study the role of endogenous information in generating volatility by introducing a financial market in a coordination game where agents have heterogeneous information about the fundamentals. The equilibrium price aggregates information without restoring common knowledge. In contrast to the case with exogenous information, we find that uniqueness may not be obtained as a perturbation from common knowledge: multiplicity is ensured when individuals observe fundamentals with small idiosyncratic noise. Multiplicity may emerge also in the financial price. When the equilibrium is unique, it becomes more sensitive to non-fundamental shocks as private noise is reduced.",
"Global games are games of incomplete information whose type space is determined by the players each observing a noisy signal of the underlying state. With strategic complementarities, global games often have a unique, dominance solvable equilibrium, allowing analysis of a number of economic models of coordination failure. For symmetric binary action global games, equilibrium strategies in the limit (as noise becomes negligible) are simple to characterize in terms of 'diffuse' beliefs over the actions of others. We describe a number of economic applications that fall in this category. We also explore the distinctive roles of public and private information in this setting, review results for general global games, discuss the relationship between global games and a literature on higher order beliefs in game theory and describe the relationship to local interaction games and dynamic games with payoff shocks.",
"This paper considers two methodologies for decentralized sensor activation in wireless sensor networks for energy-efficient monitoring. First, decentralized activation in wireless sensor networks is investigated using the theory of global games. Given a large number of sensors which can operate in either an energy-efficient ''low-resolution'' monitoring mode, or a more costly ''high-resolution'' mode, the problem of computing and executing a strategy for mode selection is formulated as a global game with diverse utilities and noise conditions. We formulate Bayes-Nash equilibrium conditions for which a simple threshold strategy is competitively optimal for each sensor, and propose a scheme for decentralized threshold computation. The second class of results we consider is in a non-Bayesian context where sensors deploy simple adaptive filtering algorithms and the global behavior converges to the set of correlated equilibria.",
"A global game is an incomplete information game where the actual payoff structure is determined by a random draw from a given class of games and where each player makes a noisy observation of the selected game. For 2 x 2 games, it is shown that, when the noise vanishes, iterated elimination of dominated strategies in the global game forces the players to conform to J. C. Harsanyi and R. Selten's risk dominance criterion. Copyright 1993 by The Econometric Society."
]
}
|
1112.2188
|
1934586604
|
In a social network, agents are intelligent and have the capability to make decisions to maximize their utilities. They can either make wise decisions by taking advantages of other agents’ experiences through learning, or make decisions earlier to avoid competitions from huge crowds. Both these two effects, social learning and negative network externality, play important roles in the decision process of an agent. While there are existing works on either social learning or negative network externality, a general study on considering both these two contradictory effects is still limited. We find that the Chinese restaurant process, a popular random process, provides a well-defined structure to model the decision process of an agent under these two effects. By introducing the strategic behavior into the non-strategic Chinese restaurant process, in Part I of this two-part paper, we propose a new game, called Chinese Restaurant Game, to formulate the social learning problem with negative network externality. Through analyzing the proposed Chinese restaurant game, we derive the optimal strategy of each agent and provide a recursive method to achieve the optimal strategy. How social learning and negative network externality influence each other under various settings is also studied through simulations.
|
In recent years, several works @cite_26 @cite_24 @cite_13 @cite_0 @cite_17 have made efforts to introduce learning and signaling into the global game. Dasgupta's first attempt investigated a binary investment model in which a project succeeds only when a sufficiently large number of agents invest in it @cite_26 . Dasgupta then studied a two-period dynamic global game in which agents have the option to delay their decisions in order to obtain better private information about the unknown state @cite_17 .
|
{
"cite_N": [
"@cite_26",
"@cite_24",
"@cite_0",
"@cite_13",
"@cite_17"
],
"mid": [
"1535719876",
"1787709854",
"2024712778",
"2070694319",
"2162853770"
],
"abstract": [
"We incorporate strategic complementarities into a multi-agent sequential choice model with observable actions and private information. In this framework agents are concerned with learning from predecessors, signalling to successors, and coordinating their actions with those of others. Coordination problems have hitherto been studied using static coordination games which do not allow for learning behavior. Social learning has been examined using games of sequential action under uncertainty, but in the absence of strategic complementarities (herding models). Our model captures the strategic behavior of static coordination games, the social learning aspect of herding models, and the signalling behavior missing from both of these classes of models in one unified framework. In sequential action problems with incomplete information, agents exhibit herd behavior if later decision makers assign too little importance to their private information, choosing instead to imitate their predecessors. In our setting we demonstrate that agents may exhibit either strong herd behavior (complete imitation) or weak herd behavior (overoptimism) and characterize the informational requirements for these distinct outcomes. We also characterize the informational requirements to ensure the possibility of coordination upon a risky but socially optimal action in a game with finite but unboundedly large numbers of players.",
"This paper introduces signaling in a global game so as to examine the informational role of policy in coordination environments such as currency crises and bank runs. While exogenous asymmetric information has been shown to select a unique equilibrium, we show that the endogenous information generated by policy interventions leads to multiple equilibria. The policy maker is thus trapped into a position in which self-fulfilling expectations dictate not only the coordination outcome but also the optimal policy. This result does not rely on the freedom to choose out-of-equilibrium beliefs, nor on the policy being a public signal; it may obtain even if the policy is observed with idiosyncratic noise.",
"Recently, it has been claimed that full-information multiple equilibria in games with strategic complementarities are not robust, because generalizing to allow slightly heterogeneous information implies uniqueness. This paper argues that this \"global games\" uniqueness result is itself not robust. If we generalize by allowing most agents to observe a few previous actions before choosing, instead of forcing players to move exactly simultaneously, then multiplicity of outcomes is restored. Only a small sample of observations is needed to make our herding equilibrium behave like a full-information sunspot equilibrium instead of a global games equilibrium.",
"Global games of regime change-coordination games of incomplete information in which a status quo is abandoned once a sufficiently large fraction of agents attack it-have been used to study crises phenomena such as currency attacks, bank runs, debt crises, and political change. We extend the static benchmark examined in the literature by allowing agents to take actions in many periods and to learn about the underlying fundamentals over time. We first provide a simple recursive algorithm for the characterization of monotone equilibria. We then show how the interaction of the knowledge that the regime survived past attacks with the arrival of information over time, or with changes in fundamentals, leads to interesting equilibrium properties. First, multiplicity may obtain under the same conditions on exogenous information that guarantee uniqueness in the static benchmark. Second, fundamentals may predict the eventual fate of the regime but not the timing or the number of attacks. Finally, equilibrium dynamics can alternate between phases of tranquility-where no attack is possible-and phases of distress-where a large attack can occur-even without changes in fundamentals. Copyright The Econometric Society 2007.",
"What is the effect of offering agents an option to delay their choices in a global coordination game? We address this question by considering a canonical binary action global game, and allowing players to delay their irreversible decisions. Those that delay have access to accurate private information at the second stage, but receive lower payoffs. We show that, as noise vanishes, as long as the benefit to taking the risky action early is greater than the benefit of taking the risky action late, the introduction of the option to delay reduces the incidence of coordination failure in equilibrium relative to the standard case where all agents must choose their actions at the same time. We outline the welfare implications of this finding, and probe the robustness of our results from a variety of angles."
]
}
|
1112.2188
|
1934586604
|
In a social network, agents are intelligent and have the capability to make decisions to maximize their utilities. They can either make wise decisions by taking advantages of other agents’ experiences through learning, or make decisions earlier to avoid competitions from huge crowds. Both these two effects, social learning and negative network externality, play important roles in the decision process of an agent. While there are existing works on either social learning or negative network externality, a general study on considering both these two contradictory effects is still limited. We find that the Chinese restaurant process, a popular random process, provides a well-defined structure to model the decision process of an agent under these two effects. By introducing the strategic behavior into the non-strategic Chinese restaurant process, in Part I of this two-part paper, we propose a new game, called Chinese Restaurant Game, to formulate the social learning problem with negative network externality. Through analyzing the proposed Chinese restaurant game, we derive the optimal strategy of each agent and provide a recursive method to achieve the optimal strategy. How social learning and negative network externality influence each other under various settings is also studied through simulations.
|
Angeletos studied a specific dynamic global game called regime change @cite_24 @cite_13 . In the regime-change game, each agent may launch an attack on the status quo, i.e., the current political state of the society. When the aggregate attack is large enough, the status quo is abandoned and all attackers receive positive payoffs; if the status quo survives, the attackers receive negative payoffs. Angeletos first studied a signaling model with signals revealed at the beginning of the game @cite_24 , and then proposed a multi-stage dynamic game to study the learning behavior of agents in the regime-change game @cite_13 .
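The payoff structure sketched above is commonly summarized as follows (generic regime-change notation, included only for illustration): an agent who attacks receives

$$u_i(\text{attack}) = \begin{cases} b > 0, & \text{if } A \geq 1 - \theta \quad (\text{the status quo is abandoned}), \\ -c < 0, & \text{otherwise,} \end{cases}$$

while not attacking yields a payoff of 0, where A is the mass of attacking agents and θ indexes the strength of the status quo.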
|
{
"cite_N": [
"@cite_24",
"@cite_13"
],
"mid": [
"1787709854",
"2070694319"
],
"abstract": [
"This paper introduces signaling in a global game so as to examine the informational role of policy in coordination environments such as currency crises and bank runs. While exogenous asymmetric information has been shown to select a unique equilibrium, we show that the endogenous information generated by policy interventions leads to multiple equilibria. The policy maker is thus trapped into a position in which self-fulfilling expectations dictate not only the coordination outcome but also the optimal policy. This result does not rely on the freedom to choose out-of-equilibrium beliefs, nor on the policy being a public signal; it may obtain even if the policy is observed with idiosyncratic noise.",
"Global games of regime change-coordination games of incomplete information in which a status quo is abandoned once a sufficiently large fraction of agents attack it-have been used to study crises phenomena such as currency attacks, bank runs, debt crises, and political change. We extend the static benchmark examined in the literature by allowing agents to take actions in many periods and to learn about the underlying fundamentals over time. We first provide a simple recursive algorithm for the characterization of monotone equilibria. We then show how the interaction of the knowledge that the regime survived past attacks with the arrival of information over time, or with changes in fundamentals, leads to interesting equilibrium properties. First, multiplicity may obtain under the same conditions on exogenous information that guarantee uniqueness in the static benchmark. Second, fundamentals may predict the eventual fate of the regime but not the timing or the number of attacks. Finally, equilibrium dynamics can alternate between phases of tranquility-where no attack is possible-and phases of distress-where a large attack can occur-even without changes in fundamentals. Copyright The Econometric Society 2007."
]
}
|