Dataset schema (recovered from the viewer header): aid — string, 9–15 chars; mid — string, 7–10 chars; abstract — string, 78–2.56k chars; related_work — string, 92–1.77k chars; ref_abstract — dict.
1308.1224
1813221039
In this work, a benchmark to evaluate the retrieval performance of soundtrack recommendation systems is proposed. Such systems aim at finding songs that are played as background music for a given set of images. The proposed benchmark is based on preference judgments, where relevance is considered a continuous ordinal variable and judgments are collected for pairs of songs with respect to a query (i.e., a set of images). To capture a wide variety of songs and images, we use a large space of possible music genres, different emotions expressed through music, and various query-image themes. The benchmark consists of two types of relevance assessments: (i) judgments obtained from a user study, which serve as a "gold standard" for (ii) relevance judgments gathered through Amazon's Mechanical Turk. We report on an analysis of relevance judgments based on different levels of user agreement and investigate the performance of two state-of-the-art soundtrack recommendation systems using the proposed benchmark.
For music similarity, @cite_34 conclude that the coarse relevance levels usually used in text retrieval are not applicable. Instead, they use a large number of relevance levels created from partially ordered lists. The ground truth in this case is given as a ranked list of document groups, such that documents within one group have the same relevance. The work by Urbano et al. @cite_1 addresses some limitations of this approach by proposing different measures of similarity between groups of retrieved documents. Retrieval effectiveness over such a large number of levels can be measured using the Average Dynamic Recall @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_34", "@cite_1" ], "mid": [ "2109139264", "2102791842", "2102904392" ], "abstract": [ "For the RISM A II collection of musical incipits (short extracts of scores, taken from the beginning), we have established a ground truth based on the opinions of human experts. It contains correctly ranked matches for a set of given queries. These ranked lists contain groups of documents whose ranks were not significantly different. In other words, they are only partially ordered. To make use of the available information for measuring the quality of retrieval results, we introduce the \"average dynamic recall\" (ADR) that averages the recall among a dynamic set of relevant documents, taking into account the fact that the ground truth reliably orders groups of matches, but not always individual matches. Dynamic recall measures how many of the documents that should have appeared before or at a given position in the result list actually have appeared. ADR at a given position averages this measure up to the given position. Our measure was first used at the MIREX 2005 Symbolic Melodic Similarity contest.", "Musical incipits are short extracts of scores, taken from the beginning. The RISM A II collection contains about half a million of them. This large collection size makes a ground truth very interesting for the development of music retrieval methods, but at the same time makes it very dicult to establish one. Human experts cannot be expected to sift through half a million melodies to find the best matches for a given query. For 11 queries, we filtered the collection so that about 50 candidates per query were left, which we then presented to 35 human experts for a final ranking. We present our filtering methods, the experiment design, and the resulting ground truth. To obtain ground truths, we ordered the incipits by the median ranks assigned to them by the human experts. For every incipit, we used the Wilcoxon rank sum test to compare the list of ranks assigned to it with the lists of ranks assigned to its predecessors. As a result, we know which rank dierences are statistically significant, which gives us groups of incipits whose correct ranking we know. This ground truth can be used for evaluating music information retrieval systems. A good retrieval system should order the incipits in a way that the order of the groups we identified is not violated, and it should include all high-ranking melodies that we found. It might, however, find additional good matches since our filtering process is not guaranteed to be perfect.", "G round truths based on partially ordered lists have been used for some years now to evaluate the effectiveness of Music Information Retrieval systems, especially in tasks related to symbolic melodic similarity. However, there has been practically no meta-evaluation to measure or improve the correctness of these evaluations. In this paper we revise the methodology used to generate these ground truths and disclose some issues that need to be addressed. In particular, we focus on the arrangement and aggregation of the relevant results, and show that it is not possible to ensure lists completely consistent. We develop a measure of consistency based on Average Dynamic Recall and propose several alternatives to arrange the lists, all of which prove to be more consistent than the original method. The results of the MIREX 2005 evaluation are revisited using these alternative ground truths." ] }
1308.0309
2106670160
Detecting and visualizing the most relevant changes in an evolving network is an open challenge in several domains. We present a fast algorithm that filters subsets of the strongest nodes and edges representing an evolving weighted graph, and we visualize the result either by creating a movie or by streaming it to an interactive network visualization tool. The algorithm is an approximation of an exponential sliding time-window that scales linearly with the number of interactions. We compare the algorithm against rectangular and exponential sliding time-window methods. Our network filtering algorithm: (i) captures persistent trends in the structure of dynamic weighted networks, (ii) smooths transitions between the snapshots of the dynamic network, and (iii) uses limited memory and processor time. The algorithm is publicly available as open-source software.
Graph drawing @cite_27 @cite_26 is a branch of information visualization that has acquired great importance in complex systems analysis. A good pictorial representation of a graph can highlight its most important structural components, logically partition its different regions, and point out the most central nodes and the edges on which information flows most frequently or quickly. The rapid development of computer-aided visualization tools and the refinement of graph layout algorithms @cite_11 @cite_59 @cite_41 @cite_49 have enabled increasingly high-quality visualizations of large graphs @cite_42 . As a result, many open tools for static graph analysis and visualization have been developed in the last decade. Among the best known are Walrus @cite_33 , Pajek @cite_40 @cite_38 , Visone @cite_57 , GUESS @cite_28 , Networkbench @cite_46 , NodeXL @cite_17 , and Tulip @cite_10 . Comparative studies of these tools have also been published recently @cite_6 .
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_33", "@cite_41", "@cite_28", "@cite_42", "@cite_17", "@cite_6", "@cite_57", "@cite_27", "@cite_40", "@cite_59", "@cite_49", "@cite_46", "@cite_10", "@cite_11" ], "mid": [ "1947595544", "2102664288", "137863291", "", "2113630591", "2147468287", "2135844668", "1641238295", "67385939", "1581875325", "1502432690", "1600348603", "", "", "36570781", "2075220720" ], "abstract": [ "This is an extensively revised and expanded second edition of the successful textbook on social network analysis integrating theory, applications, and network analysis using Pajek. The main structural concepts and their applications in social research are introduced with exercises. Pajek software and data sets are available so readers can learn network analysis through application and case studies. Readers will have the knowledge, skill, and tools to apply social network analysis across the social sciences, from anthropology and sociology to business administration and history. This second edition has a new chapter on random network models, for example, scale-free and small-world networks and Monte Carlo simulation; discussion of multiple relations, islands, and matrix multiplication; new structural indices such as eigenvector centrality, degree distribution, and clustering coefficients; new visualization options that include circular layout for partitions and drawing a network geographically as a 3D surface; and using Unicode labels. This new edition also includes instructions on exporting data from Pajek to R software. It offers updated descriptions and screen shots for working with Pajek (version 2.03).", "From the Publisher: This book is designed to describe fundamental algorithmic techniques for constructing drawings of graphs. Suitable as a book or reference manual, its chapters offer an accurate, accessible reflection of the rapidly expanding field of graph drawing.", "Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. 
The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.", "", "As graph models are applied to more widely varying fields, researchers struggle with tools for exploring and analyzing these structures. We describe GUESS, a novel system for graph exploration that combines an interpreted language with a graphical front end that allows researchers to rapidly prototype and deploy new visualizations. GUESS also contains a novel, interactive interpreter that connects the language and interface in a way that facilities exploratory visualization tasks. Our language, Gython, is a domain-specific embedded language which provides all the advantages of Python with new, graph specific operators, primitives, and shortcuts. We highlight key aspects of the system in the context of a large user survey and specific, real-world, case studies ranging from social and knowledge networks to distributed computer network analysis.", "This is a survey on graph visualization and navigation techniques, as used in information visualization. Graphs appear in numerous applications such as Web browsing, state-transition diagrams, and data structures. The ability to visualize and to navigate in these potentially large, abstract graphs is often a crucial part of an application. Information visualization has specific requirements, which means that this survey approaches the results of traditional graph drawing from a different perspective.", "We present NodeXL, an extendible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007 spreadsheet software. We demonstrate NodeXL data analysis and visualization features with a social media data sample drawn from an enterprise intranet social network. A sequence of NodeXL operations from data import to computation of network statistics and refinement of network visualization through sorting, filtering, and clustering functions is described. These operations reveal sociologically relevant differences in the patterns of interconnection among employee participants in the social media space. The tool and method can be broadly applied.", "Information visualization is a powerful tool for analyzing the dynamic nature of social communities. Using Nation of Neighbors community network as a testbed, we propose five principles of implementing temporal visualizations for social networks and present two research prototypes: NodeXL and TempoVis. Three different states are defined in order to visualize the temporal changes of social networks. 
We designed the prototypes to show the benefits of the proposed ideas by letting users interactively explore temporal changes of social networks.", "Social network analysis is a subdiscipline of the social sciences using graph-theoretic concepts to understand and explain social structure.We describe the main issues in social network analysis. General principles are laid out for visualizing network data in a way that conveys structural information relevant to specific research questions. Based on these innovative graph drawing techniques integrating the analysis and visualization of social networks are introduced.", "", "Pajek (spider, in Slovene) is a program package, for Windows (32 bit), for analysis and visualization of large networks (having thousands of vertices). It is freely available, for noncommercial use, at its home page: http: vlado.fmf.uni-lj.si pub networks pajek", "References.- Technical Foundations.- 1 Introduction.- 2 Graphs and Their Representation.- 3 Graph Planarity and Embeddings.- 4 Graph Drawing Methods.- References.- WilmaScope - A 3D Graph Visualization System.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Pajek - Analysis and Visualization of Large Networks.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Tulip - A Huge Graph Visualization Framework.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Graphviz and Dynagraph - Static and Dynamic Graph Drawing Tools.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- AGD - A Library of Algorithms for Graph Drawing.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- yFiles - Visualization and Automatic Layout of Graphs.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- GDS - A Graph Drawing Server on the Internet.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- BioPath - Exploration and Visualization of Biochemical Pathways.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- DBdraw - Automatic Layout of Relational Database Schemas.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- GoVisual - A Diagramming Software for UML Class Diagrams.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- CrocoCosmos - 3D Visualization of Large Object-oriented Programs.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- ViSta - Visualizing Statecharts.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- visone - Analysis and Visualization of Social Networks.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Polyphemus and Hermes - Exploration and Visualization of Computer Networks.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.", "", "", "Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. 
Based on a decade of research and development of this framework, we present the architecture, consisting of a suite of tools and techniques, that can be used to address a large variety of domain-specific problems. With Tulip, we aim to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing. The current framework enables the development of algorithms, visual encodings, interaction techniques, data models, and domain-specific visualizations. The software model facilitates the reuse of components and allows the developers to focus on programming their application. This development pipeline makes the framework efficient for research prototyping as well as the development of end-user applications.", "" ] }
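The abstract in this row describes a linear-time approximation of an exponential sliding time-window for filtering the strongest edges of an interaction stream. The following Python sketch shows the general idea under our own assumptions (lazy exponential decay of edge weights, a `half_life` parameter, unit weight per interaction); it is not the authors' released implementation.

```python
import math
from collections import defaultdict

def strongest_edges(interactions, half_life=3600.0, top_k=100):
    """One pass over (timestamp, u, v) interactions sorted by time;
    each edge weight decays by a factor 2 ** (-dt / half_life) between
    its updates, then gains 1 for the new interaction. Decay is applied
    lazily, so the pass is linear in the number of interactions."""
    decay = math.log(2.0) / half_life
    weight = defaultdict(float)  # edge -> lazily decayed weight
    last = {}                    # edge -> time of last update
    t_end = None
    for t, u, v in interactions:
        e = (min(u, v), max(u, v))
        if e in last:
            weight[e] *= math.exp(-decay * (t - last[e]))
        weight[e] += 1.0
        last[e] = t
        t_end = t
    # Bring every edge to the same reference time before ranking.
    for e in weight:
        weight[e] *= math.exp(-decay * (t_end - last[e]))
    return sorted(weight.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

stream = [(0, "a", "b"), (10, "a", "b"), (20, "b", "c")]
print(strongest_edges(stream, half_life=10.0, top_k=2))
# [(('b', 'c'), 1.0), (('a', 'b'), 0.75)]
```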
1308.0309
2106670160
(abstract identical to the 1308.0309 row above)
The interest in depicting the shape of online social networks @cite_45 @cite_0 and the availability of live data streams from online social media have motivated the development of tools for animated visualizations of dynamic networks @cite_25 , both in contexts where the temporal graph evolution is known in advance and in scenarios where graph updates are received in a streaming fashion @cite_54 . Several tools supporting the visualization of dynamics have emerged, including GraphAEL @cite_55 , GleamViz @cite_9 , Gephi @cite_12 , and GraphStream @cite_23 . Although static visualizations based on time-windows @cite_6 , alluvial diagrams @cite_31 , or matrices @cite_65 @cite_64 @cite_39 have been explored as ways to capture graph evolution, dynamic graph drawing remains the technique that has attracted the most interest in the research community so far. Compared to static visualizations, dynamic animations present additional challenges: user studies have shown that they can be perceived as harder to parse visually, even though they have the potential to be more informative and engaging @cite_32 .
{ "cite_N": [ "@cite_64", "@cite_55", "@cite_54", "@cite_9", "@cite_65", "@cite_6", "@cite_39", "@cite_0", "@cite_32", "@cite_45", "@cite_23", "@cite_31", "@cite_25", "@cite_12" ], "mid": [ "2133483636", "1522840240", "2513172345", "2126393394", "2071892440", "1641238295", "2546140789", "2167144079", "1831799535", "1546650522", "1613533661", "2155369095", "1596411313", "2125910575" ], "abstract": [ "We propose a new approach to visualize social networks. Most common network visualizations rely on graph drawing. While without doubt useful, graphs suffer from limitations like cluttering and important patterns may not be realized especially when networks change over time. Our approach adapts pixel-oriented visualization techniques to social networks as an addition to traditional graph visualizations. The visualization is exemplified using social networks based on corporate wikis.", "GraphAEL extracts three types of evolving graphs from the Graph Drawing literature and creates 2D and 3D animations of the evolutions. We study citation graphs, topic graphs, and collaboration graphs. We also create difference graphs which capture the nature of change between two given time periods. GraphAEL can be accessed online at http: graphael.cs.arizona.edu.", "Spectral methods are naturally suited for dynamic graph layout, because moderate changes of a graph yield moderate changes of the layout under weak assumptions. We discuss some general principles for dynamic graph layout and derive a dynamic spectral layout approach for the animation of small-world models.", "Background Computational models play an increasingly important role in the assessment and control of public health crises, as demonstrated during the 2009 H1N1 influenza pandemic. Much research has been done in recent years in the development of sophisticated data-driven models for realistic computer-based simulations of infectious disease spreading. However, only a few computational tools are presently available for assessing scenarios, predicting epidemic evolutions, and managing health emergencies that can benefit a broad audience of users including policy makers and health institutions.", "Visualization plays a crucial role in understanding dynamic social networks at many different levels (i.e., group, subgroup, and individual). Node-link-based visualization techniques are currently widely used for these tasks and have been demonstrated to be effective, but it was found that they also have limitations in representing temporal changes, particularly at the individual and subgroup levels. To overcome these limitations, this article presents a new network visualization technique, called “TimeMatrix,” based on a matrix representation. Interaction techniques, such as overlay controls, a temporal range slider, semantic zooming, and integrated network statistical measures, support analysts in studying temporal social networks. To validate the design, the article presents a user study involving three social scientists analyzing inter-organizational collaboration data. The study demonstrates how TimeMatrix may help analysts gain insights about the temporal aspects of network data that can be subseque...", "Information visualization is a powerful tool for analyzing the dynamic nature of social communities. Using Nation of Neighbors community network as a testbed, we propose five principles of implementing temporal visualizations for social networks and present two research prototypes: NodeXL and TempoVis. 
Three different states are defined in order to visualize the temporal changes of social networks. We designed the prototypes to show the benefits of the proposed ideas by letting users interactively explore temporal changes of social networks.", "Visualizations of static networks in the form of node-link diagrams have evolved rapidly, though researchers are still grappling with how best to show evolution of nodes over time in these diagrams. This paper introduces NetVisia, a social network visualization system designed to support users in exploring temporal evolution in networks by using heat maps to display node attribute changes over time. NetVisia's novel contributions to network visualizations are to (1) cluster nodes in the heat map by similar metric values instead of by topological similarity, and (2) align nodes in the heat map by events. We compare NetVisia to existing systems and describe a formative user evaluation of a NetVisia prototype with four participants that emphasized the need for tool tips and coordinated views. Despite the presence of some usability issues, in 30-40 minutes the user evaluation participants discovered new insights about the data set which had not been discovered using other systems. We discuss implemented improvements to NetVisia, and analyze a co-occurrence network of 228 business intelligence concepts and entities. This analysis confirms the utility of a clustered heat map to discover outlier nodes and time periods.", "A social network consists of people who interact in some way such as members of online communities sharing information via the WWW. To learn more about how to facilitate community building e.g. in organizations, it is important to analyze the interaction behavior of their members over time. So far, many tools have been provided that allow for the analysis of static networks and some for the temporal analysis of networks - however only on the vertex and edge level. In this paper we propose two approaches to analyze the evolution of two different types of online communities on the level of subgroups: The first method consists of statistical analyses and visualizations that allow for an interactive analysis of subgroup evolutions in communities that exhibit a rather membership structure. The second method is designed for the detection of communities in an environment with highly fluctuating members. For both methods, we discuss results of experiments with real data from an online student community.", "Graph drawing algorithms have classically addressed the layout of static graphs. However, the need to draw evolving or dynamic graphs has brought into question many of the assumptions, conventions and layout methods designed to date. For example, social scientists studying evolving social networks have created a demand for visual representations of graphs changing over time. Two common approaches to represent temporal information in graphs include animation of the network and use of static snapshots of the network at different points in time. Here, we report on two experiments, one in a laboratory environment and another using an asynchronous remote web-based platform, Mechanical Turk, to compare the efficiency of animated displays versus static displays. Four tasks are studied with each visual representation, where two characterise overview level information presentation, and two characterise micro level analytical tasks. 
For the tasks studied in these experiments and within the limits of the experimental system, the results of this study indicate that static representations are generally more effective particularly in terms of time performance, when compared to fully animated movie representations of dynamic networks.", "Recent years have witnessed the dramatic popularity of online social networking services, in which millions of members publicly articulate mutual \"friendship\" relations. Guided by ethnographic research of these online communities, we have designed and implemented a visualization system for playful end-user exploration and navigation of large scale online social networks. Our design builds upon familiar node link network layouts to contribute customized techniques for exploring connectivity in large graph structures, supporting visual search and analysis, and automatically identifying and visualizing community structures. Both public installation and controlled studies of the system provide evidence of the system's usability, capacity for facilitating discovery, and potential for fun and engaged social activity", "The notion of complex systems is common to many domains, from Biology to Economy, Computer Science, Physics, etc. Often, these systems are made of sets of entities moving in an evolving environment. One of their major characteristics is the emergence of some global properties stemmed from local interactions between the entities themselves and between the entities and the environment. The structure of these systems as sets of interacting entities leads researchers to model them as graphs. However, their understanding requires most often to consider the dynamics of their evolution. It is indeed not relevant to study some properties out of any temporal consideration. Thus, dynamic graphs seem to be a very suitable model for investigating the emergence and the conservation of some properties. GraphStream is a Java-based library whose main purpose is to help researchers and developers in their daily tasks of dynamic problem modeling and of classical graph management tasks: creation, processing, display, etc. It may also be used, and is indeed already used, for teaching purpose. GraphStream relies on an event-based engine allowing several event sources. Events may be included in the core of the application, read from a file or received from an event handler.", "Change is a fundamental ingredient of interaction patterns in biology, technology, the economy, and science itself: Interactions within and between organisms change; transportation patterns by air, land, and sea all change; the global financial flow changes; and the frontiers of scientific research change. Networks and clustering methods have become important tools to comprehend instances of these large-scale structures, but without methods to distinguish between real trends and noisy data, these approaches are not useful for studying how networks change. Only if we can assign significance to the partitioning of single networks can we distinguish meaningful structural changes from random fluctuations. Here we show that bootstrap resampling accompanied by significance clustering provides a solution to this problem. 
To connect changing structures with the changing function of networks, we highlight and summarize the significant structural changes with alluvial diagrams and realize de Solla Price's vision of mapping change in science: studying the citation pattern between about 7000 scientific journals over the past decade, we find that neuroscience has transformed from an interdisciplinary specialty to a mature and stand-alone discipline.", "Algorithms and Theory of Computation Handbook, Second Edition provides an up-to-date compendium of fundamental computer science topics and techniques. It also illustrates how the topics and techniques come together to deliver efficient solutions to important practical problems. New to the Second EditionAlong with updating and revising many of the existing chapters, this second edition contains more than 20 new chapters. This edition now covers external memory, parameterized, self-stabilizing, and pricing algorithms as well as the theories of algorithmic coding, privacy and anonymity, databases, computational games, and communication networks. It also discusses computational topology, computational number theory, natural language processing, and grid computing and explores applications in intensity-modulated radiation therapy, voting, DNA research, systems biology, and financial derivatives. This best-selling handbook continues to help computer professionals and engineers find significant information on various algorithmic topics. The expert contributors clearly define the terminology, present basic results and techniques, and offer a number of current references to the in-depth literature. They also provide a glimpse of the major research issues concerning the relevant topics.", "Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization." ] }
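The abstract also compares against rectangular sliding time-window baselines. For contrast with the exponential sketch above, here is a straightforward, unoptimized rectangular-window baseline; the window semantics (events in the half-open interval (t - window, t]) and all parameter values are our illustrative choices, not the paper's.

```python
from collections import Counter

def rectangular_snapshots(interactions, window=60.0, step=30.0):
    """Every `step` time units, count interactions per edge over the
    preceding `window` units. Events leave a snapshot abruptly once
    they fall out of the window, which is what makes frame-to-frame
    transitions jumpier than with exponential decay."""
    events = sorted(interactions)  # (timestamp, u, v)
    if not events:
        return []
    t = events[0][0] + window
    snapshots = []
    while t <= events[-1][0] + step:
        counts = Counter(
            (min(u, v), max(u, v))
            for ts, u, v in events
            if t - window < ts <= t
        )
        snapshots.append((t, counts))
        t += step
    return snapshots

stream = [(0, "a", "b"), (10, "a", "b"), (50, "b", "c")]
for t, counts in rectangular_snapshots(stream, window=40.0, step=20.0):
    print(t, dict(counts))
```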
1308.0309
2106670160
(abstract identical to the 1308.0309 rows above)
More generally, empirical research in graph visualization has several open fronts that aim to identify the impact of specific factors on the quality of the animation (e.g., speed @cite_69 , interactivity @cite_36 ); an extensive overview of this aspect has been published recently @cite_15 . Methods that preserve the stability of nodes and the consistency of the network structure by leveraging a hierarchical organization of nodes have been proposed @cite_5 @cite_14 @cite_68 @cite_47 . User studies have shown that hierarchical approaches that collapse several nodes into larger meta-nodes can improve graph readability in cases of high edge density @cite_24 . The graph layout also has a significant impact on the readability of graphs @cite_8 . Some work adapts spectral and force-directed graph layouts @cite_61 so that the positions of nodes at time @math are recomputed from the previous positions at time @math - @math while minimizing the displacement of vertices @cite_66 @cite_63 @cite_54 @cite_72 , and other work proposes new "stress-minimization" strategies to map the changes in the graph @cite_60 .
{ "cite_N": [ "@cite_61", "@cite_69", "@cite_14", "@cite_47", "@cite_8", "@cite_36", "@cite_60", "@cite_54", "@cite_24", "@cite_72", "@cite_63", "@cite_5", "@cite_15", "@cite_68", "@cite_66" ], "mid": [ "1520174226", "2099718964", "1502972208", "1489443448", "1584092895", "2167252678", "2047854635", "2513172345", "2141026840", "2100112610", "1524145331", "1533210402", "1969957132", "2111781385", "2111244106" ], "abstract": [ "Graph layout methods described in previous chapters were based on structural characteristics of the graph, or a preprocessed version of the graph. Often, such knowledge is not provided. In this chapter, we take a look at a class of methods applicable to general graphs, without prior knowledge of any structural properties. Their common denominator is that they liken the graph to a system of interacting physical objects, the underlying assumption being that relaxed (energy-minimal) states of suitably defined systems correspond to readable and informative layouts.", "Effective visualization of dynamic graphs remains an open research topic, and many state-of-the-art tools use animated node-link diagrams for this purpose. Despite its intuitiveness, the effectiveness of animation in node-link diagrams has been questioned, and several empirical studies have shown that animation is not necessarily superior to static visualizations. However, the exact mechanics of perceiving animated node-link diagrams are still unclear. In this paper, we study the impact of different dynamic graph metrics on user perception of the animation. After deriving candidate visual graph metrics, we perform an exploratory user study where participants are asked to reconstruct the event sequence in animated node-link diagrams. Based on these findings, we conduct a second user study where we investigate the most important visual metrics in depth. Our findings show that node speed and target separation are prominent visual metrics to predict the performance of event sequencing tasks. © 2012 Wiley Periodicals, Inc.", "We propose a heuristic for dynamic hierarchical graph drawing. Applications include incremental graph browsing and editing, display of dynamic data structures and networks, and browsing large graphs. The heuristic is an on-line interpretation of the static layout algorithm of Sugiyama, Togawa and Toda. It incorporates topological and geometric information with the objective of making layout animations that are incrementally stable and readable through long editing sequences. We measured the performance of a prototype implementation.", "This paper presents a technique for visualizing the differences between two graphs. The technique assumes that a unique labeling of the nodes for each graph is available, where if a pair of labels match, they correspond to the same node in both graphs. Such labeling often exists in many application areas: IP addresses in computer networks, namespaces, class names, and function names in software engineering, to name a few. As many areas of the graph may be the same in both graphs, we visualize large areas of difference through a graph hierarchy. We introduce a path-preserving coarsening technique for degree one nodes of the same classification. We also introduce a path-preserving coarsening technique based on betweenness centrality that is able to illustrate major differences between two graphs.", "Social network analysis uses techniques from graph theory to analyze the structure of relationships among social actors such as individuals or groups. 
We investigate the effect of the layout of a social network on the inferences drawn by observers about the number of social groupings evident and the centrality of various actors in the network. We conducted an experiment in which eighty subjects provided answers about three drawings. The subjects were not told that the drawings were chosen from five different layouts of the same graph. We found that the layout has a significant effect on their inferences and present some initial results about the way certain Euclidean features will affect perceptions of structural features of the network. There is no “best” layout for a social network; when layouts are designed one must take into account the most important features of the network to be presented as well as the network itself.", "Several previous systems allow users to interactively explore a large input graph through cuts of a superimposed hierarchy. This hierarchy is often created using clustering algorithms or topological features present in the graph. However, many graphs have domain-specific attributes associated with the nodes and edges, which could be used to create many possible hierarchies providing unique views of the input graph. GrouseFlocks is a system for the exploration of this graph hierarchy space. By allowing users to see several different possible hierarchies on the same graph, the system helps users investigate graph hierarchy space instead of a single fixed hierarchy. GrouseFlocks provides a simple set of operations so that users can create and modify their graph hierarchies based on selections. These selections can be made manually or based on patterns in the attribute data provided with the graph. It provides feedback to the user within seconds, allowing interactive exploration of this space.", "Abstract As a consequence of the rising interest in longitudinal social networks and their analysis, there is also an increasing demand for tools to visualize them. We argue that similar adaptations of state-of-the-art graph-drawing methods can be used to visualize both, longitudinal networks and predictions of stochastic actor-oriented models (SAOMs), the most prominent approach for analyzing such networks. The proposed methods are illustrated on a longitudinal network of acquaintanceship among university freshmen.", "Spectral methods are naturally suited for dynamic graph layout, because moderate changes of a graph yield moderate changes of the layout under weak assumptions. We discuss some general principles for dynamic graph layout and derive a dynamic spectral layout approach for the animation of small-world models.", "Graph visualization systems often exploit opaque metanodes to reduce visual clutter and improve the readability of large graphs. This filtering can be done in a path-preserving way based on attribute values associated with the nodes of the graph. Despite extensive use of these representations, as far as we know, no formal experimentation exists to evaluate if they improve the readability of graphs. In this paper, we present the results of a user study that formally evaluates how such representations affect the readability of graphs. We also explore the effect of graph size and connectivity in terms of this primary research question. Overall, for our tasks, we did not find a significant difference when this clustering is used. However, if the graph is highly connected, these clusterings can improve performance. 
Also, if the graph is large enough and can be simplified into a few metanodes, benefits in performance on global tasks are realized. Under these same conditions, however, performance of local attribute tasks may be reduced.", "This paper presents an algorithm for drawing a sequence of graphs online. The algorithm strives to maintain the global structure of the graph and, thus, the user's mental map while allowing arbitrary modifications between consecutive layouts. The algorithm works online and uses various execution culling methods in order to reduce the layout time and handle large dynamic graphs. Techniques for representing graphs on the GPU allow a speedup by a factor of up to 17 compared to the CPU implementation. The scalability of the algorithm across GPU generations is demonstrated. Applications of the algorithm to the visualization of discussion threads in Internet sites and to the visualization of social networks are provided.", "In this paper we present a generic algorithm for drawing sequences of graphs. This algorithm works for different layout algorithms and related metrics and adjustment strategies. It differs from previous work on dynamic graph drawing in that it considers all graphs in the sequence (offline) instead of just the previous ones (online) when computing the layout for each graph of the sequence. We introduce several general adjustment strategies and give examples of these strategies in the context of force-directed graph layout. Finally some results from our first prototype implementation are discussed.", "Graph drawings are a basic component of user interfaces that display relationships between objects. Generating incrementally stable layouts is important for many applications. This paper describes DynaDAG, a new heuristic for incremental layout of directed acyclic graphs drawn as hierarchies, and its application in the DynaGraph system.", "The usage of visualizations to aid the analysis of time oriented data plays an important role in various fields of applications. The need to visualize such data was decisive for the development of different visualization techniques over the last years. One of the frequently applied techniques is animation in order to illustrate the movements in such a way to make changes in the data transparent. However, evaluation studies of such animated interfaces for time-oriented data with potential users are still difficult to find. In this paper, we present our observations based on a systematic literature review with the motivation to support researchers and designers to identify future directions for their research. The literature review is split in two parts: (1) research on animation from the field of psychology, and (2) evaluation studies with the focus on animation of time-oriented data.", "We describe TopoLayout, a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain. Topological features are detected recursively inside the graph, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. Each feature is drawn with an algorithm tuned for its topology. As would be expected from a feature-based approach, the runtime and visual quality of TopoLayout depends on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multilevel algorithms on a variety of data sets with a range of connectivities and sizes. 
TopoLayout frequently improves the results in terms of speed and visual quality on these data sets", "Many graph drawing (GD) scenarios are dynamic inasmuch as they involve a repeated redrawing of the graph after frequently occurring changes to the graph structure and or some layout properties." ] }
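The related-work passage in this row mentions dynamic layouts that recompute node positions at each time step from the previous ones while minimizing vertex displacement. The sketch below shows one naive way to realize that idea: a force-directed step with an added anchoring term that pulls nodes toward their previous positions. It is not any of the cited algorithms, and all constants are arbitrary.

```python
import random

def update_layout(nodes, edges, prev_pos, anchor=0.5, iters=100):
    """Layout for one snapshot: spring attraction along edges, pairwise
    repulsion, plus a pull toward each node's previous position.
    `anchor` trades layout freshness against frame-to-frame stability;
    `nodes` is a list of ids, `prev_pos` maps ids to (x, y) tuples."""
    pos = {v: prev_pos.get(v, (random.random(), random.random()))
           for v in nodes}
    for _ in range(iters):
        force = {v: [0.0, 0.0] for v in nodes}
        for u, v in edges:  # attraction along edges
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            force[u][0] += 0.1 * dx; force[u][1] += 0.1 * dy
            force[v][0] -= 0.1 * dx; force[v][1] -= 0.1 * dy
        for i, u in enumerate(nodes):  # O(n^2) repulsion, fine for a sketch
            for v in nodes[i + 1:]:
                dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                d2 = dx * dx + dy * dy + 1e-9
                force[u][0] -= 0.01 * dx / d2; force[u][1] -= 0.01 * dy / d2
                force[v][0] += 0.01 * dx / d2; force[v][1] += 0.01 * dy / d2
        for v in nodes:  # anchoring: penalize displacement from time t-1
            if v in prev_pos:
                force[v][0] += anchor * (prev_pos[v][0] - pos[v][0])
                force[v][1] += anchor * (prev_pos[v][1] - pos[v][1])
        pos = {v: (pos[v][0] + 0.5 * force[v][0],
                   pos[v][1] + 0.5 * force[v][1]) for v in nodes}
    return pos

# Node "c" is new in this snapshot; "a" and "b" stay near their old spots.
old = {"a": (0.0, 0.0), "b": (1.0, 0.0)}
new = update_layout(["a", "b", "c"], [("a", "b"), ("b", "c")], old)
```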
1308.0419
2949269728
In this paper, we address the following research problem: How can we generate a meaningful split grammar that explains a given facade layout? To evaluate if a grammar is meaningful, we propose a cost function based on the description length and minimize this cost using an approximate dynamic programming framework. Our evaluation indicates that our framework extracts meaningful split grammars that are competitive with those of expert users, while some users and all competing automatic solutions are less successful.
Our work builds on grammar-based procedural modeling. We mainly use splitting rules, as they are commonly employed for facade modeling @cite_25 @cite_8 . For the generation of mass models, turtle commands like translate, rotate, and scale @cite_33 @cite_8 are often more useful. One goal of our work is the user-friendly generation of grammar rules; interactive editing frameworks and visual programming interfaces @cite_13 are other useful tools that contribute to the same goal. Finally, other approaches try to generate facade layouts without the use of grammars, e.g., by working directly on textures, by using optimization, or by sampling a probabilistic model based on factor graphs.
{ "cite_N": [ "@cite_13", "@cite_25", "@cite_33", "@cite_8" ], "mid": [ "2118323000", "", "2000690667", "2108389405" ], "abstract": [ "A proposed rule-based editing metaphor intuitively lets artists create buildings without changing their workflow. It's based on the realization that the rule base represents a directed acyclic graph and on a shift in the development paradigm from product-based to rule-based representations. Users can visually add or edit rules, connect them to control the workflow, and easily create commands that expand the artist's toolbox (for example, Boolean operations or local controlling operators). This approach opens new possibilities, from model verification to model editing through graph rewriting.", "", "1 Graphical modeling using L-systems.- 1.1 Rewriting systems.- 1.2 DOL-systems.- 1.3 Turtle interpretation of strings.- 1.4 Synthesis of DOL-systems.- 1.4.1 Edge rewriting.- 1.4.2 Node rewriting.- 1.4.3 Relationship between edge and node rewriting.- 1.5 Modeling in three dimensions.- 1.6 Branching structures.- 1.6.1 Axial trees.- 1.6.2 Tree OL-systems.- 1.6.3 Bracketed OL-systems.- 1.7 Stochastic L-systems.- 1.8 Context-sensitive L-systems.- 1.9 Growth functions.- 1.10 Parametric L-systems.- 1.10.1 Parametric OL-systems.- 1.10.2 Parametric 2L-systems.- 1.10.3 Turtle interpretation of parametric words.- 2 Modeling of trees.- 3 Developmental models of herbaceous plants.- 3.1 Levels of model specification.- 3.1.1 Partial L-systems.- 3.1.2 Control mechanisms in plants.- 3.1.3 Complete models.- 3.2 Branching patterns.- 3.3 Models of inflorescences.- 3.3.1 Monopodial inflorescences.- 3.3.2 Sympodial inflorescences.- 3.3.3 Polypodial inflorescences.- 3.3.4 Modified racemes.- 4 Phyllotaxis.- 4.1 The planar model.- 4.2 The cylindrical model.- 5 Models of plant organs.- 5.1 Predefined surfaces.- 5.2 Developmental surface models.- 5.3 Models of compound leaves.- 6 Animation of plant development.- 6.1 Timed DOL-systems.- 6.2 Selection of growth functions.- 6.2.1 Development of nonbranching filaments.- 6.2.2 Development of branching structures.- 7 Modeling of cellular layers.- 7.1 Map L-systems.- 7.2 Graphical interpretation of maps.- 7.3 Microsorium linguaeforme.- 7.4 Dryopteris thelypteris.- 7.5 Modeling spherical cell layers.- 7.6 Modeling 3D cellular structures.- 8 Fractal properties of plants.- 8.1 Symmetry and self-similarity.- 8.2 Plant models and iterated function systems.- Epilogue.- Appendix A Software environment for plant modeling.- A.1 A virtual laboratory in botany.- A.2 List of laboratory programs.- Appendix B About the figures.- Turtle interpretation of symbols.", "This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawing. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from elevation drawing. First, different from the previous symbol-based floor plan recognition, based on the novel concept of repetitive pattern trees, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). 
Third, shape grammar rules can be derived to construct 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawing, with a unique application to Chinese architectures." ] }
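The abstract in this row scores candidate split grammars by a description-length cost. The toy sketch below fixes one concrete (and deliberately simplistic) cost: a small structural cost per split rule and a fixed label cost per terminal. The class, cost terms, and example facade are illustrative assumptions, not the paper's actual encoding, and the paper's approximate dynamic programming search is omitted.

```python
from dataclasses import dataclass, field

@dataclass
class Split:
    """A split rule: divide a facade region along one direction into
    parts, each either a terminal label or a nested Split."""
    direction: str                              # "h" or "v"
    parts: list = field(default_factory=list)   # str terminals or Splits

def description_length(node, label_bits=4.0, op_bits=1.0):
    """Toy description length of a derivation: op_bits per split rule,
    label_bits per terminal. Illustrative, not the paper's encoding."""
    if isinstance(node, Split):
        return op_bits + sum(description_length(p, label_bits, op_bits)
                             for p in node.parts)
    return label_bits  # terminal symbol (e.g., "window", "wall")

facade = Split("v", [                           # facade split into floors
    Split("h", ["wall", "door", "wall"]),       # ground floor
    Split("h", ["window", "wall", "window"]),   # upper floor
])
print(description_length(facade))  # 3 * op_bits + 6 * label_bits = 27.0
```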
1308.0419
2949269728
(abstract identical to the 1308.0419 row above)
Several papers propose initial solutions for deriving a shape grammar from facade images. The pioneering work by Bekins, Aliaga, and Rosen @cite_3 @cite_20 proposes a grammar that splits a facade into floors and then encodes each floor as a one-dimensional sequence of elements. The advantage of this approach is that it reduces the problem to a sequence of one-dimensional problems; the disadvantage is that it only applies to facades with this structure and does not extend to general two-dimensional layouts. This approach is therefore closer to finding the parameters of a pre-determined shape grammar. Several other authors follow this general approach @cite_24 @cite_2 . An important contribution is the inverse procedural modeling of vector art @cite_29 , as it is the first formal treatment of the inverse procedural modeling problem in computer graphics.
{ "cite_N": [ "@cite_29", "@cite_3", "@cite_24", "@cite_2", "@cite_20" ], "mid": [ "2155710590", "2146795709", "", "2073581653", "2099983485" ], "abstract": [ "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-systemrules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols ofan L-systemalphabet. Theterminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.", "We present build-by-number, a technique for quickly designing architectural structures that can be rendered photorealistically at interactive rates. We combine image-based capturing and rendering with procedural modeling techniques to allow the creation of novel structures in the style of real-world structures. Starting with a simple model recovered from a sparse image set, the model is divided into feature regions, such as doorways, windows, and brick. These feature regions essentially comprise a mapping from model space to image space, and can be recombined to texture a novel model. Procedural rules for the growth and reorganization of the model are automatically derived to allow for very fast editing and design. Further, the redundancies marked by the feature labeling can be used to perform automatic occlusion replacement and color equalization in the finished scene, which is rendered using view-dependent texture mapping on standard graphics hardware. Results using four captured scenes show that a great variety of novel structures can be created very quickly once a captured scene is available, and rendered with a degree of realism comparable to the original scene.", "", "Abstract Frequently, terrestrial LiDAR and image data are used to extract high resolution building geometry like windows, doors and protrusions for three-dimensional (3D) facade reconstruction. However, such a purely data driven bottom-up modelling of facade structures is only feasible if the available observations meet considerable requirements on data quality. Errors in measurement, varying point densities, reduced accuracies, as well as incomplete coverage affect the achievable correctness and reliability of the reconstruction result. 
While dependence on data quality is a general disadvantage with data driven bottom-up approaches, model based top-down reconstructions are much more robust. Algorithms introduce knowledge about the appearance and arrangement of objects. Thus, they cope with data uncertainty and allow for a procedural modelling of building structures in a predefined architectural style, which is inherent in grammar or model descriptions. We aim at a quality sensitive facade reconstruction which is on the one hand robust against erroneous and incomplete data, but on the other hand not subject to prespecified rules or models. For this purpose, we combine bottom-up and top-down strategies by integrating automatically inferred rules into a data driven reconstruction process. Facade models reconstructed during a bottom-up method serve as a knowledge base for further processing. Dominant or repetitive features and regularities as well as their hierarchical relationship are detected from the modelled facade elements and automatically translated into rules. These rules together with the 3D representations of the modelled facade elements constitute a formal grammar. It holds all the information which is necessary to reconstruct facades in the style of the given building. The paper demonstrates that the proposed algorithm is very flexible towards different data quality and incomplete sensor data. The inferred grammar is used for the verification of the facade model produced during the data driven reconstruction process and the generation of synthetic facades for which only partial or no sensor data is available. Moreover, knowledge propagation is not restricted to facades of one single building. Based on a small set of formal grammars derived from just a few observed buildings, facade reconstruction is also possible for whole districts featuring uniform architectural styles.", "Interactive visualization of architecture provides a way to quickly visualize existing or novel buildings and structures. Such applications require both fast rendering and an effortless input regimen for creating and changing architecture using high-level editing operations that automatically fill in the necessary details. Procedural modeling and synthesis is a powerful paradigm that yields high data amplification and can be coupled with fast-rendering techniques to quickly generate plausible details of a scene without much or any user interaction. Previously, forward generating procedural methods have been proposed where a procedure is explicitly created to generate particular content. In this paper, we present our work in inverse procedural modeling of buildings and describe how to use an extracted repertoire of building grammars to facilitate the visualization and quick modification of architectural structures and buildings. We demonstrate an interactive application where the user draws simple building blocks and, using our system, can automatically complete the building \"in the style of other buildings using view-dependent texture mapping or nonphotorealistic rendering techniques. Our system supports an arbitrary number of building grammars created from user subdivided building models and captured photographs. Using only edit, copy, and paste metaphors, the entire building styles can be altered and transferred from one building to another in a few operations, enhancing the ability to modify an existing architectural structure or to visualize a novel building in the style of the others." ] }
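The floor-based approach discussed above reduces facade encoding to a sequence of one-dimensional problems. The snippet below illustrates that reduction in a much-simplified form, compressing each floor's left-to-right element sequence into (symbol, count) repetition rules; it is a pedagogical stand-in, not the cited systems' grammar induction.

```python
def encode_row(elements):
    """Run-length encode one floor's left-to-right element labels into
    (symbol, count) repetition rules -- the 1D subproblem."""
    rules, i = [], 0
    while i < len(elements):
        j = i
        while j < len(elements) and elements[j] == elements[i]:
            j += 1
        rules.append((elements[i], j - i))
        i = j
    return rules

def encode_facade(rows):
    """Split the facade into floors, then solve each floor as an
    independent 1D encoding problem."""
    return [encode_row(row) for row in rows]

facade = [
    ["door", "window", "window", "window"],    # ground floor
    ["window", "window", "window", "window"],  # upper floor
]
print(encode_facade(facade))
# [[('door', 1), ('window', 3)], [('window', 4)]]
```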
1308.0419
2949269728
In this paper, we address the following research problem: How can we generate a meaningful split grammar that explains a given facade layout? To evaluate if a grammar is meaningful, we propose a cost function based on the description length and minimize this cost using an approximate dynamic programming framework. Our evaluation indicates that our framework extracts meaningful split grammars that are competitive with those of expert users, while some users and all competing automatic solutions are less successful.
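As a toy illustration of the kind of cost-minimizing split selection described above, the sketch below segments a one-dimensional row of facade labels by dynamic programming, charging each segment a fixed rule cost plus its deviation from a constant-label run. This is a simplified stand-in for the paper's description-length objective; the cost constants and the 1D setting are illustrative assumptions, not the paper's actual formulation.

```python
def segment_row(labels, rule_cost=2.0):
    """Toy 1D analogue of cost-based split selection: choose segment
    boundaries minimizing a description-length-style cost, charging a
    fixed cost per segment (one grammar rule) plus the number of cells
    deviating from the segment's majority label."""
    n = len(labels)
    best = [0.0] + [float("inf")] * n      # best[i]: min cost of labels[:i]
    back = [0] * (n + 1)                   # back-pointers for reconstruction
    for i in range(1, n + 1):
        for j in range(i):
            seg = labels[j:i]
            mismatch = len(seg) - max(seg.count(s) for s in set(seg))
            cost = best[j] + rule_cost + mismatch
            if cost < best[i]:
                best[i], back[i] = cost, j
    cuts, i = [], n
    while i > 0:                           # recover the chosen split positions
        cuts.append(i)
        i = back[i]
    return best[n], sorted(cuts)

# segment_row(list("WWWWDWWWW"), rule_cost=0.4) -> (1.2, [4, 5, 9]):
# a cheap enough rule cost makes the DP split out the door-like run.
```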
There are several important research questions related to inverse procedural modeling that are complementary to our work. When dealing with noisy or unsegmented input, lower-level shape understanding, most notably symmetry detection, is the first step toward inverse procedural modeling @cite_24 @cite_32 . After a set of shape grammars has been learned from typical input facades (e.g., using the method described in this paper), they can be used as priors to guide further reconstruction efforts. This exciting and important line of work has been picked up by several research groups, e.g., @cite_5 @cite_11 @cite_34 @cite_28 @cite_22 @cite_30 @cite_15 @cite_26 @cite_9 .
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_22", "@cite_28", "@cite_9", "@cite_32", "@cite_24", "@cite_5", "@cite_15", "@cite_34", "@cite_11" ], "mid": [ "2051403286", "2007261304", "2093664813", "2140965591", "2154484971", "2117878059", "", "2970073282", "2154069107", "2146503126", "2155296376" ], "abstract": [ "In this paper we tackle the problem of 3D modeling for urban environment using a modular, flexible and powerful approach driven from procedural generation. To this end, typologies of architectures are modeled through shape grammars that consist of a set of derivation rules and a set of shape dictionary elements. Appearance (from statistical point of view with respect to the individual pixel's properties) of the dictionary elements is then learned using a set of training images. Image classifiers are trained towards recovering image support with respect to the semantics. Then, given a new image and the corresponding footprint, the modeling problem is formulated as a search of the space of shapes, that can be generated on-the-fly by deriving the grammar on the input axiom. Defining an image-based score function for the produced instances using the trained classifiers, the best rules are selected, making sure that we keep exploring the space by allowing some rules to be randomly selected. New rules are then generated by resampling around the selected rules. At the finest level, these rules define the 3D model of the building. Promising results on complex and varying architectural styles demonstrate the potential of the presented method.", "We address shape grammar parsing for facade segmentation using Reinforcement Learning (RL). Shape parsing entails simultaneously optimizing the geometry and the topology (e.g. number of floors) of the facade, so as to optimize the fit of the predicted shape with the responses of pixel-level 'terminal detectors'. We formulate this problem in terms of a Hierarchical Markov Decision Process, by employing a recursive binary split grammar. This allows us to use RL to efficiently find the optimal parse of a given facade in terms of our shape grammar. Building on the RL paradigm, we exploit state aggregation to speedup computation, and introduce image-driven exploration in RL to accelerate convergence. We achieve state-of-the-art results on facade parsing, with a significant speed-up compared to existing methods, and substantial robustness to initial conditions. We demonstrate that the method can also be applied to interactive segmentation, and to a broad variety of architectural styles.", "We present a method for detecting and parsing buildings from unorganized 3D point clouds into a compact, hierarchical representation that is useful for high-level tasks. The input is a set of range measurements that cover large-scale urban environment. The desired output is a set of parse trees, such that each tree represents a semantic decomposition of a building – the nodes are roof surfaces as well as volumetric parts inferred from the observable surfaces. We model the above problem using a simple and generic grammar and use an efficient dependency parsing algorithm to generate the desired semantic description. We show how to learn the parameters of this simple grammar in order to produce correct parses of complex structures. We are able to apply our model on large point clouds and parse an entire city.", "We present a passive computer vision method that exploits existing mapping and navigation databases in order to automatically create 3D building models. 
Our method defines a grammar for representing changes in building geometry that approximately follow the Manhattan-world assumption which states there is a predominance of three mutually orthogonal directions in the scene. By using multiple calibrated aerial images, we extend previous Manhattan-world methods to robustly produce a single, coherent, complete geometric model of a building with partial textures. Our method uses an optimization to discover a 3D building geometry that produces the same set of facade orientation changes observed in the captured images. We have applied our method to several real-world buildings and have analyzed our approach using synthetic buildings.", "In this paper, we use shape grammars (SGs) for facade parsing, which amounts to segmenting 2D building facades into balconies, walls, windows, and doors in an architecturally meaningful manner. The main thrust of our work is the introduction of reinforcement learning (RL) techniques to deal with the computational complexity of the problem. RL provides us with techniques such as Q-learning and state aggregation which we exploit to efficiently solve facade parsing. We initially phrase the 1D parsing problem in terms of a Markov Decision Process, paving the way for the application of RL-based tools. We then develop novel techniques for the 2D shape parsing problem that take into account the specificities of the facade parsing problem. Specifically, we use state aggregation to enforce the symmetry of facade floors and demonstrate how to use RL to exploit bottom-up, image-based guidance during optimization. We provide systematic results on the Paris building dataset and obtain state-of-the-art results in a fraction of the time required by previous methods. We validate our method under diverse imaging conditions and make our software and results available online.", "In this paper, we address the problem of inverse procedural modeling: Given a piece of exemplar 3D geometry, we would like to find a set of rules that describe objects that are similar to the exemplar. We consider local similarity, i.e., each local neighborhood of the newly created object must match some local neighborhood of the exemplar. We show that we can find explicit shape modification rules that guarantee strict local similarity by looking at the structure of the partial symmetries of the object. By cutting the object into pieces along curves within symmetric areas, we can build shape operations that maintain local similarity by construction. We systematically collect such editing operations and analyze their dependency to build a shape grammar. We discuss how to extract general rewriting systems, context free hierarchical rules, and grid-based rules. All of this information is derived directly from the model, without user interaction. The extracted rules are then used to implement tools for semi-automatic shape modeling by example, which are demonstrated on a number of different example data sets. Overall, our paper provides a concise theoretical and practical framework for inverse procedural modeling of 3D objects.", "", "", "High-quality urban reconstruction requires more than multi-view reconstruction and local optimization. The structure of facades depends on the general layout, which has to be optimized globally. Shape grammars are an established method to express hierarchical spatial relationships, and are therefore suited as representing constraints for semantic facade interpretation. 
Usually inference uses numerical approximations, or hard-coded grammar schemes. Existing methods inspired by classical grammar parsing are not applicable to real-world images due to their prohibitively high complexity. This work provides feasible generic facade reconstruction by combining low-level classifiers with mid-level object detectors to infer an irregular lattice. The irregular lattice preserves the logical structure of the facade while reducing the search space to a manageable size. We introduce a novel method for handling symmetry and repetition within the generic grammar. We show competitive results on two datasets, namely the Paris 2010 and the Graz 50. The former includes only Hausmannian, while the latter includes Classicism, Biedermeier, Historicism, Art Nouveau and post-modern architectural styles.", "3d city models are used in a huge number of applications today. They are applicable in the area of urban planning and city development, tourism and marketing, as well as navigation. All these applications need a 3d city model of a large area. Moreover, the desire for up-to-date models and a high degree of detail is rising. Because of this, modelling buildings as simple block models is no longer sufficient. There is a need for facade reconstruction methods that model windows and other facade elements in more detail. To meet the huge demand for models, automatic methods are needed. In this paper we show the importance of using structure information in the reconstruction process. With an example we demonstrate the problems of reconstruction methods that work without structure information. Then we present how to use a grammar to integrate structure into the process. Thereafter we present a manual modelling tool which is based on our facade grammar, and finally an automatic facade reconstruction method based on reversible jump Markov Chain Monte Carlo (rjMCMC).", "We propose a novel grammar-driven approach for the reconstruction of buildings and landmarks. Our approach complements Structure-from-Motion and image-based analysis with an 'inverse' procedural modeling strategy. So far, procedural modeling has mostly been used for the creation of virtual buildings, while the inverse approaches typically focus on the reconstruction of single facades. In our work, we reconstruct complete buildings as procedural models using template shape grammars. In the reconstruction process, we let the grammar interpreter automatically decide which step to take next. The process can be seen as instantiating the template by determining the correct grammar parameters. As an example, we have chosen the reconstruction of Greek Doric temples. This process significantly differs from single facade segmentation due to the immediate need for 3D reconstruction." ] }
1308.0256
1719104839
Most CAD or other spatial data models, in particular boundary representation models, are called "topological" and represent spatial data by a structured collection of "topological primitives" like edges, vertices, faces, and volumes. These then represent spatial objects in geo-information (GIS) or CAD systems or in building information models (BIM). Volume objects may either be represented by their 2D boundary or by a dedicated 3D element, the "solid". The latter may share common boundary elements with other solids, just as 2D polygon topologies in GIS share common boundary edges. Despite the frequent reference to "topology" in publications on spatial modelling, the formal link between mathematical topology and these "topological" models is hardly described in the literature. Such a link, for example, cannot be established by the often-cited nine-intersections model, which is too elementary for that purpose. Mathematically, the link between spatial data and the modelled "real world" entities is established by a chain of "continuous functions" - a very important topological notion, yet often overlooked by spatial data modellers. This article investigates how spatial data can actually be considered topological spaces, how continuous functions between them are defined, and how CAD systems can make use of them. Having found examples of applications of continuity in CAD data models, it turns out that the notion of continuity has much practical relevance for CAD systems.
An example of topological spaces beyond @math in the context of relational databases is the theory of acyclic database schemes, which converts a database schema into a topological space and tests it for so-called @math -cycles. This topological property severely affects the efficiency of some database query operations @cite_0 .
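The degrees of acyclicity discussed in @cite_0 are defined on the hypergraph view of a database schema. As a concrete illustration, the sketch below tests the weakest such degree, α-acyclicity, via the standard GYO ear-removal reduction; note that this is a simplification chosen for illustration, since the @math -cycles mentioned above correspond to a stricter degree of acyclicity than the one tested here.

```python
from collections import Counter

def is_alpha_acyclic(schema):
    """GYO reduction: a hypergraph is alpha-acyclic iff repeatedly
    (1) deleting vertices that occur in exactly one hyperedge and
    (2) deleting empty hyperedges or hyperedges contained in another
    eventually eliminates every hyperedge."""
    edges = [set(e) for e in schema]
    changed = True
    while changed:
        changed = False
        occur = Counter(v for e in edges for v in e)
        for e in edges:                       # rule (1): remove "ear" vertices
            lone = {v for v in e if occur[v] == 1}
            if lone:
                e.difference_update(lone)
                changed = True
        kept = []
        for i, e in enumerate(edges):         # rule (2): remove covered edges
            covered = not e or any(
                e <= f and (e < f or j < i)   # keep one copy of duplicates
                for j, f in enumerate(edges) if j != i)
            if covered:
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

# is_alpha_acyclic([{"A", "B"}, {"B", "C"}])             -> True
# is_alpha_acyclic([{"A", "B"}, {"B", "C"}, {"C", "A"}]) -> False (a cycle)
```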
{ "cite_N": [ "@cite_0" ], "mid": [ "1608639859" ], "abstract": [ "Database schemes (which, intuitively, are collections of table skeletons) can be viewed as hypergraphs. (A hypergraph is a generalization of an ordinary undirected graph, such that an edge need not contain exactly two nodes, but can instead contain an arbitrary nonzero number of nodes.) Unlike the situation for ordinary undirected graphs, there are several natural, nonequivalent notions of acyclicity for hypergraphs (and hence for database schemes). A large number of desirable properties of database schemes fall into a small number of equivalence classes, each completely characterized by the degree of acyclicity of the scheme. This paper is intended to be an informal introduction, in which the focus is mainly on the originally studied (and least restrictive) degree of acyclicity." ] }
1307.8371
2953340467
We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise. We demonstrate the effectiveness of our approach by designing algorithms with improved noise tolerance guarantees for learning linear separators. We consider the malicious noise model of Valiant and the adversarial label noise model of Kearns, Schapire, and Sellie. For malicious noise, where the adversary can corrupt an @math fraction of both the label part and the feature part, we provide a polynomial-time algorithm for learning linear separators in @math under the uniform distribution with nearly information-theoretically optimal noise tolerance of @math . We also get similar improvements for the adversarial label noise model, and similar results for more general classes of distributions, including isotropic log-concave distributions. In addition, our algorithms achieve a label complexity whose dependence on the error parameter @math is exponentially better than that of any passive algorithm. This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of adversarial label noise, as well as the first analysis of active learning under the malicious noise model.
@cite_27 considered noise-tolerant learning of halfspaces under a more idealized noise model, known as the random noise model, in which the label of each example is flipped with a certain probability, independently of the feature vector. Some other, less closely related, work on efficient noise-tolerant learning of halfspaces includes @cite_0 @cite_27 @cite_37 @cite_9 @cite_11 @cite_6 @cite_19 @cite_42 .
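To make the distinction concrete, the snippet below generates halfspace-labeled data under random classification noise: each label is flipped independently with probability η, regardless of the feature vector; under malicious noise, by contrast, an adversary could replace an η fraction of entire example-label pairs arbitrarily. This is a minimal sketch of the noise model itself, not of any of the cited algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def rcn_sample(w, n, eta):
    """Halfspace data under random classification noise: each label is
    flipped independently with probability eta, regardless of the point."""
    X = rng.normal(size=(n, len(w)))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere
    y = np.sign(X @ w)
    y[y == 0] = 1.0                                 # break ties consistently
    flips = rng.random(n) < eta                     # noise independent of X
    y[flips] *= -1
    return X, y
```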
{ "cite_N": [ "@cite_37", "@cite_9", "@cite_42", "@cite_6", "@cite_0", "@cite_19", "@cite_27", "@cite_11" ], "mid": [ "2026103826", "2065923584", "2159376838", "1591252468", "", "2098278064", "2143362693", "1548189207" ], "abstract": [ "We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning of parities under the uniform distribution with random classification noise, also called the noisy parity problem is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities. We show that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of [5], this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning of DNF expressions reduces to learning noisy parities of just logarithmic number of variables. We show that learning of k-juntas reduces to learning noisy parities of k variables. These reductions work even in the presence of random classification noise in the original DNF or junta. We then consider the problem of learning halfspaces over Q ^n with adversarial noise or finding a halfspace that maximizes the agreement rate with a given set of examples. We prove an essentially optimal hardness factor of 2- , improving the factor of 85 84 - due to Bshouty and Burroughs [8]. Finally, we show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem.", "Learning an unknown halfspace (also called a perceptron) from labeled examples is one of the classic problems in machine learning. In the noise-free case, when a halfspace consistent with all the training examples exists, the problem can be solved in polynomial time using linear programming. However, under the promise that a halfspace consistent with a fraction @math of the examples exists (for some small constant @math ), it was not known how to efficiently find a halfspace that is correct on even 51 of the examples. Nor was a hardness result that ruled out getting agreement on more than 99.9 of the examples known. In this work, we close this gap in our understanding and prove that even a tiny amount of worst-case noise makes the problem of learning halfspaces intractable in a strong sense. Specifically, for arbitrary @math , we prove that given a set of examples-label pairs from the hypercube, a fraction @math of which can be explained by a halfspace, it is NP-hard to find a halfspace that correctly labels a fraction @math of the examples. The hardness result is tight since it is trivial to get agreement on @math the examples. In learning theory parlance, we prove that weak proper agnostic learning of halfspaces is hard. This settles a question that was raised by , in their work on learning halfspaces in the presence of random classification noise [Algorithmica, 22 (1998), pp. 35-52], and raised by authors of some more recent works as well. Along the way, we also obtain a strong hardness result for another basic computational problem: solving a linear system over the rationals.", "Given α, e, we study the time complexity required to improperly learn a halfs-pace with misclassification error rate of at most (1 + α) L*γ + e, where L*γ is the optimal γ-margin error rate. 
For α = 1/γ, polynomial time and sample complexity are achievable using the hinge loss. For α = 0, Shalev-Shwartz et al. [2011] showed that poly(1/γ) time is impossible, while learning is possible in time exp(O(1/γ)). An immediate question, which this paper tackles, is what is achievable for α ∈ (0, 1/γ). We derive positive results interpolating between the polynomial time for α = 1/γ and the exponential time for α = 0. In particular, we show that there are cases in which α = o(1/γ) but the problem is still solvable in polynomial time. Our results naturally extend to the adversarial online learning model and to the PAC learning with malicious noise model.", "Given some arbitrary distribution D over {0,1}^n and an arbitrary target function c∗, the problem of agnostic learning of disjunctions is to achieve an error rate comparable to the error OPTdisj of the best disjunction with respect to (D, c∗). Achieving error O(n · OPTdisj) + ε is trivial, and Winnow [13] achieves error O(r · OPTdisj) + ε, where r is the number of relevant variables in the best disjunction. In recent work, Peleg [14] shows how to achieve a bound of O(√n · OPTdisj) + ε in polynomial time. In this paper we improve on Peleg's bound, giving a polynomial-time algorithm achieving a bound of O(n^(1/3+α) · OPTdisj) + ε for any constant α > 0. The heart of the algorithm is a method for weak-learning when OPTdisj = O(1/n), which can then be fed into existing agnostic boosting procedures to achieve the desired guarantee.", "", "We describe a simple algorithm that runs in time poly(n, 1/γ, 1/ε) and learns an unknown n-dimensional γ-margin halfspace to accuracy 1 − ε in the presence of malicious noise, when the noise rate is allowed to be as high as Θ(εγ√log(1/γ)). Previous efficient algorithms could only learn to accuracy ε in the presence of malicious noise of rate at most Θ(εγ). Our algorithm does not work by optimizing a convex loss function. We show that no algorithm for learning γ-margin halfspaces that minimizes a convex proxy for misclassification error can tolerate malicious noise at a rate greater than Θ(εγ); this may partially explain why previous algorithms could not achieve the higher noise tolerance of our new algorithm.", "The authors consider the problem of learning a linear threshold function (a halfspace in n dimensions, also called a \"perceptron\"). Methods for solving this problem generally fall into two categories. In the absence of noise, this problem can be formulated as a linear program and solved in polynomial time with the ellipsoid algorithm (or interior point methods). On the other hand, simple greedy algorithms such as the perceptron algorithm seem to work well in practice and can be made noise tolerant; but, their running time depends on a separation parameter (which quantifies the amount of \"wiggle room\" available) and can be exponential in the description length of the input. They show how simple greedy methods can be used to find weak hypotheses (hypotheses that classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter. This results in a polynomial-time algorithm for learning linear threshold functions in the PAC model in the presence of random classification noise. The algorithm is based on a new method for removing outliers in data. 
Specifically, for any set S of points in R^n, each given to b bits of precision, they show that one can remove only a small fraction of S so that in the remaining set T, for every vector v, max_{x∈T} (v·x)^2 ≤ poly(n,b) |T|^{-1} Σ_{x∈T} (v·x)^2. After removing these outliers, they are able to show that a modified version of the perceptron learning algorithm works in polynomial time, even in the presence of random classification noise.", "We describe a new boosting algorithm which generates only smooth distributions which do not assign too much weight to any single example. We show that this new boosting algorithm can be used to construct efficient PAC learning algorithms which tolerate relatively high rates of malicious noise. In particular, we use the new smooth boosting algorithm to construct malicious noise tolerant versions of the PAC-model p-norm linear threshold learning algorithms described by Servedio (2002). The bounds on sample complexity and malicious noise tolerance of these new PAC algorithms closely correspond to known bounds for the online p-norm algorithms of Grove, Littlestone and Schuurmans (1997) and Gentile and Littlestone (1999). As special cases of our new algorithms we obtain linear threshold learning algorithms which match the sample complexity and malicious noise tolerance of the online Perceptron and Winnow algorithms. Our analysis reveals an interesting connection between boosting and noise tolerance in the PAC setting." ] }
1307.8371
2953340467
We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise. We demonstrate the effectiveness of our approach by designing algorithms with improved noise tolerance guarantees for learning linear separators. We consider the malicious noise model of Valiant and the adversarial label noise model of Kearns, Schapire, and Sellie. For malicious noise, where the adversary can corrupt an @math fraction of both the label part and the feature part, we provide a polynomial-time algorithm for learning linear separators in @math under the uniform distribution with nearly information-theoretically optimal noise tolerance of @math . We also get similar improvements for the adversarial label noise model, and similar results for more general classes of distributions, including isotropic log-concave distributions. In addition, our algorithms achieve a label complexity whose dependence on the error parameter @math is exponentially better than that of any passive algorithm. This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of adversarial label noise, as well as the first analysis of active learning under the malicious noise model.
As we have mentioned, most prior theoretical work on active learning focuses either on sample complexity bounds (without regard for efficiency) or on providing polynomial-time algorithms in the noiseless case or under simple noise models (random classification noise @cite_5 or linear noise @cite_20 @cite_48 ).
{ "cite_N": [ "@cite_5", "@cite_20", "@cite_48" ], "mid": [ "2963129126", "2132162087", "2147004922" ], "abstract": [ "", "We introduce efficient margin-based algorithms for selective sampling and filtering in binary classification tasks. Experiments on real-world textual data reveal that our algorithms perform significantly better than popular and similarly efficient competitors. Using the so-called Mammen-Tsybakov low noise condition to parametrize the instance distribution, and assuming linear label noise, we show bounds on the convergence rate to the Bayes risk of a weaker adaptive variant of our selective sampler. Our analysis reveals that, excluding logarithmic factors, the average risk of this adaptive sampler converges to the Bayes risk at rate N ?(1+?)(2+?) 2(3+?) where N denotes the number of queried labels, and ?>0 is the exponent in the low noise condition. For all @math this convergence rate is asymptotically faster than the rate N ?(1+?) (2+?) achieved by the fully supervised version of the base selective sampler, which queries all labels. Moreover, for ??? (hard margin condition) the gap between the semi- and fully-supervised rates becomes exponential.", "We present a new online learning algorithm in the selective sampling framework, where labels must be actively queried before they are revealed. We prove bounds on the regret of our algorithm and on the number of labels it queries when faced with an adaptive adversarial strategy of generating the instances. Our bounds both generalize and strictly improve over previous bounds in similar settings. Additionally, our selective sampling algorithm can be converted into an efficient statistical active learning algorithm. We extend our algorithm and analysis to the multiple-teacher setting, where the algorithm can choose which subset of teachers to query for each label. Finally, we demonstrate the effectiveness of our techniques on a real-world Internet search problem." ] }
1307.8371
2953340467
We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise. We demonstrate the effectiveness of our approach by designing algorithms with improved noise tolerance guarantees for learning linear separators. We consider the malicious noise model of Valiant and the adversarial label noise model of Kearns, Schapire, and Sellie. For malicious noise, where the adversary can corrupt an @math fraction of both the label part and the feature part, we provide a polynomial-time algorithm for learning linear separators in @math under the uniform distribution with nearly information-theoretically optimal noise tolerance of @math . We also get similar improvements for the adversarial label noise model, and similar results for more general classes of distributions, including isotropic log-concave distributions. In addition, our algorithms achieve a label complexity whose dependence on the error parameter @math is exponentially better than that of any passive algorithm. This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of adversarial label noise, as well as the first analysis of active learning under the malicious noise model.
@cite_20 @cite_48 online learning algorithms in the selective sampling framework are presented, where labels must be actively queried before they are revealed. Under the assumption that the label conditional distribution is a linear function determined by a fixed target vector, they provide bounds on the regret of the algorithm and on the number of labels it queries when faced with an adaptive adversarial strategy of generating the instances. As pointed out in @cite_48 , these results can also be converted to a distributional PAC setting where instances @math are drawn i.i.d. In this setting they obtain exponential improvement in label complexity over passive learning. These interesting results and techniques are not directly comparable to ours. Our framework is not restricted to halfspaces. Another important difference is that (as pointed out in @cite_7 ) the exponential improvement they give is not possible in the noiseless version of their setting. In other words, the addition of linear noise defined by the target makes the problem easier for active sampling. By contrast RCN can only make the classification task harder than in the realizable case.
{ "cite_N": [ "@cite_48", "@cite_7", "@cite_20" ], "mid": [ "2147004922", "2056138823", "2132162087" ], "abstract": [ "We present a new online learning algorithm in the selective sampling framework, where labels must be actively queried before they are revealed. We prove bounds on the regret of our algorithm and on the number of labels it queries when faced with an adaptive adversarial strategy of generating the instances. Our bounds both generalize and strictly improve over previous bounds in similar settings. Additionally, our selective sampling algorithm can be converted into an efficient statistical active learning algorithm. We extend our algorithm and analysis to the multiple-teacher setting, where the algorithm can choose which subset of teachers to query for each label. Finally, we demonstrate the effectiveness of our techniques on a real-world Internet search problem.", "Active learning is a protocol for supervised machine learning, in which a learning algorithm sequentially requests the labels of selected data points from a large pool of unlabeled data. This contrasts with passive learning, where the labeled data are taken at random. The objective in active learning is to produce a highly-accurate classifier, ideally using fewer labels than the number of random labeled data sufficient for passive learning to achieve the same. This article describes recent advances in our understanding of the theoretical benefits of active learning, and implications for the design of effective active learning algorithms. Much of the article focuses on a particular technique, namely disagreement-based active learning, which by now has amassed a mature and coherent literature. It also briefly surveys several alternative approaches from the literature. The emphasis is on theorems regarding the performance of a few general algorithms, including rigorous proofs where appropriate. However, the presentation is intended to be pedagogical, focusing on results that illustrate fundamental ideas, rather than obtaining the strongest or most general known theorems. The intended audience includes researchers and advanced graduate students in machine learning and statistics, interested in gaining a deeper understanding of the recent and ongoing developments in the theory of active learning.", "We introduce efficient margin-based algorithms for selective sampling and filtering in binary classification tasks. Experiments on real-world textual data reveal that our algorithms perform significantly better than popular and similarly efficient competitors. Using the so-called Mammen-Tsybakov low noise condition to parametrize the instance distribution, and assuming linear label noise, we show bounds on the convergence rate to the Bayes risk of a weaker adaptive variant of our selective sampler. Our analysis reveals that, excluding logarithmic factors, the average risk of this adaptive sampler converges to the Bayes risk at rate N ?(1+?)(2+?) 2(3+?) where N denotes the number of queried labels, and ?>0 is the exponent in the low noise condition. For all @math this convergence rate is asymptotically faster than the rate N ?(1+?) (2+?) achieved by the fully supervised version of the base selective sampler, which queries all labels. Moreover, for ??? (hard margin condition) the gap between the semi- and fully-supervised rates becomes exponential." ] }
1307.8049
2953360824
Research on distributed machine learning algorithms has focused primarily on one of two extremes - algorithms that obey strict concurrency constraints or algorithms that obey few or no such constraints. We consider an intermediate alternative in which algorithms optimistically assume that conflicts are unlikely and, if conflicts do arise, a conflict-resolution protocol is invoked. We view this "optimistic concurrency control" paradigm as particularly appropriate for large-scale machine learning algorithms, especially in the unsupervised setting. We demonstrate our approach in three problem areas: clustering, feature learning and online facility location. We evaluate our methods via large-scale experiments in a cluster computing environment.
A great deal of work addresses scalable clustering algorithms @cite_22 @cite_11 @cite_18 . Many algorithms with provable approximation factors are streaming algorithms and inherently use hierarchies, or related divide-and-conquer approaches. The approximation factors in such algorithms multiply across levels, and demand a careful tradeoff between communication and approximation quality that is obviated in our framework. Other approaches use coresets @cite_13 @cite_17 . Many methods @cite_5 @cite_1 @cite_8 first collect a set of centers and then re-cluster them, and therefore need to communicate all intermediate centers. Our approach avoids that, since a center causes no rejections in the epochs after it is established: the rejection rate does not grow with @math .
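The following is a minimal serial simulation of the optimistic concurrency control pattern for facility-location-style clustering, under assumed parameters: each epoch of "parallel" transactions runs against a stale snapshot of the centers, and a serial validation phase rejects proposals that conflict with centers committed in the meantime. It also illustrates the point above: once a center is established, it causes no further rejections.

```python
import numpy as np

def covered(p, centers, radius):
    return any(np.linalg.norm(p - c) <= radius for c in centers)

def occ_centers(points, radius, batch):
    """Serial simulation of the OCC pattern for facility-location-style
    clustering: optimistic work against a stale snapshot, then serial
    conflict resolution."""
    centers = []
    for start in range(0, len(points), batch):
        snapshot = list(centers)                    # stale view for this epoch
        proposals = [p for p in points[start:start + batch]
                     if not covered(p, snapshot, radius)]   # optimistic phase
        for p in proposals:                         # serial validation phase
            if not covered(p, centers, radius):
                centers.append(p)                   # commit: new center
            # else: conflict detected -> proposal rejected; p is already
            # covered by a center committed during this epoch
    return centers
```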
{ "cite_N": [ "@cite_18", "@cite_11", "@cite_22", "@cite_8", "@cite_1", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "2950858762", "2123427850", "1870625491", "2118190603", "", "2091684877", "", "2146200992" ], "abstract": [ "Clustering problems have numerous applications and are becoming more challenging as the size of the data increases. In this paper, we consider designing clustering algorithms that can be used in MapReduce, the most popular programming environment for processing large datasets. We focus on the practical and popular clustering problems, @math -center and @math -median. We develop fast clustering algorithms with constant factor approximation guarantees. From a theoretical perspective, we give the first analysis that shows several clustering algorithms are in @math , a theoretical MapReduce class introduced by KarloffSV10 . Our algorithms use sampling to decrease the data size and they run a time consuming clustering algorithm such as local search or Lloyd's algorithm on the resulting data set. Our algorithms have sufficient flexibility to be used in practice since they run in a constant number of MapReduce rounds. We complement these results by performing experiments using our algorithms. We compare the empirical performance of our algorithms to several sequential and parallel algorithms for the @math -median problem. The experiments show that our algorithms' solutions are similar to or better than the other algorithms' solutions. Furthermore, on data sets that are sufficiently large, our algorithms are faster than the other parallel algorithms that we tested.", "Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several millionusers and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.", "To cluster increasingly massive data sets that are common today in data and text mining, we propose a parallel implementation of the k-means clustering algorithm based on the message passing model. The proposed algorithm exploits the inherent data-parallelism in the kmeans algorithm. We analytically show that the speedup and the scaleup of our algorithm approach the optimal as the number of data points increases. We implemented our algorithm on an IBM POWERparallel SP2 with a maximum of 16 nodes. On typical test data sets, we observe nearly linear relative speedups, for example, 15.62 on 16 nodes, and essentially linear scaleup in the size of the data set and in the number of clusters desired. For a 2 gigabyte test data set, our implementation drives the 16 node SP2 at more than 1.8 gigaflops.", "Clustering is a popular problem with many applications. 
We consider the k-means problem in the situation where the data is too large to be stored in main memory and must be accessed sequentially, such as from a disk, and where we must use as little memory as possible. Our algorithm is based on recent theoretical results, with significant improvements to make it practical. Our approach greatly simplifies a recently developed algorithm, both in design and in analysis, and eliminates large constant factors in the approximation guarantee, the memory requirements, and the running time. We then incorporate approximate nearest neighbor search to compute k-means in o(nk) (where n is the number of data points; note that computing the cost, given a solution, takes Θ(nk) time). We show that our algorithm compares favorably to existing algorithms - both theoretically and experimentally, thus providing state-of-the-art performance in both theory and practice.", "", "We study clustering problems in the streaming model, where the goal is to cluster a set of points by making one pass (or a few passes) over the data using a small amount of storage space. Our main result is a randomized algorithm for the k-Median problem which produces a constant factor approximation in one pass using storage space O(k poly log n). This is a significant improvement of the previous best algorithm which yielded a 2^O(1/ε) approximation using O(n^ε) space. Next we give a streaming algorithm for the k-Median problem with an arbitrary distance function. We also study algorithms for clustering problems with outliers in the streaming model. Here, we give bicriterion guarantees, producing constant factor approximations by increasing the allowed fraction of outliers slightly.", "", "How can we train a statistical mixture model on a massive data set? In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations. A coreset is a weighted subset of the data, which guarantees that models fitting the coreset will also provide a good fit for the original data set. We show that, perhaps surprisingly, Gaussian mixtures admit coresets of size independent of the size of the data set. More precisely, we prove that a weighted set of O(dk^3/ε^2) data points suffices for computing a (1 + ε)-approximation for the optimal model on the original n data points. Moreover, such coresets can be efficiently constructed in a map-reduce style computation, as well as in a streaming setting. Our results rely on a novel reduction of statistical estimation to problems in computational geometry, as well as new complexity results about mixtures of Gaussians. We empirically evaluate our algorithms on several real data sets, including a density estimation problem in the context of earthquake detection using accelerometers in mobile phones." ] }
1307.8084
2258542455
For widespread deployment in domains characterized by partial observability, non-deterministic actions and unforeseen changes, robots need to adapt sensing, processing and interaction with humans to the tasks at hand. While robots typically cannot process all sensor inputs or operate without substantial domain knowledge, it is a challenge to provide accurate domain knowledge and humans may not have the time and expertise to provide elaborate and accurate feedback. The architecture described in this paper combines declarative programming and probabilistic reasoning to address these challenges, enabling robots to: (a) represent and reason with incomplete domain knowledge, resolving ambiguities and revising existing knowledge using sensor inputs and minimal human feedback; and (b) probabilistically model the uncertainty in sensor input processing and navigation. Specifically, Answer Set Programming (ASP), a declarative programming paradigm, is combined with hierarchical partially observable Markov decision processes (POMDPs), using domain knowledge to revise probabilistic beliefs, and using positive and negative observations for early termination of tasks that can no longer be pursued. All algorithms are evaluated in simulation and on mobile robots locating target objects in indoor domains.
Algorithms based on probabilistic graphical models such as POMDPs have been used to model the uncertainty in real-world sensing and navigation, enabling the use of robots in offices and hospitals @cite_0 @cite_4 @cite_11 . Since the rapid increase in the state space dimensions of such formulations of complex problems makes real-time operation difficult, researchers have developed algorithms that decompose complex problems into a hierarchy of simpler problems that are computationally tractable @cite_4 @cite_10 . However, it is still challenging to use POMDPs and other graphical models in large, complex state-action spaces. Furthermore, these probabilistic algorithms do not readily support the representation of, and reasoning with, commonsense domain knowledge.
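At the core of these POMDP formulations is the standard Bayes-filter belief update, sketched below for a discrete state space; the array layout is an illustrative convention. Domain knowledge (e.g., inferred through declarative reasoning) can enter such a scheme by reshaping the belief b before or after the update.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Discrete POMDP Bayes-filter update:
        b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s).
    T[a] is the |S|x|S| transition matrix for action a and O[a] the
    |S|x|O| observation likelihood matrix (an assumed layout)."""
    pred = b @ T[a]              # predict: push the belief through the action
    post = pred * O[a][:, o]     # correct: weight by observation likelihood
    return post / post.sum()     # normalize (assumes o has nonzero probability)
```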
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_4", "@cite_11" ], "mid": [ "1579750597", "1873201708", "1967769980", "2221452810" ], "abstract": [ "From an automated planning perspective the problem of practical mobile robot control in realistic environments poses many important and contrary challenges. On the one hand, the planning process must be lightweight, robust, and timely. Over the lifetime of the robot it must always respond quickly with new plans that accommodate exogenous events, changing objectives, and the underlying unpredictability of the environment. On the other hand, in order to promote efficient behaviours the planning process must perform computationally expensive reasoning about contingencies and possible revisions of subjective beliefs according to quantitatively modelled uncertainty in acting and sensing. Towards addressing these challenges, we develop a continual planning approach that switches between using a fast satisficing \"classical\" planner, to decide on the overall strategy, and decision-theoretic planning to solve small abstract subproblems where deeper consideration of the sensing model is both practical, and can significantly impact overall performance. We evaluate our approach in large problems from a realistic robot exploration domain.", "A key challenge to widespread deployment of mobile robots in the real-world is the ability to robustly and autonomously sense the environment and collaborate with teammates. Real-world domains are characterized by partial observability, non-deterministic action outcomes and unforeseen changes, making autonomous sensing and collaboration a formidable challenge. This paper poses vision-based sensing, information processing and collaboration as an instance of probabilistic planning using partially observable Markov decision processes. Reliable, efficient and autonomous operation is achieved using a hierarchical decomposition that includes: (a) convolutional policies to exploit the local symmetry of high-level visual search; (b) adaptive observation functions, policy re-weighting, automatic belief propagation and online updates of the domain map for autonomous adaptation to domain changes; and (c) a probabilistic strategy for a team of robots to robustly share beliefs. All algorithms are evaluated in simulation and on physical robots localizing target objects in dynamic indoor domains.", "This paper describes a mobile robotic assistant, developed to assist elderly individuals with mild cognitive and physical impairments, as well as support nurses in their daily activities. We present three software modules relevant to ensure successful human–robot interaction: an automated reminder system; a people tracking and detection system; and finally a high-level robot controller that performs planning under uncertainty by incorporating knowledge from low-level modules, and selecting appropriate courses of actions. During the course of experiments conducted in an assisted living facility, the robot successfully demonstrated that it could autonomously provide reminders and guidance for elderly residents.", "When mobile robots perform tasks in environments with humans, it seems appropriate for the robots to rely on such humans for help instead of dedicated human oracles or supervisors. However, these humans are not always available nor always accurate. In this work, we consider human help to a robot as concretely providing observations about the robot's state to reduce state uncertainty as it executes its policy autonomously. 
We model the probability of receiving an observation from a human in terms of their availability and accuracy by introducing Human Observation Providers POMDPs (HOP-POMDPs). We contribute an algorithm to learn human availability and accuracy online while the robot is executing its current task policy. We demonstrate that our algorithm is effective in approximating the true availability and accuracy of humans without depending on oracles to learn, thus increasing the tractability of deploying a robot that can occasionally ask for help." ] }
1307.7751
2950701039
In power systems, load curve data is one of the most important datasets that are collected and retained by utilities. The quality of load curve data, however, is hard to guarantee since the data is subject to communication losses, meter malfunctions, and many other impacts. In this paper, a new approach to analyzing load curve data is presented. The method adopts a new view, termed virtual portrait, on the load curve data by analyzing the periodic patterns in the data and re-organizing the data for ease of analysis. Furthermore, we introduce algorithms to build the virtual portrait load curve data, and demonstrate its application to load curve data cleansing. Compared to existing regression-based methods, our method is much faster and more accurate for both small-scale and large-scale real-world datasets.
In addition to the above methods, data mining techniques have also been developed to detect outliers, such as @math -nearest neighbor @cite_11 @cite_24 , @math -means @cite_16 @cite_1 , @math -medoids @cite_33 , and density-based clustering @cite_9 . In general, these methods group observations with similar features and identify observations that do not belong strongly to any cluster or lie far from other clusters. Nevertheless, most data mining techniques are designed for structured relational data, which may not align well with the needs of outlier detection in load curve data. In addition, these methods are normally time-consuming because they need a training process on a large dataset.
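For instance, the @math -nearest-neighbor criterion of @cite_11 ranks each point by the distance to its k-th nearest neighbor and reports the top-ranked points as outliers. A brute-force sketch follows; its quadratic cost in the number of points also hints at why such methods are slow on large load datasets.

```python
import numpy as np

def knn_outliers(X, k, n_out):
    """Rank points by the distance to their k-th nearest neighbor and
    report the top n_out as outliers (brute force, O(N^2) distances)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)             # a point is not its own neighbor
    kth = np.sort(D, axis=1)[:, k - 1]      # distance to k-th nearest neighbor
    return np.argsort(-kth)[:n_out]         # indices of the strongest outliers
```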
{ "cite_N": [ "@cite_33", "@cite_9", "@cite_1", "@cite_24", "@cite_16", "@cite_11" ], "mid": [ "1582484699", "1969642980", "", "", "2084052726", "2129281431" ], "abstract": [ "1. Introduction to wavelets 2. Review of Fourier theory and filters 3. Orthonormal transforms of time series 4. The discrete wavelet transform 5. The maximal overlap discrete wavelet transform 6. The discrete wavelet packet transform 7. Random variables and stochastic processes 8. The wavelet variance 9. Analysis and synthesis of long memory processes 10. Wavelet-based signal estimation 11. Wavelet analysis of finite energy signals Appendix. Answers to embedded exercises References Author index Subject index.", "In many different application areas, e.g. sensor databases, location based services or face recognition systems, distances between odjects have to be computed based on vague and uncertain data. Commonly, the distances between these uncertain object descriptions are expressed by one numerical distance value. Based on such single-valued distance functions standard data mining algorithms can work without any changes. In this paper, we propose to express the similarity between two fuzzy objects by distance probability functions. These fuzzy distance functions assign a probability value to each possible distance value. By integrating these fuzzy distance functions directly into data mining algorithms, the full information provided by these functions is exploited. In order to demonstrate the benefits of this general approach, we enhance the density-based clustering algorithm DBSCAN so that it can work directly on these fuzzy distance functions. In a detailed experimental evaluation based on artificial and real-world data sets, we show the characteristics and benefits of our new approach.", "", "", "It has been almost thirty years since Shannon introduced the sampling theorem to communications theory. In this review paper we will attempt to present the various contributions made for the sampling theorems with the necessary mathematical details to make it self-contained. We will begin by a clear statement of Shannon's sampling theorem followed by its applied interpretation for time-invariant systems. Then we will review its origin as Whittaker's interpolation series. The extensions will include sampling for functions of more than one variable, random processes, nonuniform sampling, nonband-limited functions, implicit sampling, generalized functions (distributions), sampling with the function and its derivatives as suggested by Shannon in his original paper, and sampling for general integral transforms. Also the conditions on the functions to be sampled will be summarized. The error analysis of the various sampling expansions, including specific error bounds for the truncation, aliasing, jitter and parts of various other errors will be discussed and summarized. This paper will be concluded by searching the different recent applications of the sampling theorems in other fields, besides communications theory. These include optics, crystallography, time-varying systems, boundary value problems, spline approximation, special functions, and the Fourier and other discrete transforms.", "In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers. 
In addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers. This algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it is determined that they cannot contain outliers. This results in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database highlight and reveal several expected and unexpected aspects of the database. The results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality." ] }
1307.7751
2950701039
In power systems, load curve data is one of the most important datasets that are collected and retained by utilities. The quality of load curve data, however, is hard to guarantee since the data is subject to communication losses, meter malfunctions, and many other impacts. In this paper, a new approach to analyzing load curve data is presented. The method adopts a new view, termed virtual portrait, on the load curve data by analyzing the periodic patterns in the data and re-organizing the data for ease of analysis. Furthermore, we introduce algorithms to build the virtual portrait load curve data, and demonstrate its application to load curve data cleansing. Compared to existing regression-based methods, our method is much faster and more accurate for both small-scale and large-scale real-world datasets.
Some of the above methods, especially the regression-based methods, have also been used for load forecasting. Nevertheless, load forecasting and load data cleansing are different applications with different purposes. For load forecasting, as mentioned in @cite_14 , all historical data are trusted and used to forecast the load at a future point in time. For load cleansing, historical records are used to detect corrupted data at a historical point in time, and appropriate values may be needed to replace the corrupted data. Therefore, load cleansing and load forecasting belong to different phases of load analysis: load cleansing comes before load forecasting and provides accurate load information for the latter.
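A rough sketch of how periodic structure can drive cleansing, in the spirit of the portrait view described above: reshape the series into a days-by-slots matrix, flag readings far from the robust per-slot statistics, and impute them. The reshaping convention, the MAD-based flagging rule, and the median imputation are our illustrative choices, not the paper's actual algorithm.

```python
import numpy as np

def cleanse_daily(load, slots_per_day, z=3.0):
    """Flag and impute corrupted readings using the daily periodicity of
    load data (illustrative MAD rule, not the paper's method)."""
    days = len(load) // slots_per_day
    M = np.array(load[:days * slots_per_day], dtype=float)
    M = M.reshape(days, slots_per_day)
    med = np.median(M, axis=0)                        # typical value per slot
    mad = np.median(np.abs(M - med), axis=0) + 1e-9   # robust spread per slot
    bad = np.abs(M - med) > z * 1.4826 * mad          # flag suspicious readings
    M[bad] = np.broadcast_to(med, M.shape)[bad]       # impute with slot median
    return M.ravel(), bad
```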
{ "cite_N": [ "@cite_14" ], "mid": [ "1993863450" ], "abstract": [ "Load curve data refers to the electric energy consumption recorded by meters at certain time intervals at delivery points or end user points, and contains vital information for day-to-day operations, system analysis, system visualization, system reliability performance, energy saving and adequacy in system planning. Unfortunately, it is unavoidable that load curves contain corrupted data and missing data due to various random failure factors in meters and transfer processes. This paper presents the B-Spline smoothing and Kernel smoothing based techniques to automatically cleanse corrupted and missing data. In implementation, a man-machine dialogue procedure is proposed to enhance the performance. The experiment results on the real British Columbia Transmission Corporation (BCTC) load curve data demonstrated the effectiveness of the presented solution." ] }
1307.7838
2951806500
We study bidding and pricing competition between two spiteful mobile network operators (MNOs) while taking their existing spectrum holdings into account. Given that asymmetric-valued spectrum blocks are auctioned off to them via a first-price sealed-bid auction, we investigate the interactions between the two spiteful MNOs and users as a three-stage dynamic game and characterize the dynamic game's equilibria. We show an asymmetric pricing structure and different market shares for the two spiteful MNOs. Perhaps counter-intuitively, our results show that the MNO who acquires the less-valued spectrum block always lowers his service price despite providing double-speed LTE service to users. We also show that the MNO who acquires the high-valued spectrum block, despite charging a higher price, still achieves more market share than the other MNO. We further show that the competition between the two MNOs leads to some loss of their revenues. By investigating a cross-over point at which the MNOs' profits are switched, we provide a benchmark for practical auction designs.
In wireless communications, the competition among MNOs has been addressed by many researchers @cite_13 -- @cite_16 . Yu and Kim @cite_13 studied price dynamics among MNOs. They also suggested a simple regulation that guarantees a Pareto-optimal equilibrium point to avoid instability and inefficiency. Niyato and Hossain @cite_6 proposed a pricing model among MNOs providing different services to users. However, these works did not consider the spectrum allocation issue. More closely related to our paper are some recent works @cite_3 -- @cite_16 . The paper @cite_3 studied bandwidth and price competition (i.e., Bertrand competition) among MNOs. Taking into account MNOs' heterogeneity in leasing costs and users' heterogeneity in transmission power and channel conditions, Duan et al. presented a comprehensive analytical study of MNOs' spectrum leasing and pricing strategies in @cite_1 . In @cite_5 , a new allocation scheme is suggested that jointly considers MNOs' revenues and social welfare. Feng et al. @cite_16 suggested a truthful double auction scheme for heterogeneous spectrum allocation. None of the prior results considered MNOs' existing spectrum holdings, even though the value of a spectrum block can vary depending on those holdings.
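To illustrate how spite changes first-price bidding, the sketch below computes a spiteful bidder's best response on a discrete bid grid under complete information. The spite-weighted utility form and the tie-breaking rule are common modeling assumptions for spiteful bidders, not the exact formulation of the paper above.

```python
def best_response(v_me, v_opp, b_opp, alpha, grid):
    """Best response of a spiteful bidder in a complete-information
    first-price sealed-bid auction. Utility: (1 - alpha) * own surplus
    when winning, minus alpha * opponent surplus when losing; ties are
    assumed to lose (an illustrative convention)."""
    best_b, best_u = None, float("-inf")
    for b in grid:
        if b > b_opp:                        # win and pay own bid
            u = (1 - alpha) * (v_me - b)
        else:                                # opponent wins, pays b_opp
            u = -alpha * (v_opp - b_opp)
        if u > best_u:
            best_b, best_u = b, u
    return best_b

# Iterating best_response for both bidders on a shared grid until the
# bid pair repeats gives a (discretized) equilibrium of the bidding stage.
```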
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_6", "@cite_5", "@cite_16", "@cite_13" ], "mid": [ "2018061752", "1969502830", "2164988430", "2041138943", "2100999505", "" ], "abstract": [ "This paper presents a comprehensive analytical study of two competitive secondary operators' investment (i.e., spectrum leasing) and pricing strategies, taking into account operators' heterogeneity in leasing costs and users' heterogeneity in transmission power and channel conditions. We model the interactions between operators and users as a three-stage dynamic game, where operators simultaneously make spectrum leasing decisions in Stage I, and pricing decisions in Stage II, and then users make purchase decisions in Stage III. Using backward induction, we are able to completely characterize the dynamic game's equilibria. We show that both operators' investment and pricing equilibrium decisions process interesting threshold properties. For example, when the two operators' leasing costs are close, both operators will lease positive spectrum. Otherwise, one operator will choose not to lease and the other operator becomes the monopolist. For pricing, a positive pure strategy equilibrium exists only when the total spectrum investment of both operators is less than a threshold. Moreover, two operators always choose the same equilibrium price despite their heterogeneity in leasing costs. Each user fairly achieves the same service quality in terms of signal-to-noise ratio (SNR) at the equilibrium, and the obtained predictable payoff is linear in its transmission power and channel gain. We also compare the duopoly equilibrium with the coordinated case where two operators cooperate to maximize their total profit. We show that the maximum loss of total profit due to operators' competition is no larger than 25 percent. The users, however, always benefit from operators' competition in terms of their payoffs. We show that most of these insights are robust in the general SNR regime.", "Dynamic spectrum access can significantly improve the spectrum utilization. For wireless service providers, the emergence of dynamic spectrum access brings new opportunities and challenges. The flexible spectrum acquisition gives a particular provider the chance to easily adapt its system capacity to fit end users' demand. However, the competition among several providers for both spectrum and end users complicates the situation. In this paper, we propose a general three-layer spectrum market model for the future dynamic spectrum access system, in which the interaction among spectrum holder, wireless service providers and end users are considered. We study a duopoly situation, where two wireless service providers participate in bandwidth competition in spectrum purchasing and price competition to attract end users, with the aim of maximizing their own profit. We believe we are the first one to explicitly study the relation of these two competitions in dynamic spectrum market. We formulate the wireless service providers' competition as a non-cooperative two-stage game. We first analyze the static game when full information is available for providers. Under general assumptions about the price and demand functions, a unique pure Nash equilibrium is identified as the outcome of the game, which shows the stability of the market. We further evaluate the market efficiency of the equilibrium in a symmetric case, and show that the gap with the social optimal is bounded within a small constant ratio. 
When the market information is limited, we provide myopically optimal adjustment algorithms for the providers. With such strategies, short term price updating converges to the Nash equilibrium of the given subgame, while long term bandwidth updating converges to a point close to the Nash equilibrium of the full game.", "To provide seamless mobility with high-speed wireless connectivity, future generation wireless networks must support heterogeneous wireless access. Pricing schemes adopted by different service providers is crucial and will impact the decisions of users in selecting a network. In this article, we provide a comprehensive survey of the issues related to pricing in heterogeneous wireless networks and possible approaches to the solution of the pricing problem. First, we review the related work on pricing for homogeneous wireless networks in which a single wireless technology is available to the users. Then, we outline the major issues in designing resource allocation and pricing in heterogeneous wireless access networks. To this end, we propose two oligopolistic models for price competition among service providers in a heterogeneous wireless environment consisting of WiMax and WiFi access networks. A non-cooperative game is formulated to obtain the price for the service providers. Two different equilibria, namely, the Nash and the Stackelberg equilibria are considered as the solutions of the simultaneous-play and leader-follower price competitions, respectively.", "To accommodate users' ever-increasing traffic in wireless broadband services, the Federal Communications Commission (FCC) in the U.S. is considering allocating additional spectrum to the wireless market. There are two major directions: licensed (e.g. 3G) and unlicensed services (e.g. Wi-Fi). On the one hand, 3G service can realize a high spectrum efficiency and provide ubiquitous connection. On the other hand, the Wi-Fi service (often with limited coverage) can provide users with high-speed local connections, but is subject to uncontrollable interferences. Regarding spectrum allocation, prior studies only focused on revenue maximization. However, one of FCC's missions is to better improve all wireless users' utilities. This motivates us to design a spectrum allocation scheme that jointly considers social welfare and revenue. In this paper, we formulate the interactions among the FCC, typical 3G and Wi-Fi operators, and the endusers as a three-stage dynamic game and derive the equilibrium of the entire game. Compared to the benchmark case where the FCC only maximizes its revenue, the consideration of social welfare will encourage the FCC to allocate more spectrum to the service which lacks spectrum to better serve its users. Such consideration for the social welfare, to our delight, brings limited revenue loss for the FCC.", "Auction is widely applied in wireless communication for spectrum allocation. Most of prior works have assumed that all spectrums are identical. In reality, however, spectrums provided by different owners have distinctive characteristics in both spacial and frequency domains. Spectrum availability also varies in different geo-locations. Furthermore, frequency diversity may cause non-identical conflict relationships among spectrum buyers since different frequencies have distinct communication ranges. Under such a scenario, existing spectrum auction schemes cannot provide truthfulness or efficiency. 
In this paper, we propose a Truthful double Auction mechanism for HEterogeneous Spectrum, called TAHES, which allows buyers to explicitly express their personalized preferences for heterogeneous spectrums and also addresses the problem of interference graph variation. We prove that TAHES has nice economic properties including truthfulness, individual rationality and budget balance. Results from extensive simulation studies demonstrate the truthfulness, effectiveness and efficiency of TAHES.", "" ] }
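Several of the cited works analyze Bertrand-style price competition between operators. The following toy sketch, with a linear differentiated-demand model and grid-search best responses (all coefficients are assumptions, tied to no cited model), illustrates how such a pricing game settles into an equilibrium price pair.

    import numpy as np

    def demand(p_own, p_rival, a=10.0, b=2.0, c=1.0):
        """Linear differentiated demand: own price hurts, rival's price helps."""
        return max(0.0, a - b * p_own + c * p_rival)

    def best_response(p_rival, grid):
        """Profit-maximizing price against a fixed rival price (grid search)."""
        return grid[int(np.argmax([p * demand(p, p_rival) for p in grid]))]

    grid = np.linspace(0.0, 10.0, 1001)
    p1 = p2 = 5.0
    for _ in range(100):                      # iterate simultaneous best responses
        p1_new, p2_new = best_response(p2, grid), best_response(p1, grid)
        if (p1_new, p2_new) == (p1, p2):      # reached a fixed point on the grid
            break
        p1, p2 = p1_new, p2_new
    print(f"approximate Bertrand equilibrium: p1={p1:.2f}, p2={p2:.2f}")

For this demand system the dynamics contract toward the analytic equilibrium a/(2b - c), so the grid iteration stabilizes after a few rounds.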
1307.7466
1607106391
We investigate different approaches to integrating object recognition and planning in a tabletop manipulation domain with the set of objects used in the 2012 RoboCup@Work competition. Results of our preliminary experiments show that, with some approaches, close integration of perception and planning improves the quality of plans, as well as the computation times of feasible plans.
Various planning techniques have been used for efficient visual processing management in earlier studies @cite_10 @cite_5 @cite_6 @cite_9 . A survey of such works, in the context of the recent introduction of planning for perception in cognitive robotics, is given in @cite_8 . The current report distinguishes itself in that it is an empirical investigation of embedding perceptual processing into a task planning problem, rather than an application of planning to perceptual processing, and it also investigates how this integration might improve the efficiency of planning. In that sense, a more closely related work reports a Prolog-based decision-making system that utilizes external computations for generating and evaluating perceptual hypotheses, such as hypotheses about missing objects on a table @cite_1 ; it should be emphasized, however, that the current report is an empirical investigation of different ways of embedding such external computations.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_5", "@cite_10" ], "mid": [ "2135624041", "2014382194", "2063173509", "", "2165592437", "2118143795" ], "abstract": [ "Flexible, general-purpose robots need to autonomously tailor their sensing and information processing to the task at hand. We pose this challenge as the task of planning under uncertainty. In our domain, the goal is to plan a sequence of visual operators to apply on regions of interest (ROIs) in images of a scene, so that a human and a robot can jointly manipulate and converse about objects on a tabletop. We pose visual processing management as an instance of probabilistic sequential decision making, and specifically as a Partially Observable Markov Decision Process (POMDP). The POMDP formulation uses models that quantitatively capture the unreliability of the operators and enable a robot to reason precisely about the trade-offs between plan reliability and plan execution time. Since planning in practical-sized POMDPs is intractable, we partially ameliorate this intractability for visual processing by defining a novel hierarchical POMDP based on the cognitive requirements of the corresponding planning task. We compare our hierarchical POMDP planning system (HiPPo) with a non-hierarchical POMDP formulation and the Continual Planning (CP) framework that handles uncertainty in a qualitative manner. We show empirically that HiPPo and CP outperform the naive application of all visual operators on all ROIs. The key result is that the POMDP methods produce more robust plans than CP or the naive visual processing. In summary, visual processing problems represent a challenging domain for planning techniques and our hierarchical POMDP-based approach for visual processing management opens up a promising new line of research.", "The authors are interested in a knowledge-based technique (called program supervision) for managing the reuse of a modular set of programs. The focus of the paper is to analyse which reuse problems program supervision can solve. First, they propose a general definition for program supervision, a knowledge representation model, and a reasoning model. They then analyse a program supervision solution for reuse in terms of the structure of the programs to re-use and in terms of the effort for building a program supervision knowledge base. The paper concludes with what program supervision can do for program reuse from the points of view of the code developers, the experts, and the end-users.", "This paper describes and discusses the K-COPMAN (Knowledge-enabled Cognitive Perception for Manipulation) system, which enables autonomous robots to generate symbolic representations of perceived objects and scenes and to infer answers to complex queries that require the combination of perception and knowledge processing. Using K-COPMAN, the robot can solve inference tasks such as identifying items that are likely to be missing on a breakfast table. To the programmer K-COPMAN, is presented as a logic programming system that can be queried just like a symbolic knowledge base. Internally, K-COPMAN is realized through a data structure framework together with a library of state-of-the-art perception mechanisms for mobile manipulation in human environments. Key features of K-COPMAN are that it can make a robot environment-aware and that it supports goal-directed as well as passive perceptual processing. 
K-COPMAN is fully integrated into an autonomous mobile manipulation robot and is realized within the open-source robot library ROS.", "", "This article deals with the design of a system that automates the generation of image processing applications. Users describe tasks to perform on images and the system constructs a specific plan, which, after being executed, should yield the desired results. Our approach to this problem belongs to a more general category of systems for the supervision of a library of operators. The generation of an application is considered as the dynamic building of chains of image processing through the selection, parameter tuning and scheduling of existing operators. To develop such a system, we suggest to use a knowledge-rich resolution model and to integrate seven design rules. The Borg system has been developed following these prescriptions. It hinges on hierarchical, opportunistic and incremental planning by means of knowledge sources of the blackboard model, which enable to take into account the planning, evaluation and knowledge acquisition issues.", "This paper presents a new approach to the knowledge-based composition of processes for image interpretation and analysis. Its computer implementation in the VISIPLAN (Vision Planner) system provides a way of modeling the composition of image analysis processes within a unified, object-centered hierarchical planning framework. The approach has been tested and shown to be flexible in handling a reasonably broad class of multi-modality biomedical image analysis and interpretation problems. It provides a relatively general design or planning framework, within which problem specific image analysis and recognition processes can be generated more efficiently and effectively. In this way, generality is gained at the design and planning stages, even though the final implementation stage of interpretation processes is almost invariably problem- and domain-specific. >" ] }
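The design question studied in this entry, namely how tightly perception should be embedded into planning, can be illustrated with a toy comparison between perceiving everything up front and invoking perception only when the planner needs it. The scene contents and the call counter used as a cost proxy are assumptions for illustration only.

    SCENE = {"cup": True, "bolt": False, "nut": True}   # assumed ground truth
    calls = {"n": 0}                                    # perception-call counter (cost proxy)

    def perceive(obj):
        """Stub for an expensive vision routine; only the call count matters here."""
        calls["n"] += 1
        return SCENE[obj]

    def plan_eager(goal):
        """Run perception on every known object first, then plan."""
        detected = {o for o in SCENE if perceive(o)}
        return [f"pick({o})" for o in goal if o in detected]

    def plan_lazy(goal):
        """Interleave: call perception only for objects the goal actually needs."""
        return [f"pick({o})" for o in goal if perceive(o)]

    calls["n"] = 0
    print(plan_eager(["cup"]), "eager calls:", calls["n"])   # 3 perception calls
    calls["n"] = 0
    print(plan_lazy(["cup"]), "lazy calls:", calls["n"])     # 1 perception call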
1307.7477
2952666021
Gale and Sotomayor (1985) have shown that in the Gale-Shapley matching algorithm (1962), the proposed-to side W (referred to as women there) can strategically force the W-optimal stable matching as the M-optimal one by truncating their preference lists, each woman possibly blacklisting all but one man. As Gusfield and Irving have already noted in 1989, no results are known regarding achieving this feat by means other than such preference-list truncation, i.e. by also permuting preference lists. We answer Gusfield and Irving's open question by providing tight upper bounds on the number of blacklists and their combined size, that are required by the women to force a given matching as the M-optimal stable matching, or, more generally, as the unique stable matching. Our results show that the coalition of all women can strategically force any matching as the unique stable matching, using preference lists in which at most half of the women have nonempty blacklists, and in which the average blacklist size is less than 1. This allows the women to manipulate the market in a manner that is far more inconspicuous, in a sense, than previously realized. When there are fewer women than men, we show that in the absence of blacklists for men, the women can force any matching as the unique stable matching without blacklisting anyone, while when there are more women than men, each to-be-unmatched woman may have to blacklist as many as all men. Together, these results shed light on the question of how much, if at all, do given preferences for one side a priori impose limitations on the set of stable matchings under various conditions. All of the results in this paper are constructive, providing efficient algorithms for calculating the desired strategies.
Bridging the gap between manipulation by a single woman and manipulation by all women is manipulation by a coalition of women. @cite_10 show that even when blacklists are allowed, and even in many-to-many settings, if a coalition of women manipulates the (men-proposing) Gale-Shapley algorithm in a way that harms none of them, then no truthful woman is harmed either and no man gains. Coalitional manipulation is also considered by @cite_16 , who studies incentives for manipulation of the (men-proposing) Gale-Shapley algorithm by coalitions of men.
{ "cite_N": [ "@cite_16", "@cite_10" ], "mid": [ "2126927428", "2963788595" ], "abstract": [ "In 2003 there were 8,665 transplants of deceased donor kidneys for the approximately 60,000 patients waiting for such transplants in the United States. While waiting, 3,436 patients died. There were also 6,464 kidney transplants from living donors (Scientific Registry of Transplant Recipients web site). Live donation is an option for kidneys, since healthy people have two and can remain healthy with one. While it is illegal to buy or sell organs, there have started to be kidney exchanges involving two donor–patient pairs such that each (living) donor cannot give a kidney to the intended recipient because of blood type or immunological incompatibility, but each patient can receive a kidney from the other donor. So far these have been rare: as of December 2004, only five exchanges had been performed in the 14 transplant centers in New England. One reason there have been so few kidney exchanges is that there have not been databases of incompatible patient–donor pairs. Incompatible donors were simply sent home. (Databases are now being assembled not only in New England, but also in Ohio and Baltimore.) Lainie Friedman (1997) discussed the possibility of exchange between incompatible patient–donor pairs. Not only have a few such two-way exchanges been performed, but two three-way exchanges (in which the donor kidney from one pair is transplanted into the patient in a second pair, whose donor kidney goes to a third pair, whose donor kidney goes to the first pair) have been performed at Johns Hopkins. There have also been a number of “list exchanges” in which an incompatible patient– donor pair makes a donation to someone on the waiting list for a cadaver kidney, in return for the patient in the pair receiving high priority for a cadaver kidney when one becomes available.", "Lying in order to manipulate the Gale-Shapley matching algorithm has been studied by Dubins and Freedman (1981) and by Gale and Sotomayor (1985), and was shown to be generally more appealing to the proposed-to side (denoted as the women in Gale and Shapley's seminal paper (1962)) than to the proposing side (denoted as men there). It can also be shown that in the case of lying women, for every woman who is better off due to lying, there exists a man who is worse off. In this paper, we show that an even stronger dichotomy between the goals of the sexes holds, namely, if no woman is worse off then no man is better off, while a form of sisterhood between the lying and the \"innocent\" women also holds, namely, if none of the former is worse off, then neither is any of the latter. These results are robust: they generalize to the one-to-many variants of the algorithm and do not require the resulting matching to be stable (i.e. they hold even in out-of-equilibria situations). The machinery we develop in our proofs sheds new light on the structure of lying by women in the Gale-Shapley matching algorithm. This paper is based upon an undergraduate thesis (2007) by the first author." ] }
1307.7477
2952666021
Gale and Sotomayor (1985) have shown that in the Gale-Shapley matching algorithm (1962), the proposed-to side W (referred to as women there) can strategically force the W-optimal stable matching as the M-optimal one by truncating their preference lists, each woman possibly blacklisting all but one man. As Gusfield and Irving have already noted in 1989, no results are known regarding achieving this feat by means other than such preference-list truncation, i.e. by also permuting preference lists. We answer Gusfield and Irving's open question by providing tight upper bounds on the number of blacklists and their combined size, that are required by the women to force a given matching as the M-optimal stable matching, or, more generally, as the unique stable matching. Our results show that the coalition of all women can strategically force any matching as the unique stable matching, using preference lists in which at most half of the women have nonempty blacklists, and in which the average blacklist size is less than 1. This allows the women to manipulate the market in a manner that is far more inconspicuous, in a sense, than previously realized. When there are fewer women than men, we show that in the absence of blacklists for men, the women can force any matching as the unique stable matching without blacklisting anyone, while when there are more women than men, each to-be-unmatched woman may have to blacklist as many as all men. Together, these results shed light on the question of how much, if at all, do given preferences for one side a priori impose limitations on the set of stable matchings under various conditions. All of the results in this paper are constructive, providing efficient algorithms for calculating the desired strategies.
@cite_13 show that in the absence of blacklists, there exists a stable mechanism that is computationally hard to manipulate by a single participant. As our results yield a unique stable matching, they hold for any stable mechanism, and thus show, in a sense, that these results do not carry over to manipulation by an entire side of the market if even very small blacklists are allowed. @cite_15 gives a sufficient condition for uniqueness of a stable matching in the absence of blacklists; our results produce a unique stable matching, yet do not meet this sufficient condition, even if this condition is extended to a setting with blacklists by adding additional participants denoting an unmatched status.
{ "cite_N": [ "@cite_15", "@cite_13" ], "mid": [ "2115714210", "2162429744" ], "abstract": [ "Abstract A sufficient condition for uniqueness is identified on the preferences in the marriage problem, i.e. two-sided one-to-one matching with non transferable utility. For small economies this condition is also necessary. This class of preferences is broad and they are of particular relevance in economic applications.", "The stable marriage problem is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other. Such a problem has a wide variety of practical applications ranging from matching resident doctors to hospitals to matching students to schools. A well-known algorithm to solve this problem is the Gale-Shapley algorithm, which runs in polynomial time. It has been proven that stable marriage procedures can always be manipulated. Whilst the Gale-Shapley algorithm is computationally easy to manipulate, we prove that there exist stable marriage procedures which are NP-hard to manipulate. We also consider the relationship between voting theory and stable marriage procedures, showing that voting rules which are NP-hard to manipulate can be used to define stable marriage procedures which are themselves NP-hard to manipulate. Finally, we consider the issue that stable marriage procedures like Gale-Shapley favour one gender over the other, and we show how to use voting rules to make any stable marriage procedure gender neutral." ] }
1307.7192
1606909871
It is well known that the optimal convergence rate for stochastic optimization of smooth functions is @math , which is the same as for stochastic optimization of Lipschitz continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of @math . In this work, we consider a new setup for optimizing smooth functions, termed Mixed Optimization , which allows access to both a stochastic oracle and a full gradient oracle. Our goal is to significantly improve the convergence rate of stochastic optimization of smooth functions by having an additional small number of accesses to the full gradient oracle. We show that, with @math calls to the full gradient oracle and @math calls to the stochastic oracle, the proposed mixed optimization algorithm is able to achieve an optimization error of @math .
Unlike optimization methods based on full gradients, most stochastic optimization methods do not exploit the smoothness assumption. In fact, it was shown in @cite_18 that the @math convergence rate for stochastic optimization cannot be improved even when the objective function is smooth. This classical result is further confirmed by recent studies of composite bounds for first-order optimization methods @cite_20 @cite_10 . The smoothness of the objective function is exploited extensively in mini-batch stochastic optimization @cite_15 @cite_9 , where the goal is not to improve the convergence rate but to reduce the variance in stochastic gradients and, consequently, the number of solution updates @cite_21 . We finally note that the smoothness assumption coupled with strong convexity is beneficial in the stochastic setting and yields geometric convergence in expectation using the Stochastic Average Gradient (SAG) and Stochastic Dual Coordinate Ascent (SDCA) algorithms proposed in @cite_11 and @cite_7 , respectively.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_9", "@cite_21", "@cite_15", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "", "1939652453", "2130062883", "2952012226", "2951488730", "", "2016384870", "2105875671" ], "abstract": [ "", "Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.", "Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.", "Traditional algorithms for stochastic optimization require projecting the solution at each iteration into a given domain to ensure its feasibility. When facing complex domains, such as positive semi-definite cones, the projection operation can be expensive, leading to a high computational cost per iteration. In this paper, we present a novel algorithm that aims to reduce the number of projections for stochastic optimization. The proposed algorithm combines the strength of several recent developments in stochastic optimization, including mini-batch, extra-gradient, and epoch gradient descent, in order to effectively explore the smoothness and strong convexity. We show, both in expectation and with a high probability, that when the objective function is both smooth and strongly convex, the proposed algorithm achieves the optimal @math rate of convergence with only @math projections. Our empirical study verifies the theoretical result.", "Mini-batch algorithms have been proposed as a way to speed-up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up and propose a novel accelerated gradient algorithm, which deals with this deficiency, enjoys a uniformly superior guarantee and works well in practice.", "", "The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly dependent in the decision variables dimension, and thus suitable for solving very large scale optimization problems. 
We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance. Within this interpretation, we derive in a simple way convergence and efficiency estimates. We then propose an Entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be mildly dependent in the dimension of the problem.", "We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly." ] }
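The mixed full-gradient/stochastic oracle setup of this entry is closely related to variance reduction. The sketch below is an SVRG-style loop (one full-gradient call plus n stochastic calls per epoch), shown as one concrete instance of mixing the two oracles rather than as the algorithm of this paper or of the cited SAG/SDCA works; the least-squares instance and the step size are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 200, 5
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    def full_grad(x):            # full-gradient oracle: touches all n samples
        return A.T @ (A @ x - b) / n

    def stoch_grad(x, i):        # stochastic oracle: touches one sample
        return A[i] * (A[i] @ x - b[i])

    x, eta = np.zeros(d), 0.01
    for epoch in range(20):      # per epoch: 1 full-gradient call, n stochastic calls
        anchor, g_anchor = x.copy(), full_grad(x)
        for _ in range(n):
            i = rng.integers(n)
            # variance-reduced direction built from both oracles
            x = x - eta * (stoch_grad(x, i) - stoch_grad(anchor, i) + g_anchor)
    print("mean squared residual:", np.mean((A @ x - b) ** 2))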
1307.7249
2950859563
This paper examines the impact of system parameters such as access point density and bandwidth partitioning on the performance of randomly deployed, interference-limited, dense wireless networks. While much progress has been achieved in analyzing randomly deployed networks via tools from stochastic geometry, most existing works either assume a very large user density compared to that of access points which does not hold in a dense network, and/or consider only the user signal-to-interference-ratio as the system figure of merit which provides only partial insight on user rate, as the effect of multiple access is ignored. In this paper, the user rate distribution is obtained analytically, taking into account the effects of multiple access as well as the SIR outage. It is shown that user rate outage probability is dependent on the number of bandwidth partitions (subchannels) and the way they are utilized by the multiple access scheme. The optimal number of partitions is lower bounded for the case of large access point density. In addition, an upper bound of the minimum access point density required to provide an asymptotically small rate outage probability is provided in closed form.
With increasing network density, the task of optimally placing the APs in the Euclidean plane becomes difficult, if not impossible. Therefore, the APs will typically have an irregular, random deployment, which is expected to affect system performance. Recent research has shown that such randomly deployed cellular systems can be successfully analyzed by employing tools from stochastic geometry @cite_11 @cite_3 . While significant results have been achieved, most of these works assume @math , effectively ignoring the UE distribution, and/or consider only the user signal-to-interference ratio (SIR). Assumption @math does not hold in the case of dense networks, whereas the SIR provides only partial insight into the achieved user rate, as the effect of multiple access is neglected @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_11" ], "mid": [ "1994080576", "", "635250944" ], "abstract": [ "Imagine a world with more base stations than cell phones: this is where cellular technology is headed in 10-20 years. This mega-trend requires many fundamental differences in visualizing, modeling, analyzing, simulating, and designing cellular networks vs. the current textbook approach. In this article, the most important shifts are distilled down to seven key factors, with the implications described and new models and techniques proposed for some, while others are ripe areas for future exploration.", "", "Preface. Preface to Volume II. Contents of Volume II. Part IV Medium Access Control 1 Spatial Aloha: the Bipole Model 2 Receiver Selection in Spatial 3 Carrier Sense Multiple 4 Code Division Multiple Access in Cellular Networks Bibliographical Notes on Part IV. Part V Multihop Routing in Mobile ad Hoc Networks: 5 Optimal Routing 6 Greedy Routing 7 Time-Space Routing Bibliographical Notes on Part V. Part VI Appendix:Wireless Protocols and Architectures: 8 RadioWave Propagation 9 Signal Detection 10 Wireless Network Architectures and Protocols Bibliographical Notes on Part VI Bibliography Table of Notation Index." ] }
1307.7249
2950859563
This paper examines the impact of system parameters such as access point density and bandwidth partitioning on the performance of randomly deployed, interference-limited, dense wireless networks. While much progress has been achieved in analyzing randomly deployed networks via tools from stochastic geometry, most existing works either assume a very large user density compared to that of access points which does not hold in a dense network, and/or consider only the user signal-to-interference-ratio as the system figure of merit which provides only partial insight on user rate, as the effect of multiple access is ignored. In this paper, the user rate distribution is obtained analytically, taking into account the effects of multiple access as well as the SIR outage. It is shown that user rate outage probability is dependent on the number of bandwidth partitions (subchannels) and the way they are utilized by the multiple access scheme. The optimal number of partitions is lower bounded for the case of large access point density. In addition, an upper bound of the minimum access point density required to provide an asymptotically small rate outage probability is provided in closed form.
A few recent works have attempted to address these issues. Specifically, the UE distribution is taken into account in @cite_12 @cite_0 by incorporating into the analysis the probability of an AP being inactive (no UE present within its cell); however, the analysis considers only the SIR. In @cite_8 @cite_13 , the UE distribution is employed for the computation of user rates under time-division multiple access (TDMA), without considering the effect of SIR outage. In addition, TDMA may not be the best multiple access scheme in certain scenarios.
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_12", "@cite_8" ], "mid": [ "", "2005108639", "1963737897", "2087240286" ], "abstract": [ "", "Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate.", "Interference coordination improves data rates and reduces outages in cellular networks. Accurately evaluating the gains of coordination, however, is contingent upon using a network topology that models realistic cellular deployments. In this paper, we model the base stations locations as a Poisson point process to provide a better analytical assessment of the performance of coordination. Since interference coordination is only feasible within clusters of limited size, we consider a random clustering process where cluster stations are located according to a random point process and groups of base stations associated with the same cluster coordinate. We assume channel knowledge is exchanged among coordinating base stations, and we analyze the performance of interference coordination when channel knowledge at the transmitters is either perfect or acquired through limited feedback. We apply intercell interference nulling (ICIN) to coordinate interference inside the clusters. The feasibility of ICIN depends on the number of antennas at the base stations. Using tools from stochastic geometry, we derive the probability of coverage and the average rate for a typical mobile user. We show that the average cluster size can be optimized as a function of the number of antennas to maximize the gains of ICIN. To minimize the mean loss in rate due to limited feedback, we propose an adaptive feedback allocation strategy at the mobile users. We show that adapting the bit allocation as a function of the signals' strength increases the achievable rate with limited feedback, compared to equal bit partitioning. Finally, we illustrate how this analysis can help solve network design problems such as identifying regions where coordination provides gains based on average cluster size, number of antennas, and number of feedback bits.", "In this paper, we adopt stochastic geometry theory to analyze the optimal macro micro BS (base station) density for energy-efficient heterogeneous cellular networks with QoS constraints. We first derive the upper and lower bounds of the optimal BS density for homogeneous scenarios and, based on these, we analyze the optimal BS density for heterogeneous networks. 
The optimal macro micro BS density can be calculated numerically through our analysis, and the closed-form approximation is also derived. Our results reveal the best type of BSs to be deployed for capacity extension, or to be switched off for energy saving. Specifically, if the ratio between the micro BS cost and the macro BS cost is lower than a threshold, which is a function of path loss and their transmit power, the micro BSs are preferred, i.e., deploy more micro BSs for capacity extension or switch off certain macro BSs for energy saving. Otherwise, the optimal choice is the opposite. Our work provides guidance for energy efficient cellular network planning and dynamic operation control.1" ] }
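Following the discussion of TDMA user rates, the sketch below adds a UE process to the previous PPP simulation: the typical user's rate is the bandwidth divided by the number of UEs whose nearest AP is the same serving AP, times the Shannon efficiency. Boundary effects are ignored and all densities and the bandwidth are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    lam_ap, lam_ue, R, alpha, W = 1.0, 5.0, 10.0, 4.0, 20e6
    rates = []
    for _ in range(300):
        ap = rng.uniform(-R, R, size=(rng.poisson(lam_ap * 4 * R * R), 2))
        ue = rng.uniform(-R, R, size=(rng.poisson(lam_ue * 4 * R * R), 2))
        if len(ap) < 2:
            continue
        d0 = np.linalg.norm(ap, axis=1)      # AP distances to the typical user at the origin
        serving = int(np.argmin(d0))
        sir = d0[serving] ** (-alpha) / (np.sum(d0 ** (-alpha)) - d0[serving] ** (-alpha))
        # TDMA load: UEs whose nearest AP is the serving AP share its time slots
        nearest = np.argmin(np.linalg.norm(ue[:, None, :] - ap[None, :, :], axis=2), axis=1)
        n_share = 1 + int(np.sum(nearest == serving))
        rates.append(W / n_share * np.log2(1 + sir))
    print("median user rate (Mbit/s):", np.median(rates) / 1e6)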
1307.7249
2950859563
This paper examines the impact of system parameters such as access point density and bandwidth partitioning on the performance of randomly deployed, interference-limited, dense wireless networks. While much progress has been achieved in analyzing randomly deployed networks via tools from stochastic geometry, most existing works either assume a very large user density compared to that of access points which does not hold in a dense network, and/or consider only the user signal-to-interference-ratio as the system figure of merit which provides only partial insight on user rate, as the effect of multiple access is ignored. In this paper, the user rate distribution is obtained analytically, taking into account the effects of multiple access as well as the SIR outage. It is shown that user rate outage probability is dependent on the number of bandwidth partitions (subchannels) and the way they are utilized by the multiple access scheme. The optimal number of partitions is lower bounded for the case of large access point density. In addition, an upper bound of the minimum access point density required to provide an asymptotically small rate outage probability is provided in closed form.
Partitioning the available bandwidth and transmitting on one of the resulting subchannels (SCs), i.e., frequency-division multiple access (FDMA), has been shown in @cite_1 to be beneficial in ad-hoc networks under a channel access scheme where each node transmits independently on a randomly selected SC. This decentralized scheme was employed in @cite_9 for modeling the uplink of a cellular network with frequency-hopping channel access. However, this approach is inappropriate for a practical cellular network, where scheduling decisions are made by the AP and transmissions are orthogonalized to eliminate intra-cell interference (no sophisticated processing at receivers is assumed that would allow for non-orthogonal transmissions). A straightforward modification of the bandwidth partitioning concept for the downlink cellular network was considered in @cite_10 , where UEs are multiplexed via TDMA and transmission is performed on one randomly selected SC. This simple scheme was shown to provide improved SIR performance, however, with no explicit indication of how many partitions should be employed or how performance would change by allowing more than one UE to transmit in the same time slot on different SCs.
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_1" ], "mid": [ "2150166076", "2149165606", "2126158338" ], "abstract": [ "Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.", "Spectrum sharing between wireless networks improves the efficiency of spectrum usage, and thereby alleviates spectrum scarcity due to growing demands for wireless broadband access. To improve the usual underutilization of the cellular uplink spectrum, this paper addresses spectrum sharing between a cellular uplink and a mobile ad hoc networks. These networks access either all frequency subchannels or their disjoint subsets, called spectrum underlay and spectrum overlay, respectively. Given these spectrum sharing methods, the capacity trade-off between the coexisting networks is analyzed based on the transmission capacity of a network with Poisson distributed transmitters. This metric is defined as the maximum density of transmitters subject to an outage constraint for a given signal-to-interference ratio (SIR). Using tools from stochastic geometry, the transmission-capacity trade-off between the coexisting networks is analyzed, where both spectrum overlay and underlay as well as successive interference cancellation (SIC) are considered. In particular, for small target outage probability, the transmission capacities of the coexisting networks are proved to satisfy a linear equation, whose coefficients depend on the spectrum sharing method and whether SIC is applied. This linear equation shows that spectrum overlay is more efficient than spectrum underlay. Furthermore, this result also provides insight into the effects of network parameters on transmission capacities, including link diversity gains, transmission distances, and the base station density. In particular, SIC is shown to increase the transmission capacities of both coexisting networks by a linear factor, which depends on the interference-power threshold for qualifying canceled interferers.", "This paper addresses the following question, which is of interest in the design of a multiuser decentralized network. 
Given a total system bandwidth of W Hz and a fixed data rate constraint of R bps for each transmission, how many frequency slots N of size W N should the band be partitioned into in order to maximize the number of simultaneous links in the network? Dividing the available spectrum results in two competing effects. On the positive side, a larger N allows for more parallel, non- interfering communications to take place in the same area. On the negative side, a larger N increases the SINR requirement for each link because the same information rate must be achieved over less bandwidth. Exploring this tradeoff and determining the optimum value of N in terms of the system parameters is the focus of the paper. Using stochastic geometry, the optimal SINR threshold - which directly corresponds to the optimal spectral efficiency - is derived for both the low SNR (power-limited) and high SNR (interference-limited) regimes. This leads to the optimum choice of the number of frequency bands N in terms of the path loss exponent, power and noise spectral density, desired rate, and total bandwidth." ] }
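The bandwidth-partitioning trade-off described above (thinner interference per subchannel versus a higher SIR threshold needed to sustain a fixed rate over less bandwidth) can be explored numerically. In this sketch, which follows the model of @cite_1 only in spirit, the per-subchannel interferer density scales as lam/N and the threshold as 2^(N*R_link/W) - 1; the link length, density, and rate demand are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    # interferer density, path-loss exponent, rate demand, bandwidth, link length
    lam, alpha, R_link, W, d_tx = 0.1, 4.0, 1.0, 10.0, 1.0

    def success_prob(N, trials=4000, window=30.0):
        """P(SIR > 2^(N*R_link/W) - 1) when interferers thin to density lam/N per subchannel."""
        theta = 2 ** (N * R_link / W) - 1
        ok = 0
        for _ in range(trials):
            n = rng.poisson(lam / N * np.pi * window**2)
            r = window * np.sqrt(rng.random(n))          # interferer distances
            interference = np.sum(r ** (-alpha)) if n else 0.0
            ok += d_tx ** (-alpha) > theta * interference
        return ok / trials

    for N in (1, 2, 4, 8, 16):
        print(f"N={N:2d} subchannels: success probability {success_prob(N):.3f}")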
1307.7454
1661223625
We consider processing an n x d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an l x d matrix Q deterministically, processing each row in O(d l^2) time; the processing time can be decreased to O(d l) with a slight modification in the algorithm and a constant increase in space. We show that if one sets l = k + k/eps and returns Q_k, a k x d matrix that is the best rank k approximation to Q, then we achieve the following properties: ||A - A_k||_F^2 <= ||A||_F^2 - ||Q_k||_F^2 <= (1+eps) ||A - A_k||_F^2 and where pi_{Q_k}(A) is the projection of A onto the rowspace of Q_k then ||A - pi_{Q_k}(A)||_F^2 <= (1+eps) ||A - A_k||_F^2. We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains the l original rows of the matrix, as opposed to a linear combination or sketch of the rows.
The strongest version, providing bounds for some parameter @math , is some representation of a rank @math matrix @math such that @math for @math . Unless @math is sparse, storing @math explicitly may require @math space, which is why various representations of @math are used in its place. These can include decompositions similar to the SVD, e.g., a CUR decomposition @cite_15 @cite_5 @cite_8 where @math and where @math is small and dense, and @math and @math are sparse and skinny, or others @cite_0 where the middle matrix is still diagonal. Sparsity is often preserved by constructing the wrapper matrices (e.g., @math and @math ) from the original columns or rows of @math . There is an obvious @math space bound for any construction result, in order to preserve the column and row spaces.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_15", "@cite_8" ], "mid": [ "2101043704", "2141696759", "", "1998269045" ], "abstract": [ "We design a new distribution over poly(r e-1) x n matrices S so that for any fixed n x d matrix A of rank r, with probability at least 9 10, SAx2 = (1 pm e)Ax2 simultaneously for all x ∈ Rd. Such a matrix S is called a subspace embedding. Furthermore, SA can be computed in O(nnz(A)) + O(r2e-2) time, where nnz(A) is the number of non-zero entries of A. This improves over all previous subspace embeddings, which required at least Ω(nd log d) time to achieve this property. We call our matrices S sparse embedding matrices. Using our sparse embedding matrices, we obtain the fastest known algorithms for overconstrained least-squares regression, low-rank approximation, approximating all leverage scores, and lp-regression: to output an x' for which Ax'-b2 ≤ (1+e)minx Ax-b2 for an n x d matrix A and an n x 1 column vector b, we obtain an algorithm running in O(nnz(A)) + O(d3e-2) time, and another in O(nnz(A)log(1 e)) + O(d3log(1 e)) time. (Here O(f) = f ⋅ logO(1)(f).) to obtain a decomposition of an n x n matrix A into a product of an n x k matrix L, a k x k diagonal matrix D, and a n x k matrix W, for which F A - L D W ≤ (1+e)F A-Ak , where Ak is the best rank-k approximation, our algorithm runs in O(nnz(A)) + O(nk2 e-4log n + k3e-5log2n) time. to output an approximation to all leverage scores of an n x d input matrix A simultaneously, with constant relative error, our algorithms run in O(nnz(A) log n) + O(r3) time. to output an x' for which Ax'-bp ≤ (1+e)minx Ax-bp for an n x d matrix A and an n x 1 column vector b, we obtain an algorithm running in O(nnz(A) log n) + poly(r e-1) time, for any constant 1 ≤ p", "Principal components analysis and, more generally, the Singular Value Decomposition are fundamental data analysis tools that express a data matrix in terms of a sequence of orthogonal or uncorrelated vectors of decreasing importance. Unfortunately, being linear combinations of up to all the data points, these vectors are notoriously difficult to interpret in terms of the data and processes generating the data. In this article, we develop CUR matrix decompositions for improved data analysis. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and or actual rows of the data matrix. Because they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn (to the extent that the original data are). We present an algorithm that preferentially chooses columns and rows that exhibit high “statistical leverage” and, thus, in a very precise statistical sense, exert a disproportionately large “influence” on the best low-rank fit of the data matrix. By selecting columns and rows in this manner, we obtain improved relative-error and constant-factor approximation guarantees in worst-case analysis, as opposed to the much coarser additive-error guarantees of prior work. 
In addition, since the construction involves computing quantities with a natural and widely studied statistical interpretation, we can leverage ideas from diagnostic regression analysis to employ these matrix decompositions for exploratory data analysis.", "", "Many data analysis applications deal with large matrices and involve approximating the matrix using a small number of “components.” Typically, these components are linear combinations of the rows and columns of the matrix, and are thus difficult to interpret in terms of the original features of the input data. In this paper, we propose and study matrix approximations that are explicitly expressed in terms of a small number of columns and or rows of the data matrix, and thereby more amenable to interpretation in terms of the original data. Our main algorithmic results are two randomized algorithms which take as input an @math matrix @math and a rank parameter @math . In our first algorithm, @math is chosen, and we let @math , where @math is the Moore-Penrose generalized inverse of @math . In our second algorithm @math , @math , @math are chosen, and we let @math . ( @math and @math are matrices that consist of actual columns and rows, respectively, of @math , and @math is a generalized inverse of their intersection.) For each algorithm, we show that with probability at least @math , @math , where @math is the “best” rank- @math approximation provided by truncating the SVD of @math , and where @math is the Frobenius norm of the matrix @math . The number of columns of @math and rows of @math is a low-degree polynomial in @math , @math , and @math . Both the Numerical Linear Algebra community and the Theoretical Computer Science community have studied variants of these matrix decompositions over the last ten years. However, our two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist. Both of our algorithms are simple and they take time of the order needed to approximately compute the top @math singular vectors of @math . The technical crux of our analysis is a novel, intuitive sampling method we introduce in this paper called “subspace sampling.” In subspace sampling, the sampling probabilities depend on the Euclidean norms of the rows of the top singular vectors. This allows us to obtain provable relative-error guarantees by deconvoluting “subspace” information and “size-of- @math ” information in the input matrix. This technique is likely to be useful for other matrix approximation and data analysis problems." ] }
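Since all three entries here concern Frequent Directions, a self-contained sketch of its simplest variant may help: every incoming row is inserted into the (zero) last row of the sketch, followed by an SVD and a shrink by the smallest squared singular value. This per-row-SVD variant is slower than Liberty's buffered version but satisfies the covariance guarantee checked below; the data and sketch size are illustrative.

    import numpy as np

    def frequent_directions(A, ell):
        """Frequent Directions: stream the rows of A into an ell x d sketch Q.
        This simple per-row variant guarantees
        ||A^T A - Q^T Q||_2 <= ||A||_F^2 / ell."""
        Q = np.zeros((ell, A.shape[1]))
        for row in A:
            Q[-1] = row                    # the last row is always zero before this
            _, s, Vt = np.linalg.svd(Q, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[-1]**2, 0.0))  # shrink; smallest becomes 0
            Q = s[:, None] * Vt
        return Q

    rng = np.random.default_rng(5)
    A = rng.normal(size=(500, 30)) @ rng.normal(size=(30, 30))
    ell = 10
    Q = frequent_directions(A, ell)
    print("covariance error:", np.linalg.norm(A.T @ A - Q.T @ Q, 2))
    print("guaranteed bound:", np.linalg.norm(A, "fro") ** 2 / ell)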
1307.7454
1661223625
We consider processing an n x d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an l x d matrix Q deterministically, processing each row in O(d l^2) time; the processing time can be decreased to O(d l) with a slight modification in the algorithm and a constant increase in space. We show that if one sets l = k + k/eps and returns Q_k, a k x d matrix that is the best rank k approximation to Q, then we achieve the following properties: ||A - A_k||_F^2 <= ||A||_F^2 - ||Q_k||_F^2 <= (1+eps) ||A - A_k||_F^2 and where pi_{Q_k}(A) is the projection of A onto the rowspace of Q_k then ||A - pi_{Q_k}(A)||_F^2 <= (1+eps) ||A - A_k||_F^2. We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains the l original rows of the matrix, as opposed to a linear combination or sketch of the rows.
Many of these algorithms are streaming algorithms. To the best of our understanding, the best streaming algorithm @cite_17 is due to Clarkson and Woodruff; all bounds assume each matrix entry requires @math bits. The algorithm is randomized and constructs a decomposition of a rank @math matrix @math that satisfies @math with probability at least @math . This provides a relative-error construction bound of size @math bits. They also show a lower bound of @math bits.
{ "cite_N": [ "@cite_17" ], "mid": [ "2059867647" ], "abstract": [ "We give near-optimal space bounds in the streaming model for linear algebra problems that include estimation of matrix products, linear regression, low-rank approximation, and approximation of matrix rank. In the streaming model, sketches of input matrices are maintained under updates of matrix entries; we prove results for turnstile updates, given in an arbitrary order. We give the first lower bounds known for the space needed by the sketches, for a given estimation error e. We sharpen prior upper bounds, with respect to combinations of space, failure probability, and number of passes. The sketch we use for matrix A is simply STA, where S is a sign matrix. Our results include the following upper and lower bounds on the bits of space needed for 1-pass algorithms. Here A is an n x d matrix, B is an n x d' matrix, and c := d+d'. These results are given for fixed failure probability; for failure probability δ>0, the upper bounds require a factor of log(1 δ) more space. We assume the inputs have integer entries specified by O(log(nc)) bits, or O(log(nd)) bits. (Matrix Product) Output matrix C with F(ATB-C) ≤ e F(A) F(B). We show that Θ(ce-2log(nc)) space is needed. (Linear Regression) For d'=1, so that B is a vector b, find x so that Ax-b ≤ (1+e) minx' ∈ Reald Ax'-b. We show that Θ(d2e-1 log(nd)) space is needed. (Rank-k Approximation) Find matrix tAk of rank no more than k, so that F(A-tAk) ≤ (1+e) F A-Ak , where Ak is the best rank-k approximation to A. Our lower bound is Ω(ke-1(n+d)log(nd)) space, and we give a one-pass algorithm matching this when A is given row-wise or column-wise. For general updates, we give a one-pass algorithm needing [O(ke-2(n + d e2)log(nd))] space. We also give upper and lower bounds for algorithms using multiple passes, and a sketching analog of the CUR decomposition." ] }
1307.7454
1661223625
We consider processing an n x d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an l x d matrix Q deterministically, processing each row in O(d l^2) time; the processing time can be decreased to O(d l) with a slight modification in the algorithm and a constant increase in space. We show that if one sets l = k + k/eps and returns Q_k, a k x d matrix that is the best rank-k approximation to Q, then we achieve the following properties: ||A - A_k||_F^2 <= ||A||_F^2 - ||Q_k||_F^2 <= (1+eps) ||A - A_k||_F^2, and, where pi_{Q_k}(A) is the projection of A onto the rowspace of Q_k, ||A - pi_{Q_k}(A)||_F^2 <= (1+eps) ||A - A_k||_F^2. We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains the l original rows of the matrix, as opposed to a linear combination or sketch of the rows.
There is a wealth of literature on this problem; most recently, two algorithms @cite_0 @cite_6 showed how to construct a decomposition of @math that has rank @math with error bound @math with constant probability in approximately @math time. We refer to these papers for a more thorough survey of the history of the area, many other results, and other similar approximate linear algebra applications, but we attempt to report many of the most important related results in the appendix.
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2101043704", "2949809202" ], "abstract": [ "We design a new distribution over poly(r e-1) x n matrices S so that for any fixed n x d matrix A of rank r, with probability at least 9 10, SAx2 = (1 pm e)Ax2 simultaneously for all x ∈ Rd. Such a matrix S is called a subspace embedding. Furthermore, SA can be computed in O(nnz(A)) + O(r2e-2) time, where nnz(A) is the number of non-zero entries of A. This improves over all previous subspace embeddings, which required at least Ω(nd log d) time to achieve this property. We call our matrices S sparse embedding matrices. Using our sparse embedding matrices, we obtain the fastest known algorithms for overconstrained least-squares regression, low-rank approximation, approximating all leverage scores, and lp-regression: to output an x' for which Ax'-b2 ≤ (1+e)minx Ax-b2 for an n x d matrix A and an n x 1 column vector b, we obtain an algorithm running in O(nnz(A)) + O(d3e-2) time, and another in O(nnz(A)log(1 e)) + O(d3log(1 e)) time. (Here O(f) = f ⋅ logO(1)(f).) to obtain a decomposition of an n x n matrix A into a product of an n x k matrix L, a k x k diagonal matrix D, and a n x k matrix W, for which F A - L D W ≤ (1+e)F A-Ak , where Ak is the best rank-k approximation, our algorithm runs in O(nnz(A)) + O(nk2 e-4log n + k3e-5log2n) time. to output an approximation to all leverage scores of an n x d input matrix A simultaneously, with constant relative error, our algorithms run in O(nnz(A) log n) + O(r3) time. to output an x' for which Ax'-bp ≤ (1+e)minx Ax-bp for an n x d matrix A and an n x 1 column vector b, we obtain an algorithm running in O(nnz(A) log n) + poly(r e-1) time, for any constant 1 ≤ p", "An \"oblivious subspace embedding (OSE)\" given some parameters eps,d is a distribution D over matrices B in R^ m x n such that for any linear subspace W in R^n with dim(W) = d it holds that Pr_ B D (forall x in W ||B x||_2 in (1 + - eps)||x||_2) > 2 3 We show an OSE exists with m = O(d^2 eps^2) and where every B in the support of D has exactly s=1 non-zero entries per column. This improves previously best known bound in [Clarkson-Woodruff, arXiv:1207.6365]. Our quadratic dependence on d is optimal for any OSE with s=1 [Nelson-Nguyen, 2012]. We also give two OSE's, which we call Oblivious Sparse Norm-Approximating Projections (OSNAPs), that both allow the parameter settings m = O(d eps^2) and s = polylog(d) eps, or m = O(d^ 1+gamma eps^2) and s=O(1 eps) for any constant gamma>0. This m is nearly optimal since m >= d is required simply to no non-zero vector of W lands in the kernel of B. These are the first constructions with m=o(d^2) to have s=o(d). In fact, our OSNAPs are nothing more than the sparse Johnson-Lindenstrauss matrices of [Kane-Nelson, SODA 2012]. Our analyses all yield OSE's that are sampled using either O(1)-wise or O(log d)-wise independent hash functions, which provides some efficiency advantages over previous work for turnstile streaming applications. Our main result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: i.e. we show that for any U in R^ n x d with orthonormal columns and random sparse B, all singular values of BU lie in [1-eps, 1+eps] with good probability. Plugging OSNAPs into known algorithms for numerical linear algebra problems such as approximate least squares regression, low rank approximation, and approximating leverage scores implies faster algorithms for all these problems." ] }
1307.7454
1661223625
We consider processing an n x d matrix A in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an l x d matrix Q deterministically, processing each row in O(d l^2) time; the processing time can be decreased to O(d l) with a slight modification in the algorithm and a constant increase in space. We show that if one sets l = k + k/eps and returns Q_k, a k x d matrix that is the best rank-k approximation to Q, then we achieve the following properties: ||A - A_k||_F^2 <= ||A||_F^2 - ||Q_k||_F^2 <= (1+eps) ||A - A_k||_F^2, and, where pi_{Q_k}(A) is the projection of A onto the rowspace of Q_k, ||A - pi_{Q_k}(A)||_F^2 <= (1+eps) ||A - A_k||_F^2. We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains the l original rows of the matrix, as opposed to a linear combination or sketch of the rows.
Finally, we mention a recent algorithm by Liberty @cite_2 which runs in @math time, maintains a matrix with @math rows in a row-wise streaming fashion, and produces a matrix @math of rank at most @math such that any unit vector @math of length @math satisfies @math (sketched below). We examine a slight variation of this algorithm and describe the bounds it achieves in more familiar terms.
{ "cite_N": [ "@cite_2" ], "mid": [ "2951542269" ], "abstract": [ "We adapt a well known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives the rows of a large matrix @math one after the other in a streaming fashion. It maintains a sketch matrix @math such that for any unit vector @math [ |Ax |^2 |Bx |^2 |Ax |^2 - |A |_ f ^2 .] Sketch updates per row in @math require @math operations in the worst case. A slight modification of the algorithm allows for an amortized update time of @math operations per row. The presented algorithm stands out in that it is: deterministic, simple to implement, and elementary to prove. It also experimentally produces more accurate sketches than widely used approaches while still being computationally competitive." ] }
1307.7332
2403454593
Crowdsourcing allows one to instantly recruit workers on the web to annotate image, web page, or document databases. However, worker unreliability prevents taking a worker's responses at face value. Thus, responses from multiple workers are typically aggregated to more reliably infer ground-truth answers. We study two approaches for crowd aggregation on multicategory answer spaces: stochastic modeling based and deterministic objective function based. Our stochastic model for answer generation plausibly captures the interplay between worker skills, intentions, and task difficulties and allows us to model a broad range of worker types. Our deterministic objective based approach does not assume a model for worker response generation. Instead, it aims to maximize the average aggregate confidence of weighted plurality crowd decision making. In both approaches, we explicitly model the skill and intention of individual workers, which is exploited for improved crowd aggregation. Our methods are applicable in both unsupervised and semisupervised settings, and also when the batch of tasks is heterogeneous. As observed experimentally, the proposed methods can defeat the tyranny of the masses; they are especially advantageous when there is a minority of skilled workers amongst a large crowd of unskilled and malicious workers.
Adversarial workers in the binary case were accounted for in @cite_11 and @cite_0 . In this work, we characterized adversarial behavior in a more general (multicategory) setting and proposed several realistic adversarial models. We also showed how the interpretation of negative weights as representing adversaries carries over from the binary case @cite_0 to the multicategory case. Moreover, we showed that our approach exploits responses from (simple) adversaries to actually improve the overall performance. @cite_0 and @cite_21 consider other statistical methods, such as correlation-based rules and low-rank approximation of matrices. These methods have been studied for binary classification tasks. Our objective-based approach generalizes the weighted majority theme of these papers to the multicategory case, incorporating honest workers, adversaries, and spammers (a small sketch of such weighted plurality aggregation appears below). We note that recently, @cite_4 extended the low-rank approximation approach to the case when the tasks do not have a ground-truth answer and the answers (from a categorical space) can be subjective. In this case, "schools of thought" are discovered via clustering, and the average size of clusters for each task is representative of its ease (clarity).
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_4", "@cite_11" ], "mid": [ "2140890285", "2554839354", "2162815002", "2142518823" ], "abstract": [ "Crowdsourcing systems, in which tasks are electronically distributed to numerous \"information piece-workers\", have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks, and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm significantly outperforms majority voting and, in fact, is asymptotically optimal through comparison to an oracle that knows the reliability of every worker.", "Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous “information pieceworkers”, have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, based on low-rank matrix approximation, significantly outperforms majority voting and, in fact, is order-optimal through comparison to an oracle that knows the reliability of every worker.", "Crowdsourcing has recently become popular among machine learning researchers and social scientists as an effective way to collect large-scale experimental data from distributed workers. To extract useful information from the cheap but potentially unreliable answers to tasks, a key problem is to identify reliable workers as well as unambiguous tasks. Although for objective tasks that have one correct answer per task, previous works can estimate worker reliability and task clarity based on the single gold standard assumption, for tasks that are subjective and accept multiple reasonable answers that workers may be grouped into, a phenomenon called schools of thought, existing models cannot be trivially applied. In this work, we present a statistical model to estimate worker reliability and task clarity without resorting to the single gold standard assumption. This is instantiated by explicitly characterizing the grouping behavior to form schools of thought with a rank-1 factorization of a worker-task groupsize matrix. 
Instead of performing an intermediate inference step, which can be expensive and unstable, we present an algorithm to analytically compute the sizes of different groups. We perform extensive empirical studies on real data collected from Amazon Mechanical Turk. Our method discovers the schools of thought, shows reasonable estimation of worker reliability and task clarity, and is robust to hyperparameter changes. Furthermore, our estimated worker reliability can be used to improve the gold standard prediction for objective tasks.", "Modern machine learning-based approaches to computer vision require very large databases of hand labeled images. Some contemporary vision systems already require on the order of millions of images for training (e.g., Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings interesting theoretical and practical challenges: (1) The labelers may have wide ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. On both simulated and real data, we demonstrate that the model outperforms the commonly used \"Majority Vote\" heuristic for inferring image labels, and is robust to both noisy and adversarial labelers." ] }
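As a small illustration of the objective-based aggregation discussed above, here is a weighted plurality rule over multicategory answers. The sign convention for weights (negative for adversaries, near zero for spammers) follows the text; the array layout and the handling of missing answers are our assumptions.

```python
import numpy as np

def weighted_plurality(labels, weights, num_classes):
    """labels: (workers, tasks) int array with -1 marking 'no answer';
    weights: per-worker reliability weights. Each worker adds its
    weight to the score of the class it chose; the argmax wins."""
    W, T = labels.shape
    scores = np.zeros((T, num_classes))
    for w in range(W):
        answered = labels[w] >= 0
        scores[answered, labels[w, answered]] += weights[w]
    return scores.argmax(axis=1)
```

With this convention, a worker with a large negative weight effectively votes against its own answers, which is how (simple) adversaries can be exploited to improve the aggregate decision.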
1307.6923
1782137315
Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then applies compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data-driven construction for the projection matrix, which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets, including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of the projection matrix, offers significantly better compression performance than the state-of-the-art alternatives.
The adaptive compression algorithms proposed so far for wireless sensor networks mainly adapt compression for energy savings. Most of the algorithms proposed in the past consider slowly changing natural phenomena, which intrinsically require relatively low sampling rates; therefore, bandwidth conservation has received secondary focus compared to energy conservation. For example, in @cite_12 the authors propose an adaptive compression algorithm, wherein compression is adapted at the sensing node by analyzing the correlation in a centralized data store. Since the approach requires central-server-to-node communication, it is suitable for slowly changing phenomena, e.g., soil moisture. However, we consider trajectories with sampling rates as high as 2 Hz; therefore, such a technique may result in enormous node-to-base communication, causing quick depletion of the sensor node battery. (A toy sketch of the speed-based adaptation at the heart of our own approach appears below.)
{ "cite_N": [ "@cite_12" ], "mid": [ "2164680510" ], "abstract": [ "We propose a novel approach to reducing energy consumption in sensor networks using a distributed adaptive signal processing framework and efficient algorithm. While the topic of energy-aware routing to alleviate energy consumption in sensor networks has received attention recently (C. Toh, 2001; R. , 2002), in this paper, we propose an orthogonal approach to previous methods. Specifically, we propose a distributed way of continuously exploiting existing correlations in sensor data based on adaptive signal processing and distributed source coding principles. Our approach enables sensor nodes to blindly compress their readings with respect to one another without the need for explicit and energy-expensive intersensor communication to effect this compression. Furthermore, the distributed algorithm used by each sensor node is extremely low in complexity and easy to implement (i.e., one modulo operation), while an adaptive filtering framework is used at the data gathering unit to continuously learn the relevant correlation structures in the sensor data. Our simulations show the power of our proposed algorithms, revealing their potential to effect significant energy savings (from 10 -65 ) for typical sensor data corresponding to a multitude of sensor modalities." ] }
1307.6923
1782137315
Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then applies compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data-driven construction for the projection matrix, which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets, including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of the projection matrix, offers significantly better compression performance than the state-of-the-art alternatives.
Some other adaptive compression algorithms, although they do not require much inter-node communication, require a large amount of on-node processing. For example, in @cite_29 the authors propose an adaptive wavelet compression algorithm for wireless sensor networks. In the proposed method each receiving sensor computes the compression ratio and calculates the total energy dissipation (using both computation and communication energy models) to decide whether to increase the wavelet transform level or to keep the present level. The sensor then runs wavelet compression with the next transform level to compute the new compression ratio, computes the new value of total energy dissipation, and compares it with the old value. These steps are repeated as long as the new energy estimate is smaller than the old estimate and the wavelet transform level is below some maximum allowed value. After this operation, the nodes transmit data to the central nodes applying the computed wavelet transform level. This method involves substantial computation, given that for each trajectory segment it has to iterate multiple times to determine the transform level with the best compression and energy trade-off (the loop is sketched below).
{ "cite_N": [ "@cite_29" ], "mid": [ "2112025575" ], "abstract": [ "In this paper we proposed a novel Adaptive Distributed Wavelet Compression (ADWC) algorithm for reducing energy consumption in a wireless sensor network, where each of the sensors has limited power. This algorithm is characterized by a distributed lifting factorization, which matching well with the transmission strategy employed in wireless sensor networks., it also present an adaptive algorithm to selects the optimal wavelet compression parameters to minimize total energy dissipation. The simulation results showed that these approaches can achieve significant energy savings without sacrificing the quality of the data reconstruction." ] }
1307.6923
1782137315
Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then applies compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data-driven construction for the projection matrix, which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets, including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of the projection matrix, offers significantly better compression performance than the state-of-the-art alternatives.
A similar problem arises in the algorithm proposed in @cite_16 , which employs a feedback approach in which the compression ratio is compared to a pre-determined threshold. The compression model used in the previous frame is retained and used for the next frame if the compression ratio is greater than the predefined threshold; otherwise, the adaptive operation of the system produces a new compression model (a small sketch of this feedback rule appears below).
{ "cite_N": [ "@cite_16" ], "mid": [ "2109878464" ], "abstract": [ "Data compression techniques have extensive applications in power-constrained digital communication systems, such as in the rapidly-developing domain of wireless sensor network applications. This paper explores energy consumption tradeoffs associated with data compression, particularly in the context of lossless compression for acoustic signals. Such signal processing is relevant in a variety of sensor network applications, including surveillance and monitoring. Applying data compression in a sensor node generally reduces the energy consumption of the transceiver at the expense of additional energy expended in the embedded processor due to the computational cost of compression. This paper introduces a methodology for comparing data compression algorithms in sensor networks based on the figure of merit D E, where D is the amount of data (before compression) that can be transmitted under a given energy budget E for computation and communication. We develop experiments to evaluate, using this figure of merit, different variants of linear predictive coding. We also demonstrate how different models of computation applied to the embedded software design lead to different degrees of processing efficiency, and thereby have significant effect on the targeted figure of merit." ] }
1307.6923
1782137315
Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then applies compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data-driven construction for the projection matrix, which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets, including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of the projection matrix, offers significantly better compression performance than the state-of-the-art alternatives.
A slightly different adaptive compression principle is applied in the algorithm proposed in @cite_18 . The authors design an on-line adaptive algorithm that dynamically makes compression decisions to accommodate the changing state of WSNs. In the algorithm, a queueing model is adopted to estimate the queueing behavior of sensors with the assistance of only local information at each sensor node. Using the queueing model, the algorithm predicts the effect of compression on the average packet delay and performs compression only when it can reduce the packet delay (see the sketch below). This algorithm is quite elegant, since it does not require much on-node processing or inter-node communication; however, it may not be suitable for arbitrary trajectories. Instead, it is suitable for trajectories where objects remain stationary for a substantial amount of time, so that compression is applied only when they are moving. Note that our proposed method is more general. The compression ratio is adapted to the speed of the object; therefore, when the object is not moving, maximum compression is achieved, and as the object starts moving, instead of maintaining a common compression ratio, we adapt the compression ratio to the speed.
{ "cite_N": [ "@cite_18" ], "mid": [ "2105564986" ], "abstract": [ "In this paper, architectures for two-dimensional and three-dimensional underwater sensor networks are discussed. A detailed overview on the current solutions for medium access control, network, and transport layer protocols are given and open research issues are discussed." ] }
1307.6923
1782137315
Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then applies compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data-driven construction for the projection matrix, which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets, including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of the projection matrix, offers significantly better compression performance than the state-of-the-art alternatives.
Finally, in @cite_1 the authors present an adaptive lossless data compression (ALDC) algorithm for wireless sensor networks. The data sequence to be compressed is partitioned into blocks, and the optimal compression scheme is applied to each block. However, the proposed algorithm is lossless and is therefore not robust to the data loss typical of wireless sensor network platforms.
{ "cite_N": [ "@cite_1" ], "mid": [ "2074063434" ], "abstract": [ "Energy is an important consideration in the design and deployment of wireless sensor networks (WSNs) since sensor nodes are typically powered by batteries with limited capacity. Since the communication unit on a wireless sensor node is the major power consumer, data compression is one of possible techniques that can help reduce the amount of data exchanged between wireless sensor nodes resulting in power saving. However, wireless sensor networks possess significant limitations in communication, processing, storage, bandwidth, and power. Thus, any data compression scheme proposed for WSNs must be lightweight. In this paper, we present an adaptive lossless data compression (ALDC) algorithm for wireless sensor networks. Our proposed ALDC scheme performs compression losslessly using multiple code options. Adaptive compression schemes allow compression to dynamically adjust to a changing source. The data sequence to be compressed is partitioned into blocks, and the optimal compression scheme is applied for each block. Using various real-world sensor datasets we demonstrate the merits of our proposed compression algorithm in comparison with other recently proposed lossless compression algorithms for WSNs." ] }
1307.6923
1782137315
Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then applies compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data-driven construction for the projection matrix, which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets, including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of the projection matrix, offers significantly better compression performance than the state-of-the-art alternatives.
Elad in @cite_23 and, in @cite_10 , Julio have optimized the projection matrix to achieve a better compression ratio. Elad defined a new mutual coherence, which describes the correlation between the dictionary and the projection matrix: the smaller the mutual coherence, the better the compression performance. Elad minimized the mutual coherence with respect to the projection matrix, keeping the dictionary fixed. In addition to optimizing the projection matrix, Julio optimized the dictionary simultaneously. In particular, Julio uses the recently proposed K-SVD algorithm from @cite_37 to learn the dictionary and then jointly optimizes the dictionary and projection matrix by maximizing the number of orthogonal columns in their product. We use SPAMS to learn the dictionary, which is different from K-SVD. In addition, in order to optimize the projection matrix we obtain a special singular value decomposition of the dictionary, which naturally produces a low-coherence projection matrix and dictionary pair (the construction and the coherence measure are sketched below). We contrasted our work with Elad's; however, we had difficulties running Julio's method on our trajectory dataset.
{ "cite_N": [ "@cite_37", "@cite_10", "@cite_23" ], "mid": [ "", "2102129292", "2134033146" ], "abstract": [ "", "Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a liner combination of a few atoms from such dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far less samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.", "Compressed sensing (CS) offers a joint compression and sensing processes, based on the existence of a sparse representation of the treated signal and a set of projected measurements. Work on CS thus far typically assumes that the projections are drawn at random. In this paper, we consider the optimization of these projections. Since such a direct optimization is prohibitive, we target an average measure of the mutual coherence of the effective dictionary, and demonstrate that this leads to better CS reconstruction performance. Both the basis pursuit (BP) and the orthogonal matching pursuit (OMP) are shown to benefit from the newly designed projections, with a reduction of the error rate by a factor of 10 and beyond." ] }
1307.6923
1782137315
Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then applies compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data-driven construction for the projection matrix, which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets, including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of the projection matrix, offers significantly better compression performance than the state-of-the-art alternatives.
In @cite_8 the authors propose a trajectory compression algorithm which uses various line simplification methods, for example Dead-Reckoning and the Douglas-Peucker algorithm, as well as a variant of a CG-based optimal algorithm for polyline reduction. In particular, the authors also propose a hybrid approach, which combines some of the above methods. Note that of the three methods, Douglas-Peucker is the most popular (a compact implementation appears below). In our previous work, we have already shown that the non-optimized version of our projection matrix performs better than the improved Douglas-Peucker method proposed in @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_8" ], "mid": [ "1600482701", "1585795567" ], "abstract": [ "Moving object data handling has received a fair share of attention over recent years in the spatial database community. This is understandable as positioning technology is rapidly making its way into the consumer market, not only through the already ubiquitous cell phone but soon also through small, on-board positioning devices in many means of transport and in other types of portable equipment. It is thus to be expected that all these devices will start to generate an unprecedented data stream of time-stamped positions. Sooner or later, such enormous volumes of data will lead to storage, transmission, computation, and display challenges. Hence, the need for compression techniques.", "This work addresses the problem of balancing the trade-off between the energy cost due to communication and the accuracy of the tracking-based trajectories’ detection and representation in Wireless Sensor Networks (WSNs) settings. We consider some of the approaches used by the Moving Objects Databases (MOD) and Computational Geometry (CG) communities, and we demonstrate that with appropriate adaptation, they can yield significant benefits in terms of energy savings and, consequently, lifetime of a given WSN. Towards that, we developed distributed variations of three approaches for spatio-temporal data reduction – two heuristics (Dead-Reckoning and the Douglas-Peuker algorithm), and a variant of a CG-based optimal algorithm for polyline reduction. In addition, we examine different policies for managing the buffer used by the individual tracking nodes for storing the partial trajectory data. Lastly, we investigated the potential benefits of combining the different data-reduction approaches into ”hybrid” ones during tracking of a particular object’s trajectory. Our experiments demonstrate that the proposed methodologies can significantly reduce the network-wide energy expenses due to communication and increase the network lifetime." ] }
1307.6488
2952425272
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
To achieve sustained performance on hybrid supercomputers and reduce programming cost, various programming frameworks and tools have been developed, e.g., Merge @cite_2 (a library-based framework for heterogeneous multi-core systems), Zippy @cite_51 (a framework for parallel execution of codes on multiple GPUs), BSGP @cite_16 (a new programming language for general purpose computation on the GPU), and CUDA-lite @cite_31 (an enhancement to CUDA that transforms code based on annotations). Efforts are also underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs @cite_18 and for automatic translation of OpenMP parallelized codes to CUDA @cite_27 . Finally, OpenACC is slated to provide OpenMP-like annotations for C and Fortran code.
{ "cite_N": [ "@cite_18", "@cite_27", "@cite_2", "@cite_31", "@cite_16", "@cite_51" ], "mid": [ "2083056254", "2170634604", "2109426995", "", "2123372783", "1996488762" ], "abstract": [ "GPUs are a class of specialized parallel architectures with tremendous computational power. The new Compute Unified Device Architecture (CUDA) programming model from NVIDIA facilitates programming of general purpose applications on their GPUs. However, manual development of high-performance parallel code for GPUs is still very challenging. In this paper, a number of issues are addressed towards the goal of developing a compiler framework for automatic parallelization and performance optimization of affine loop nests on GPGPUs: 1) approach to program transformation for efficient data access from GPU global memory, using a polyhedral compiler model of data dependence abstraction and program transformation; 2) determination of optimal padding factors for conflict-minimal data access from GPU shared memory; and 3) model-driven empirical search to determine optimal parameters for unrolling and tiling. Experimental results on a number of kernels demonstrate the effectiveness of the compiler optimization approaches developed.", "GPGPUs have recently emerged as powerful vehicles for general-purpose high-performance computing. Although a new Compute Unified Device Architecture (CUDA) programming model from NVIDIA offers improved programmability for general computing, programming GPGPUs is still complex and error-prone. This paper presents a compiler framework for automatic source-to-source translation of standard OpenMP applications into CUDA-based GPGPU applications. The goal of this translation is to further improve programmability and make existing OpenMP applications amenable to execution on GPGPUs. In this paper, we have identified several key transformation techniques, which enable efficient GPU global memory access, to achieve high performance. Experimental results from two important kernels (JACOBI and SPMUL) and two NAS OpenMP Parallel Benchmarks (EP and CG) show that the described translator and compile-time optimizations work well on both regular and irregular applications, leading to performance improvements of up to 50X over the unoptimized translation (up to 328X over serial).", "In this paper we propose the Merge framework, a general purpose programming model for heterogeneous multi-core systems. The Merge framework replaces current ad hoc approaches to parallel programming on heterogeneous platforms with a rigorous, library-based methodology that can automatically distribute computation across heterogeneous cores to achieve increased energy and performance efficiency. The Merge framework provides (1) a predicate dispatch-based library system for managing and invoking function variants for multiple architectures; (2) a high-level, library-oriented parallel language based on map-reduce; and (3) a compiler and runtime which implement the map-reduce language pattern by dynamically selecting the best available function implementations for a given input and machine configuration. Using a generic sequencer architecture interface for heterogeneous accelerators, the Merge framework can integrate function variants for specialized accelerators, offering the potential for to-the-met al performance for a wide range of heterogeneous architectures, all transparent to the user. 
The Merge framework has been prototyped on a heterogeneous platform consisting of an Intel Core 2 Duo CPU and an 8-core 32-thread Intel Graphics and Media Accelerator X3000, and a homogeneous 32-way Unisys SMP system with Intel Xeon processors. We implemented a set of benchmarks using the Merge framework and enhanced the library with X3000 specific implementations, achieving speedups of 3.6x -- 8.5x using the X3000 and 5.2x -- 22x using the 32-way system relative to the straight C reference implementation on a single IA32 core.", "", "We present BSGP, a new programming language for general purpose computation on the GPU. A BSGP program looks much the same as a sequential C program. Programmers only need to supply a bare minimum of extra information to describe parallel processing on GPUs. As a result, BSGP programs are easy to read, write, and maintain. Moreover, the ease of programming does not come at the cost of performance. A well-designed BSGP compiler converts BSGP programs to kernels and combines them using optimally allocated temporary streams. In our benchmark, BSGP programs achieve similar or better performance than well-optimized CUDA programs, while the source code complexity and programming time are significantly reduced. To test BSGP's code efficiency and ease of programming, we implemented a variety of GPU applications, including a highly sophisticated X3D parser that would be extremely difficult to develop with existing GPU programming languages.", "Due to its high performance cost ratio, a GPU cluster is an attractive platform for large scale general-purpose computation and visualization applications. However, the programming model for high performance general-purpose computation on GPU clusters remains a complex problem. In this paper, we introduce the Zippy frame-work, a general and scalable solution to this problem. It abstracts the GPU cluster programming with a two-level parallelism hierarchy and a non-uniform memory access (NUMA) model. Zippy preserves the advantages of both message passing and shared-memory models. It employs global arrays (GA) to simplify the communication, synchronization, and collaboration among multiple GPUs. Moreover, it exposes data locality to the programmer for optimal performance and scalability. We present three example applications developed with Zippy: sort-last volume rendering, Marching Cubes isosurface extraction and rendering, and lattice Boltzmann flow simulation with online visualization. They demonstrate that Zippy can ease the development and integration of parallel visualization, graphics, and computation modules on a GPU cluster." ] }
1307.6488
2952425272
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
Jacobsen @cite_43 extended this model by adding inter-node communication via MPI. They followed the approach described in Micik @cite_42 and overlapped the communication with computation, as well as GPU-host with host-host data exchange (the overlap pattern is sketched below). However, they did not take advantage of the full-duplex nature of the PCI-Express bus, which would have decreased the time spent on communication. Their computational model also divides the domain along the slowest-varying dimension only, and this approach is not suitable for all numerical problems. For example, for large computational domains, the size of the ghost zone becomes noticeable in comparison to the computed part of the domain, and the communication cost becomes larger than the computational cost, which can be observed in the non-linear scaling of their model.
{ "cite_N": [ "@cite_43", "@cite_42" ], "mid": [ "2108266785", "2620842051" ], "abstract": [ "Modern graphics processing units (GPUs) with many-core architectures have emerged as general-purpose parallel computing platforms that can accelerate simulation science applications tremendously. While multiGPU workstations with several TeraFLOPS of peak computing power are available to accelerate computational problems, larger problems require even more resources. Conventional clusters of central processing units (CPU) are now being augmented with multiple GPUs in each compute-node to tackle large problems. The heterogeneous architecture of a multi-GPU cluster with a deep memory hierarchy creates unique challenges in developing scalable and efficient simulation codes. In this study, we pursue mixed MPI-CUDA implementations and investigate three strategies to probe the efficiency and scalability of incompressible flow computations on the Lincoln Tesla cluster at the National Center for Supercomputing Applications (NCSA). We exploit some of the advanced features of MPI and CUDA programming to overlap both GPU data transfer and MPI communications with computations on the GPU. We sustain approximately 2.4 TeraFLOPS on the 64 nodes of the NCSA Lincoln Tesla cluster using 128 GPUs with a total of 30,720 processing elements. Our results demonstrate that multi-GPU clusters can substantially accelerate computational fluid dynamics (CFD) simulations.", "A connector for mounting on a vertical main frame has an elongate body portion containing a protector field and an elongate jumper field extending along one side. The connector is mounted on the frame member by a bracket which mounts the connector so that the body portion extends at an angle to the plane of the main frame. With an appropriate angle, a useful space is provided between connectors, for access, while removal and replacement of protector modules, and entire connectors, is readily available. One particularly appropriate angle is about 53 DEG , while another is about 70 DEG , although these can vary. A test field can also extend along one side, adjacent to the jumper field." ] }
1307.6488
2952425272
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
Notable work on an example stencil application was selected as a finalist for the Gordon Bell Prize at SC 2011 as the first peta-scale result @cite_22 . The authors demonstrated very high performance of 1.017 PFlop/s in single precision using 4,000 GPUs along with 16,000 CPU cores on TSUBAME 2.0. Nevertheless, a set of new and more advanced optimization techniques introduced in the framework, as well as its capabilities to generate highly efficient multi-GPU stencil computing codes from a high-level problem description, make this framework even more attractive for users of large-scale hybrid systems.
{ "cite_N": [ "@cite_22" ], "mid": [ "2104853465" ], "abstract": [ "Many numerical codes now under development to solve Einstein's equations of general relativity in @math -dimensional spacetimes employ the standard ADM form of the field equations. This form involves evolution equations for the raw spatial metric and extrinsic curvature tensors. Following Shibata and Nakamura, we modify these equations by factoring out the conformal factor and introducing three connection functions.'' The evolution equations can then be reduced to wave equations for the conformal metric components, which are coupled to evolution equations for the connection functions. We evolve small amplitude gravitational waves and make a direct comparison of the numerical performance of the modified equations with the standard ADM equations. We find that the modified form exhibits much improved stability." ] }
1307.5967
1741551155
Two central topics of study in combinatorics are the so-called evolution of random graphs, introduced by the seminal work of Erdős and Rényi, and the family of @math -free graphs, that is, graphs which do not contain a subgraph isomorphic to a given (usually small) graph @math . A widely studied problem that lies at the interface of these two areas is that of determining how the structure of a typical @math -free graph with @math vertices and @math edges changes as @math grows from @math to @math . In this paper, we resolve this problem in the case when @math is a clique, extending a classical result of Kolaitis, Prömel, and Rothschild. In particular, we prove that for every @math , there is an explicit constant @math such that, letting @math , the following holds for every positive constant @math . If @math , then almost all @math -free @math -vertex graphs with @math edges are @math -partite, whereas if @math , then almost all of them are not @math -partite.
Let us remark here that a statement that is even stronger than Conjecture was proved by Osthus, Prömel, and Taraz @cite_3 in the case when @math is a cycle of odd length. More precisely, the following was shown in @cite_3 . Let @math be an integer, let @math be an arbitrary positive constant, and let \[ t_\ell = t_\ell(n) = \left( \frac{\ell}{\ell-1} \left( \frac{n}{2} \right)^{\ell} \log n \right)^{\frac{1}{\ell-1}} . \] If @math , then almost all graphs in @math are not bipartite and if @math , then almost all of them are bipartite.
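As a consistency check on the displayed threshold (the formula above is our reconstruction of a garbled equation in the source), specializing it to @math equal to 3 recovers the known sharp threshold for triangle-free graphs stated in @cite_3 : \[ t_3 = \left( \frac{3}{2} \left( \frac{n}{2} \right)^{3} \log n \right)^{1/2} = \left( \frac{3}{16} \, n^{3} \log n \right)^{1/2} = \frac{\sqrt{3}}{4} \, n^{3/2} (\log n)^{1/2} . \]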
{ "cite_N": [ "@cite_3" ], "mid": [ "2023104754" ], "abstract": [ "Denote by @math the class of all triangle-free graphs on n vertices and m edges. Our main result is the following sharp threshold, which answers the question for which densities a typical triangle-free graph is bipartite. Fix e > 0 and let @math . If n 2 ≤ m ≤ (1 − e) t3, then almost all graphs in @math are not bipartite, whereas if m ≥ (1 + e)t3, then almost all of them are bipartite. For m ≥ (1 + e)t3, this allows us to determine asymptotically the number of graphs in @math . We also obtain corresponding results for Cl-free graphs, for any cycle Cl of fixed odd length." ] }
1307.4879
2949472382
We perform an automatic analysis of television news programs, based on the closed captions that accompany them. Specifically, we collect all the news broadcasted in over 140 television channels in the US during a period of six months. We start by segmenting, processing, and annotating the closed captions automatically. Next, we focus on the analysis of their linguistic style and on mentions of people using NLP methods. We present a series of key insights about news providers, people in the news, and we discuss the biases that can be uncovered by automatic means. These insights are contrasted by looking at the data from multiple points of view, including qualitative assessment.
One of the oldest references on mining television content is a DARPA-sponsored workshop in 1999 with a topic detection and tracking challenge @cite_10 . Higher-level applications have been emerging in recent years. For example, one line of work describes a system for finding web pages related to television content, and tests different methods to synthesize a web search query from a television transcript. Another classifies videos based on a transcription obtained from speech recognition. A third describes a system to rate the credibility of information items on television by looking at how often the same image is described in a similar way by more than one news source.
{ "cite_N": [ "@cite_10" ], "mid": [ "1521682831" ], "abstract": [ "This paper describes the creation and content of the TDT-2 corpus in the context of the TDT-2 research project it supports and in comparison to previous and subsequent efforts" ] }
1307.4879
2949472382
We perform an automatic analysis of television news programs, based on the closed captions that accompany them. Specifically, we collect all the news broadcasted in over 140 television channels in the US during a period of six months. We start by segmenting, processing, and annotating the closed captions automatically. Next, we focus on the analysis of their linguistic style and on mentions of people using NLP methods. We present a series of key insights about news providers, people in the news, and we discuss the biases that can be uncovered by automatic means. These insights are contrasted by looking at the data from multiple points of view, including qualitative assessment.
We use closed captions provided by a software system that we developed and recently presented in the software demonstration session of SIGIR @cite_7 .
{ "cite_N": [ "@cite_7" ], "mid": [ "2171041826" ], "abstract": [ "IntoNow is a mobile application that provides a second-screen experience to television viewers. IntoNow uses the microphone of the companion device to sample the audio coming from the TV set, and compares it against a database of TV shows in order to identify the program being watched. The system we demonstrate is activated by IntoNow for specific types of shows. It retrieves information related to the program the user is watching by using closed captions, which are provided by each broadcasting network along the TV signal. It then matches the stream of closed captions in real-time against multiple sources of content. More specifically, during news programs it displays links to online news articles and the profiles of people and organizations in the news, and during music shows it displays links to songs. The matching models are machine-learned from editorial judgments, and tuned to achieve approximately 90 precision." ] }
1307.4879
2949472382
We perform an automatic analysis of television news programs, based on the closed captions that accompany them. Specifically, we collect all the news broadcasted in over 140 television channels in the US during a period of six months. We start by segmenting, processing, and annotating the closed captions automatically. Next, we focus on the analysis of their linguistic style and on mentions of people using NLP methods. We present a series of key insights about news providers, people in the news, and we discuss the biases that can be uncovered by automatic means. These insights are contrasted by looking at the data from multiple points of view, including qualitative assessment.
The closed captions are streams of plain text that we process through a series of steps. First, to segment the text stream into sentences we use a series of heuristics, which include detecting a change of speaker, conventionally signaled by a text marker ( @math ), using the presence of full stops, and using time-based rules. We remark that there exist methods to join sentences into passages @cite_6 @cite_8 , but for our analysis we use single sentences as basic units of content, and we only group them when they match the same news item, as described below.
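To make the segmentation heuristics concrete, here is a minimal sketch in Python. It is purely illustrative: the speaker-change marker (assumed here to be the conventional closed-caption token ">>", which the source elides behind @math ) and the five-second gap threshold are assumptions, not values taken from the paper.

```python
import re

SPEAKER_MARKER = ">>"   # assumed conventional closed-caption speaker-change marker
MAX_GAP_SECONDS = 5.0   # assumed time-based rule: a long gap ends a sentence

def segment_captions(lines):
    """Split a stream of (timestamp, text) caption lines into sentences."""
    sentences, current, last_ts = [], [], None
    for ts, text in lines:
        # Time-based rule: flush the open sentence after a long silence.
        if last_ts is not None and ts - last_ts > MAX_GAP_SECONDS and current:
            sentences.append(" ".join(current))
            current = []
        parts = text.split(SPEAKER_MARKER)
        for i, chunk in enumerate(parts):
            if i > 0 and current:  # a marker precedes this chunk: new speaker
                sentences.append(" ".join(current))
                current = []
            chunk = chunk.strip()
            if chunk:
                current.append(chunk)
                # Full-stop rule: flush when the chunk ends a sentence.
                if re.search(r"[.!?]$", chunk):
                    sentences.append(" ".join(current))
                    current = []
        last_ts = ts
    if current:
        sentences.append(" ".join(current))
    return sentences
```

For example, segment_captions([(0.0, "GOOD EVENING."), (1.5, ">> THANKS, TOM.")]) yields two sentences, one per speaker turn.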
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "125151323", "1607558779" ], "abstract": [ "Large volumes of information in video format are being created and made available from a number of application areas, including movies, broadcast TV, CCTV, education video materials, and so on. As this information is increasingly in digital format, this creates the opportunity and then the demand for content-based access to such material. One particular kind of video information that we are interested in is broadcast TV news and in this paper we report on our work on developing content-based access to broadcast TV news. Our work is carried out within the context of the Fischlar system, developed to allow content access to large volumes of digital video information. We report our work on Fischlar-News which provides text search based on closed caption information as well as our on-going work on segmenting TV News programmes and providing personalised intelligent access to TV news stories, on fixed as well as mobile platforms.", "In this paper, we introduce and evaluate two novel approaches, one using video stream and the other using close-caption text stream, for segmenting TV news into stories. The segmentation of the video stream into stories is achieved by detecting anchor person shots and the text stream is segmented into stories using a Latent Dirichlet Allocation (LDA) based approach. The benefit of the proposed LDA based approach is that along with the story segmentation it also provides the topic distribution associated with each segment. We evaluated our techniques on the TRECVid 2003 benchmark database and found that though the individual systems give comparable results, a combination of the outputs of the two systems gives a significant improvement over the performance of the individual systems." ] }
1307.4879
2949472382
We perform an automatic analysis of television news programs, based on the closed captions that accompany them. Specifically, we collect all the news broadcasted in over 140 television channels in the US during a period of six months. We start by segmenting, processing, and annotating the closed captions automatically. Next, we focus on the analysis of their linguistic style and on mentions of people using NLP methods. We present a series of key insights about news providers, people in the news, and we discuss the biases that can be uncovered by automatic means. These insights are contrasted by looking at the data from multiple points of view, including qualitative assessment.
Second, we recognize and extract named entities by using a named entity tagger that works in two steps: entity resolution @cite_13 and "aboutness" ranking @cite_4 . We focus on the person type in the remainder of this paper, and whenever we find a given entity in the closed captions of a news provider, we count a mention of that person by the provider.
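A hedged sketch of the mention-counting step follows; resolve_entities and rank_aboutness are hypothetical stand-ins for the two tagger stages (they are not a published API), and the entity representation is invented for illustration.

```python
from collections import Counter

def count_person_mentions(sentences_by_provider, resolve_entities, rank_aboutness):
    """Count, per news provider, how often each person is mentioned.

    sentences_by_provider: dict mapping a provider name to its caption sentences.
    resolve_entities / rank_aboutness: callables standing in for the two tagger
    stages described above (entity resolution, then "aboutness" ranking).
    """
    mentions = {}
    for provider, sentences in sentences_by_provider.items():
        counter = Counter()
        for sentence in sentences:
            candidates = resolve_entities(sentence)      # step 1
            for entity in rank_aboutness(candidates):    # step 2
                if entity.type == "person":
                    counter[entity.canonical_name] += 1  # one mention
        mentions[provider] = counter
    return mentions
```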
{ "cite_N": [ "@cite_13", "@cite_4" ], "mid": [ "1594128868", "2051082414" ], "abstract": [ "Ambiguity of entity mentions and concept references is a challenge to mining text beyond surface-level keywords. We describe an effective method of disambiguating surface forms and resolving them to Wikipedia entities and concepts. Our method employs an extensive set of features mined from Wikipedia and other large data sources, and combines the features using a machine learning approach with automatically generated training data. Based on a manually labeled evaluation set containing over 1000 news articles, our resolution model has 85 precision and 87.8 recall. The performance is significantly better than three baselines based on traditional context similarities or sense commonness measurements. Our method can be applied to other languages and scales well to new entities and concepts.", "Capturing the \"aboutness\" of documents has been a key research focus throughout the history of automated textual information processing. In this work, we represent aboutness using words and phrases that best reflect the central topics of a document. We present a machine learning approach that learns to score and rank words and phrases in a document according to their relevance to the document. We use implicit user feedback available in search engine click logs to characterize the user-perceived notion of term relevance. Using a small set of manually generated training data, we show that the surrogate training data from click logs correlates well with this data, thus eliminating the need to create data for training manually which is both expensive and fundamentally difficult to obtain for such a task. Further, we use a diverse set of features in our learning model that capitalize heavily on the structural and visual properties of web documents. In our extensive experimentation, we pay particular attention to tail web pages and show that our approach trained on mainly head web pages generalizes and performs well on all kinds of documents. In several evaluation methods using manually generated summaries and term relevance judgments, our system shows 25 improvement over other aboutness solutions." ] }
1307.4879
2949472382
We perform an automatic analysis of television news programs, based on the closed captions that accompany them. Specifically, we collect all the news broadcasted in over 140 television channels in the US during a period of six months. We start by segmenting, processing, and annotating the closed captions automatically. Next, we focus on the analysis of their linguistic style and on mentions of people using NLP methods. We present a series of key insights about news providers, people in the news, and we discuss the biases that can be uncovered by automatic means. These insights are contrasted by looking at the data from multiple points of view, including qualitative assessment.
Third, we apply the Stanford NLP tagger @cite_1 to perform part-of-speech tagging and dependency parsing. Further details on the tag set are available in the manual at http://nlp.stanford.edu/software/dependencies_manual.pdf . As a last step of the text pre-processing, we apply sentiment analysis to each sentence by using SentiStrength @cite_3 .
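As a rough illustration of this stage, the sketch below uses the stanza package, the current Python distribution of the Stanford NLP tools (the paper used the original Java tagger), and hides sentiment scoring behind a placeholder callable, since SentiStrength is a standalone tool without a canonical Python API.

```python
import stanza

# One-time model download: stanza.download("en")
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")

def preprocess(sentence_text, senti_strength_score):
    """POS-tag and dependency-parse one caption sentence, then score sentiment.

    senti_strength_score is a placeholder wrapping the external SentiStrength
    tool; it is assumed to return (positive, negative) strengths on 1..5 scales.
    """
    doc = nlp(sentence_text)
    tokens = [
        (word.text, word.upos, word.head, word.deprel)
        for sent in doc.sentences
        for word in sent.words
    ]
    return tokens, senti_strength_score(sentence_text)
```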
{ "cite_N": [ "@cite_1", "@cite_3" ], "mid": [ "1996430422", "2028904519" ], "abstract": [ "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24 accuracy on the Penn Treebank WSJ, an error reduction of 4.4 on the best previous single automatically learned tagging result.", "A huge number of informal messages are posted every day in social network sites, blogs, and discussion forums. Emotions seem to be frequently important in these texts for expressing friendship, showing social support or as part of online arguments. Algorithms to identify sentiment and sentiment strength are needed to help understand the role of emotion in this informal communication and also to identify inappropriate or anomalous affective utterances, potentially associated with threatening behavior to the self or others. Nevertheless, existing sentiment detection algorithms tend to be commercially oriented, designed to identify opinions about products rather than user behaviors. This article partly fills this gap with a new algorithm, SentiStrength, to extract sentiment strength from informal English text, using new methods to exploit the de facto grammars and spelling styles of cyberspace. Applied to MySpace comments and with a lookup table of term sentiment strengths optimized by machine learning, SentiStrength is able to predict positive emotion with 60.6p accuracy and negative emotion with 72.8p accuracy, both based upon strength scales of 1–5. The former, but not the latter, is better than baseline and a wide range of general machine learning approaches. © 2010 Wiley Periodicals, Inc." ] }
1307.4879
2949472382
We perform an automatic analysis of television news programs, based on the closed captions that accompany them. Specifically, we collect all the news broadcasted in over 140 television channels in the US during a period of six months. We start by segmenting, processing, and annotating the closed captions automatically. Next, we focus on the analysis of their linguistic style and on mentions of people using NLP methods. We present a series of key insights about news providers, people in the news, and we discuss the biases that can be uncovered by automatic means. These insights are contrasted by looking at the data from multiple points of view, including qualitative assessment.
We match the processed captions to recent news stories, which are obtained from a major online news aggregator. Captions are matched within the same genre, e.g., sentences from news programs are matched to online news in the corresponding section of the news aggregator. News items on the website that are older than three days are ignored. The matching task is the same as one described in earlier work, but our approach is based on supervised learning rather than web searches. More details can be found in @cite_7 . The matching is performed in two steps. In the first step, a per-genre classification model trained on thousands of examples labeled by editors is applied. In this model, the two classes are "same story" and "different story", and each example consists of a sentence, a news story, and a class label. The features for the classifier are computed from each sentence-story pair by applying the named entity tagger described in the previous section on both elements of the pair, and then by looking at entity co-occurrences. The models are fine-tuned to have high precision.
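The sketch below illustrates one plausible reading of the first matching step. The concrete features and the choice of logistic regression are our assumptions; the source only states that a per-genre model is trained on editor-labeled sentence/story pairs using entity co-occurrence features.

```python
from sklearn.linear_model import LogisticRegression

def cooccurrence_features(sentence_entities, story_entities):
    """Simple entity co-occurrence features for a (sentence, story) pair."""
    s, t = set(sentence_entities), set(story_entities)
    shared = s & t
    return [
        len(shared),                          # entities in common
        len(shared) / len(s) if s else 0.0,   # fraction of sentence entities covered
        len(shared) / len(t) if t else 0.0,   # fraction of story entities covered
    ]

def train_genre_model(labeled_pairs):
    """labeled_pairs: iterable of (sentence_entities, story_entities, label),
    with label 1 for "same story" and 0 for "different story"."""
    X = [cooccurrence_features(se, te) for se, te, _ in labeled_pairs]
    y = [label for _, _, label in labeled_pairs]
    # Weighting the negative class is one simple way to favor precision.
    return LogisticRegression(class_weight={0: 2.0, 1: 1.0}).fit(X, y)
```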
{ "cite_N": [ "@cite_7" ], "mid": [ "2171041826" ], "abstract": [ "IntoNow is a mobile application that provides a second-screen experience to television viewers. IntoNow uses the microphone of the companion device to sample the audio coming from the TV set, and compares it against a database of TV shows in order to identify the program being watched. The system we demonstrate is activated by IntoNow for specific types of shows. It retrieves information related to the program the user is watching by using closed captions, which are provided by each broadcasting network along the TV signal. It then matches the stream of closed captions in real-time against multiple sources of content. More specifically, during news programs it displays links to online news articles and the profiles of people and organizations in the news, and during music shows it displays links to songs. The matching models are machine-learned from editorial judgments, and tuned to achieve approximately 90 precision." ] }
1307.4567
1532315552
The trend towards highly parallel multi-processing is ubiquitous in all modern computer architectures, ranging from handheld devices to large-scale HPC systems; yet many applications are struggling to fully utilise the multiple levels of parallelism exposed in modern high-performance platforms. In order to realise the full potential of recent hardware advances, a mixed-mode between shared-memory programming techniques and inter-node message passing can be adopted which provides high-levels of parallelism with minimal overheads. For scientific applications this entails that not only the simulation code itself, but the whole software stack needs to evolve. In this paper, we evaluate the mixed-mode performance of PETSc, a widely used scientific library for the scalable solution of partial differential equations. We describe the addition of OpenMP threaded functionality to the library, focusing on sparse matrix-vector multiplication. We highlight key challenges in achieving good parallel performance, such as explicit communication overlap using task-based parallelism, and show how to further improve performance by explicitly load balancing threads within MPI processes. Using a set of matrices extracted from Fluidity, a CFD application code which uses the library as its linear solver engine, we then benchmark the parallel performance of mixed-mode PETSc across multiple nodes on several modern HPC architectures. We evaluate the parallel scalability on Uniform Memory Access (UMA) systems, such as the Fujitsu PRIMEHPC FX10 and IBM BlueGene/Q, as well as a Non-Uniform Memory Access (NUMA) Cray XE6 platform. A detailed comparison is performed which highlights the characteristics of each particular architecture, before demonstrating efficient strong scalability of sparse matrix-vector multiplication with significant speedups over the pure-MPI mode.
Sparse matrix-vector multiplication (SpMV) is one of the most heavily used kernels in scientific computing and has therefore received attention from several groups @cite_3 @cite_13 @cite_7 @cite_4 . Multiple storage formats, optimisation strategies, and even auto-tuning frameworks exist to improve SpMV performance on a wide range of multi-core architectures @cite_13 . On modern HPC architectures, hybrid programming methods are being investigated to better utilise the hierarchical hardware design by reducing communication needs and memory consumption and by improving load balance @cite_4 . In particular, task-based threading methods have been highlighted by several researchers, where dedicated threads are used to overlap MPI communication with local work @cite_5 @cite_4 @cite_14 .
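To make the overlap idea concrete, here is a minimal Python/numpy sketch of the standard local/off-process split used in distributed SpMV (production libraries such as PETSc implement this in C; start_halo_exchange is a placeholder for a non-blocking MPI halo exchange, not a real API):

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """Plain CSR sparse matrix-vector product y = A @ x."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

def overlapped_spmv(A_local, A_remote, x_local, start_halo_exchange):
    """Overlap communication with computation in distributed SpMV.

    A_local / A_remote are (indptr, indices, data) CSR triples for the columns
    owned by this rank and by other ranks, respectively. start_halo_exchange()
    stands in for posting non-blocking MPI sends/receives; its .wait() returns
    the off-process entries of x once communication completes.
    """
    handle = start_halo_exchange(x_local)   # post the halo exchange
    y = csr_spmv(*A_local, x_local)         # compute on local data meanwhile
    x_halo = handle.wait()                  # finish communication
    y += csr_spmv(*A_remote, x_halo)        # add the off-process contribution
    return y
```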
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_3", "@cite_5", "@cite_13" ], "mid": [ "1992303919", "", "2128853364", "1975116854", "", "2103877122" ], "abstract": [ "We evaluate optimized parallel sparse matrix-vector operations for several representative application areas on widespread multicore-based cluster configurations. First the single-socket baseline performance is analyzed and modeled with respect to basic architectural properties of standard multicore chips. Beyond the single node, the performance of parallel sparse matrix-vector operations is often limited by communication overhead. Starting from the observation that nonblocking MPI is not able to hide communication cost using standard MPI implementations, we demonstrate that explicit overlap of communication and computation can be achieved by using a dedicated communication thread, which may run on a virtual core. Moreover we identify performance benefits of hybrid MPI OpenMP programming due to improved load balancing even without explicit communication overlap. We compare performance results for pure MPI, the widely used \"vector-like\" hybrid programming strategies, and explicit overlap on a modern multicore-based cluster and a Cray XE6 system.", "", "Sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In contrast to the uniform regularity of dense linear algebra, sparse operations encounter a broad spectrum of matrices ranging from the regular to the highly irregular. Harnessing the tremendous potential of throughput-oriented processors for sparse operations requires that we expose substantial fine-grained parallelism and impose sufficient regularity on execution paths and memory access patterns. We explore SpMV methods that are well-suited to throughput-oriented architectures like the GPU and which exploit several common sparsity classes. The techniques we propose are efficient, successfully utilizing large percentages of peak bandwidth. Furthermore, they deliver excellent total throughput, averaging 16 GFLOP s and 10 GFLOP s in double precision for structured grid and unstructured mesh matrices, respectively, on a GeForce GTX 285. This is roughly 2.8 times the throughput previously achieved on Cell BE and more than 10 times that of a quad-core Intel Clovertown system.", "In this paper, we revisit the performance issues of the widely used sparse matrix-vector multiplication (SpMxV) kernel on modern microarchitectures. Previous scientific work reports a number of different factors that may significantly reduce performance. However, the interaction of these factors with the underlying architectural characteristics is not clearly understood, a fact that may lead to misguided, and thus unsuccessful attempts for optimization. In order to gain an insight into the details of SpMxV performance, we conduct a suite of experiments on a rich set of matrices for three different commodity hardware platforms. In addition, we investigate the parallel version of the kernel and report on the corresponding performance results and their relation to each architecture's specific multithreaded configuration. Based on our experiments, we extract useful conclusions that can serve as guidelines for the optimization process of both single and multithreaded versions of the kernel.", "", "We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. 
To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms." ] }
1307.4798
2159801337
As the rate of content production grows, we must make a staggering number of daily decisions about what information is worth acting on. For any flourishing online social media system, users can barely keep up with the new content shared by friends. How does the user-interface design help or hinder users' ability to find interesting content? We analyze the choices people make about which information to propagate on the social media sites Twitter and Digg. We observe regularities in behavior which can be attributed directly to cognitive limitations of humans, resulting from the different visibility policies of each site. We quantify how people divide their limited attention among competing sources of information, and we show how the user-interface design can mediate information spread.
The relation of human attention to individual and consumer choice has been well studied over many years, although generally in the context of controlled laboratory experiments @cite_1 @cite_21 @cite_0 @cite_18 . Attention has also been invoked to explain online social behavior @cite_10 . For example, the collective shifts in popularity between events and topics over time have been referred to as "collective attention" @cite_2 @cite_15 @cite_11 @cite_19 @cite_14 @cite_9 . Previous efforts have been made to better understand how individuals utilize their perceptive abilities to process incoming information, such as @cite_6 @cite_8 , which showed that users concentrate on content near the top of the screen. Fewer studies have addressed divided attention, the phenomenon that as the number of information sources grows, people allocate less attention to each source. A study of conversations between Twitter users found that people limit themselves to 150 or so conversation partners @cite_12 . Twitter users have also been shown to divide their attention among all incoming messages, regardless of the content or quality of the underlying messages @cite_22 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "", "2041161771", "", "2404406225", "2072606289", "2147775745", "2094136133", "2019658628", "2165757809", "1615730030", "2058465497", "2007590415", "2014790997", "1928223220", "2105549576" ], "abstract": [ "", "Online popularity has an enormous impact on opinions, culture, policy, and profits. We provide a quantitative, large scale, temporal analysis of the dynamics of online content popularity in two massive model systems: the Wikipedia and an entire country's Web space. We find that the dynamics of popularity are characterized by bursts, displaying characteristic features of critical systems such as fat-tailed distributions of magnitude and interevent time. We propose a minimal model combining the classic preferential popularity increase mechanism with the occurrence of random popularity shifts due to exogenous factors. The model recovers the critical features observed in the empirical analysis of the systems analyzed here, highlighting the key factors needed in the description of popularity dynamics.", "", "Microblogging environments such as Twitter present a modality for interacting with information characterized by exposure to information “streams”. In this work, we examine what information in that stream is attended to, and how that attention corresponds to other aspects of microblog consumption and participation. To do this, we measured eye gaze, memory for content, interest ratings, and intended behavior of active Twitter users as they read their tweet steams. Our analyses focus on three sets of alignments: first, whether attention corresponds to other measures of user cognition such as memory (e.g., do people even remember what they attend to?); second, whether attention corresponds to behavior (e.g., are users likely to retweet content that is given the most attention); and third, whether attention corresponds to other attributes of the content and its presentation (e.g., do links attract attention?). We show a positive but imperfect alignment between user attention and other measures of user cognition like memory and interest, and between attention and behaviors like retweeting. To the third alignment, we show that the relationship between attention and attributes of tweets, such as whether it contains a link or is from a friend versus an organization, are complicated and in some cases counterintuitive. We discuss findings in relation to large scale phenomena like information diffusion and also suggest design directions to help maximize user attention in microblog environments.", "The wide adoption of social media has increased the competition among ideas for our finite attention. We employ a parsimonious agent-based model to study whether such a competition may affect the popularity of different memes, the diversity of information we are exposed to, and the fading of our collective interests for specific topics. Agents share messages on a social network but can only pay attention to a portion of the information they receive. In the emerging dynamics of information diffusion, a few memes go viral while most do not. The predictions of our model are consistent with empirical data from Twitter, a popular microblogging platform. 
Surprisingly, we can explain the massive heterogeneity in the popularity and persistence of memes as deriving from a combination of the competition for our limited attention and the structure of the social network, without the need to assume different intrinsic values among ideas.", "Limited consumer attention limits product market competition: prices are stochastically lower the more attention is paid. Ads compete to be the lowest price in a sector but compete for attention with ads from other sectors: equilibrium ad shares follow a CES form. When a sector gets more profitable, its advertising expands: others lose ad market share. The "information hump" shows highest ad levels for intermediate attention levels. The Information Age takes off when the number of viable sectors grows, but total ad volume reaches an upper limit. Overall, advertising is excessive, though the allocation across sectors is optimal.", "", "An understanding of how people allocate their visual attention when viewing Web pages is very important for Web authors, interface designers, advertisers and others. Such knowledge opens the door to a variety of innovations, ranging from improved Web page design to the creation of compact, yet recognizable, visual representations of long pages. We present an eye-tracking study in which 20 users viewed 361 Web pages while engaged in information foraging and page recognition tasks. From this data, we describe general location-based characteristics of visual attention for Web pages dependent on different tasks and demographics, and generate a model for predicting the visual attention that individual page elements may receive. Finally, we introduce the concept of fixation impact, a new method for mapping gaze data to visual scenes that is motivated by findings in vision research.", "While limited attention has been analyzed in a variety of economic and psychological settings, its impact on financial markets is not well understood. In this paper, we examine individual NYSE specialist portfolios and test whether liquidity provision is affected as specialists allocate their attention across stocks. Our results indicate that specialists allocate effort toward their most active stocks during periods of increased activity, resulting in less frequent price improvement and increased transaction costs for their remaining assigned stocks. Thus, the allocation of effort due to limited attention has a significant impact on liquidity provision in securities markets.", "In our modern society, people are daily confronted with an increasing amount of information of any kind. As a consequence, the attention capacities and processing abilities of individuals often saturate. People, therefore, have to select which elements of their environment they are focusing on, and which are ignored. Moreover, recent work shows that individuals are naturally attracted by what other people are interested in. This imitative behaviour gives rise to various herding phenomena, such as the spread of ideas or the outbreak of commercial trends, turning the understanding of collective attention an important issue of our society. In this article, we propose an individual-based model of collective attention. In a situation where a group of people is facing a steady flow of novel information, the model naturally reproduces the log-normal distribution of the attention each news item receives, in agreement with empirical observations. 
Furthermore, the model predicts that the popularity of a news item strongly depends on the number of concurrent news appearing at approximately the same moment. We confirmed this prediction by means of empirical data extracted from the website digg.com. This result can be interpreted from the point of view of competition between the news for the limited attention capacity of individuals. The proposed model, therefore, provides new elements to better understand the dynamics of collective attention in an information-rich world.", "The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among 1 million users of an interactive web site, digg.com, devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades.", "Online peer production systems have enabled people to coactively create, share, classify, and rate content on an unprecedented scale. This paper describes strong macroscopic regularities in how people contribute to peer production systems, and shows how these regularities arise from simple dynamical rules. First, it is demonstrated that the probability a person stops contributing varies inversely with the number of contributions he has made. This rule leads to a power law distribution for the number of contributions per person in which a small number of very active users make most of the contributions. The rule also implies that the power law exponent is proportional to the effort required to contribute, as justified by the data. Second, the level of activity per topic is shown to follow a lognormal distribution generated by a stochastic reinforcement mechanism. A small number of very popular topics thus accumulate the vast majority of contributions. These trends are demonstrated to hold across hundreds of millions of contributions to four disparate peer production systems of differing scope, interface style, and purpose.", "If the Web and the Net can be viewed as spaces in which we will increasingly live our lives, the economic laws we will live under have to be natural to this new space.", "Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100–200 stable relationships. Thus, the ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. 
We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.", "We show through an analysis of a massive data set from YouTube that the productivity exhibited in crowdsourcing exhibits a strong positive dependence on attention, measured by the number of downloads. Conversely, a lack of attention leads to a decrease in the number of videos uploaded and the consequent drop in productivity, which in many cases asymptotes to no uploads whatsoever. Moreover, short-term contributors compare their performance to the average contributor's performance while long-term contributors compare it to their own media." ] }
1307.4046
2950526613
We present the design and implementation of PeerShare, a system that can be used by applications to securely distribute sensitive data to the social contacts of a user. PeerShare incorporates a generic framework that allows different applications to distribute data with different security requirements. By using interfaces available from existing popular social networks, PeerShare is designed to be easy to use for both end users and developers of applications. PeerShare can be used to distribute shared keys, public keys, and any other data that need to be distributed with authenticity and confidentiality guarantees to an authorized set of recipients, specified in terms of social relationships. We have already used PeerShare in three different applications and plan to make it available for developers.
@cite_1 presents a generic cryptographic framework that supports establishing social relations and sharing resources while securing user anonymity, secrecy of resources, privacy of social relations, and access control. Unlike PeerShare, which uses sharing policies specified via existing social networks, it requires users to explicitly establish social relationships with other users, which makes it less intuitive for users in real deployments.
{ "cite_N": [ "@cite_1" ], "mid": [ "2400522057" ], "abstract": [ "We present a cryptographic framework to achieve access control, privacy of social relations, secrecy of resources, and anonymity of users in social networks. We illustrate our technique on a core API for social networking, which includes methods for establishing social relations and for sharing resources. The cryptographic protocols implementing these methods use pseudonyms to hide user identities, signatures on these pseudonyms to establish social relations, and zero-knowledge proofs of knowledge of such signatures to demonstrate the existence of social relations without sacrificing user anonymity. As we do not put any constraints on the underlying social network, our framework is generally applicable and, in particular, constitutes an ideal plug-in for decentralized social networks. We analyzed the security of our protocols by developing formal definitions of the aforementioned security properties and by verifying them using ProVerif, an automated theorem prover for cryptographic protocols. Finally, we built a prototypical implementation and conducted an experimental evaluation to demonstrate the efficiency and the scalability of our framework." ] }
1307.3900
2952266031
We construct frames of wavepackets produced by parabolic dilation, rotation and translation of (a finite sum of) Gaussians and give asymptotics on the analogue of Daubechies frame criterion. We show that the coefficients in the corresponding approximate expansion decay fast away from the wavefront set of the original data.
The Gaussian wavepackets that we treat in this article share the geometric features of other parabolic wavepackets, including curvelets and shearlets. Specifically, the atoms are concentrated on elongated regions obeying the relation @math @math and oscillate across their ridge. The packets in this work are produced exactly by applying a parabolic dilation, rotation and translation to a single window function. In contrast, the classic Gaussian wavepackets from @cite_16 @cite_39 @cite_6 use an additional frequency cut-off to enforce vanishing moments. The curvelets from @cite_5 are produced by parabolic dilation and rotation of a slightly different window at each scale. In the case of shearlets, rotations are replaced by the shear maps @math . Wave packets based on tilings of frequency space are discussed in @cite_35 @cite_20 @cite_41 @cite_11 .
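For concreteness, a standard way to write such a parabolic family (a generic parametrization of curvelet-type systems; the exact normalization used in the paper may differ) is \[ \varphi_{a,\theta,b}(x) = a^{-3/4} \, \varphi\left( D_{1/a} R_{-\theta} (x - b) \right), \qquad D_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \] where 0 < a ≤ 1 is the scale, θ the orientation, and b ∈ R^2 the location; the factor a^{-3/4} gives L^2-normalization, and the anisotropic dilation D_a encodes the parabolic relation width ≈ length^2.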
{ "cite_N": [ "@cite_35", "@cite_41", "@cite_6", "@cite_39", "@cite_5", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "1964102307", "2963819708", "2007849448", "1986634257", "2069912449", "", "", "1971692776" ], "abstract": [ "Abstract In this article, we develop a general method for constructing wavelets | det A j | 1 2 ψ ( A j x − x j , k ) : j ∈ J , k ∈ K on irregular lattices of the form X = x j , k ∈ R d : j ∈ J , k ∈ K , and with an arbitrary countable family of invertible d × d matrices A j ∈ G L d ( R ) : j ∈ J that do not necessarily have a group structure. This wavelet construction is a particular case of general atomic frame decompositions of L 2 ( R d ) developed in this article, that allow other time frequency decompositions such as nonharmonic Gabor frames with nonuniform covering of the Euclidean space R d . Possible applications include image and video compression, speech coding, image and digital data transmission, image analysis, estimations and detection, and seismology.", "Abstract In this article we construct affine systems that provide a simultaneous atomic decomposition for a wide class of functional spaces including the Lebesgue spaces L p ( R d ) , 1 p + ∞ . The novelty and difficulty of this construction is that we allow for non-lattice translations. We prove that for an arbitrary expansive matrix A and any set Λ —satisfying a certain spreadness condition but otherwise irregular—there exists a smooth window whose translations along the elements of Λ and dilations by powers of A provide an atomic decomposition for the whole range of the anisotropic Triebel–Lizorkin spaces. The generating window can be either chosen to be bandlimited or to have compact support. To derive these results we start with a known general “painless” construction that has recently appeared in the literature. We show that this construction extends to Besov and Triebel–Lizorkin spaces by providing adequate dual systems.", "We discuss how techniques from multiresolution analysis and phase space transforms can be exploited in solving a general class of evolution equations with limited smoothness. We have wave propagation in media of limited smoothness in mind. The frame that appears naturally in this context belongs to the family of frames of curvelets. The construction considered here implies a full-wave description on the one hand but reveals the geometrical properties derived from the propagation of singularities on the other hand. The approach and analysis we present (i) aids in the understanding of the notion of scale in the wavefield and how this interacts with the configuration or medium, (ii) admits media of limited smoothness, viz. with Holder regularity s ≥ 2, and (iii) suggests a novel computational algorithm that requires solving for the mentioned geometry on the one hand and solving a matrix Volterra integral equation of the second kind on the other hand. The Volterra equation can be solved by recursion—as in the...", "", "This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C 2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. 
For instance, curvelets obey a parabolic scaling relation which says that at scale 2 -j , each element has an envelope that is aligned along a ridge of length 2 -j 2 and width 2 -j . We prove that curvelets provide an essentially optimal representation of typical objects f that are C 2 except for discontinuities along piecewise C 2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f C n obtained by selecting the n largest terms in the curvelet series obeys ∥f - f C n ∥ 2 L2 ≤ C . n -2 . (log n) 3 , n → ∞. This rate of convergence holds uniformly over a class of functions that are C 2 except for discontinuities along piecewise C 2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n -1 as n → ∞, which is considerably worse than the optimal behavior.", "", "", "In this paper we present a construction of frames generated by a single band-limited function for decomposition smoothness spaces on ( R ^d ) of modulation and Triebel–Lizorkin type. A perturbation argument is then used to construct compactly supported frame generators." ] }
1307.3900
2952266031
We construct frames of wavepackets produced by parabolic dilation, rotation and translation of (a finite sum of) Gaussians and give asymptotics on the analogue of Daubechies frame criterion. We show that the coefficients in the corresponding approximate expansion decay fast away from the wavefront set of the original data.
While the differences between the various parabolic wavepackets are very relevant in practice, their asymptotic properties are similar. The Gaussian wavepackets in this work are an example of the curvelet molecules from @cite_22 and therefore they provide the same sparse approximation properties as curvelets for functions with discontinuities along smooth edges @cite_5 . The notion of curvelet molecules and the related notion of shearlet molecules in @cite_4 are in turn particular cases of the notion of parabolic molecules, as recently introduced in @cite_10 . In that article it is shown that all expansions with adequate systems of parabolic molecules share similar asymptotic properties. As a consequence, our results on the decay of frame coefficients away from the wavefront set can be transferred to other parabolic systems.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_22", "@cite_10" ], "mid": [ "2069912449", "2014172718", "2168141504", "" ], "abstract": [ "This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C 2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2 -j , each element has an envelope that is aligned along a ridge of length 2 -j 2 and width 2 -j . We prove that curvelets provide an essentially optimal representation of typical objects f that are C 2 except for discontinuities along piecewise C 2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f C n obtained by selecting the n largest terms in the curvelet series obeys ∥f - f C n ∥ 2 L2 ≤ C . n -2 . (log n) 3 , n → ∞. This rate of convergence holds uniformly over a class of functions that are C 2 except for discontinuities along piecewise C 2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n -1 as n → ∞, which is considerably worse than the optimal behavior.", "Traditional methods of time-frequency and multiscale analysis, such as wavelets and Gabor frames, have been successfully employed for representing most classes of pseudodifferential operators. However, these methods are not equally effective in dealing with Fourier Integral Operators in general. In this article, we show that the shearlets, recently introduced by the authors and their collaborators, provide very efficient representations for a large class of Fourier Integral Operators. The shearlets are an affine-like system of well-localized waveforms at various scales, locations and orientations, which are particularly efficient in representing anisotropic functions. Using this approach, we prove that the matrix representation of a Fourier Integral Operator with respect to a Parseval frame of shearlets is sparse and well-organized. This fact recovers a similar result recently obtained by Candes and Demanet using curvelets, which illustrates the benefits of directional multiscale representations (such as curvelets and shearlets) in the study of those functions and operators where traditional multiscale methods are unable to provide the appropriate geometric analysis in the phase space.", "This paper argues that curvelets provide a powerful tool for representing very general linear symmetric systems of hyperbolic differential equations. Curvelets are a recently developed multiscale system [7, 9] in which the elements are highly anisotropic at fine scales, with effective support shaped according to the parabolic scaling principle width length2 at fine scales. We prove that for a wide class of linear hyperbolic differential equations, the curvelet representation of the solution operator is both optimally sparse and well organized. 
* It is sparse in the sense that the matrix entries decay nearly exponentially fast (i.e., faster than any negative polynomial) * and well organized in the sense that the very few nonnegligible entries occur near a few shifted diagonals. Indeed, we show that the wave group maps each curvelet onto a sum of curveletlike waveforms whose locations and orientations are obtained by following the different Hamiltonian flows - hence the diagonal shifts in the curvelet representation. A physical interpretation of this result is that curvelets may be viewed as coherent waveforms with enough frequency localization so that they behave like waves but at the same time, with enough spatial localization so that they simultaneously behave like particles.", "" ] }
1307.3900
2952266031
We construct frames of wavepackets produced by parabolic dilation, rotation and translation of (a finite sum of) Gaussians and give asymptotics on the analogue of Daubechies frame criterion. We show that the coefficients in the corresponding approximate expansion decay fast away from the wavefront set of the original data.
Closest to our work is the construction of curvelet-type expansions in @cite_36 . Whereas we construct frames by using an analogue of the Daubechies criterion for wavelets, the curvelet-type frames for @math in @cite_36 are obtained by using a perturbation argument. The main focus of that work is the production of compactly-supported atoms, but the results also apply to sums of modulated Gaussians. As mentioned in Section 6 of @cite_36 , the perturbation argument yields a frame where each element is a linear combination of dilated, rotated and translated Gaussians. However, the coefficients in that linear combination depend on the indices of the particular frame element, although the number of terms is proved to be uniformly bounded. By contrast, the packets in this work consist exactly of parabolic dilations, rotations and translations of a single function, which may be taken to be a fixed linear combination of Gaussians.
{ "cite_N": [ "@cite_36" ], "mid": [ "2034276301" ], "abstract": [ "We study a flexible method for constructing curvelet-type frames. These curvelet-type systems have the same sparse representation properties as curvelets for appropriate classes of smooth functions, and the flexibility of the method allows us to give a constructive description of how to construct curvelet-type systems with a prescribed nature such as compact support in direct space. The method consists of using the machinery of almost diagonal matrices to show that a system of curvelet molecules which is sufficiently close to curvelets constitutes a frame for curvelet-type spaces. Such a system of curvelet molecules can then be constructed using finite linear combinations of shifts and dilates of a single function with sufficient smoothness and decay." ] }
1307.3900
2952266031
We construct frames of wavepackets produced by parabolic dilation, rotation and translation of (a finite sum of) Gaussians and give asymptotics on the analogue of Daubechies frame criterion. We show that the coefficients in the corresponding approximate expansion decay fast away from the wavefront set of the original data.
The Gaussian wavepackets with parabolic scaling in this work provide an optimal representation of functions with singularities along smooth curves @cite_5 , sparsify Fourier wave propagators @cite_22 , and are particularly useful in geophysics @cite_3 . For other kinds of applications, other geometries are more adequate. For example, in @cite_27 it is shown that a dictionary of wave atoms with the scaling @math @math is optimal for the representation of oscillatory patterns (texture).
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_22", "@cite_3" ], "mid": [ "2069912449", "2162547327", "2168141504", "2100864161" ], "abstract": [ "This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C 2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2 -j , each element has an envelope that is aligned along a ridge of length 2 -j 2 and width 2 -j . We prove that curvelets provide an essentially optimal representation of typical objects f that are C 2 except for discontinuities along piecewise C 2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f C n obtained by selecting the n largest terms in the curvelet series obeys ∥f - f C n ∥ 2 L2 ≤ C . n -2 . (log n) 3 , n → ∞. This rate of convergence holds uniformly over a class of functions that are C 2 except for discontinuities along piecewise C 2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n -1 as n → ∞, which is considerably worse than the optimal behavior.", "We introduce atoms\" as a variant of 2D wavelet packets obeying the parabolic scaling wavelength (diameter) 2 . We prove that warped oscillatory functions, a toy model for texture, have a signicantly sparser expansion in wave atoms than in other xed standard representations like wavelets, Gabor atoms, or curvelets. We propose a novel algorithm for a tight frame of wave atoms with redundancy two, directly in the frequency plane, by the \" technique. We also propose variants of the basic transform for applications in image processing, including an orthonormal basis, and a shift-invariant tight frame with redundancy four. Sparsity and denoising experiments on both seismic and ngerprint images demonstrate the potential of the tool introduced.", "This paper argues that curvelets provide a powerful tool for representing very general linear symmetric systems of hyperbolic differential equations. Curvelets are a recently developed multiscale system [7, 9] in which the elements are highly anisotropic at fine scales, with effective support shaped according to the parabolic scaling principle width length2 at fine scales. We prove that for a wide class of linear hyperbolic differential equations, the curvelet representation of the solution operator is both optimally sparse and well organized. * It is sparse in the sense that the matrix entries decay nearly exponentially fast (i.e., faster than any negative polynomial) * and well organized in the sense that the very few nonnegligible entries occur near a few shifted diagonals. Indeed, we show that the wave group maps each curvelet onto a sum of curveletlike waveforms whose locations and orientations are obtained by following the different Hamiltonian flows - hence the diagonal shifts in the curvelet representation. 
A physical interpretation of this result is that curvelets may be viewed as coherent waveforms with enough frequency localization so that they behave like waves but at the same time, with enough spatial localization so that they simultaneously behave like particles.", "Curvelets are plausible candidates for simultaneous compression of seismic data, their images, and the imaging operator itself. We show that with curvelets, the leading-order approximation in angular frequency, horizontal wavenumber, and migrated location to common-offset (CO) Kirchhoff depth migration becomes a simple transformation of coordinates of curvelets in the data, combined with amplitude scaling. This transformation is calculated using map migration, which employs the local slopes from the curvelet decomposition of the data. Because the data can be compressed using curvelets, the transformation needs to be calculated for relatively few curvelets only. Numerical examples for homogeneous media show that using the leading-order approximation only provides a good approximation to CO migration for moderate propagation times. As the traveltime increases and rays diverge beyond the spatial support of a curvelet, however, the leading-order approximation is no longer accurate enough. This shows the need for correction beyond leading order, even for homogeneous media." ] }
1307.3621
2952349643
We consider a problem which has received considerable attention in the systems literature because of its applications to routing in delay tolerant networks and replica placement in distributed storage systems. In abstract terms the problem can be stated as follows: Given a random variable @math generated by a known product distribution over @math and a target value @math , output a non-negative vector @math , with @math , which maximizes the probability of the event @math . This is a challenging non-convex optimization problem for which even computing the value @math of a proposed solution vector @math is #P-hard. We provide an additive EPTAS for this problem which, for constant-bounded product distributions, runs in @math time and outputs an @math -approximately optimal solution vector @math for this problem. Our approach is inspired by, and extends, recent structural results from the complexity-theoretic study of linear threshold functions. Furthermore, in spite of the objective function being non-smooth, we give a unicriterion PTAS while previous work for such objective functions has typically led to a bicriterion PTAS. We believe our techniques may be applicable to get unicriterion PTAS for other non-smooth objective functions.
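To make the objective concrete, here is a minimal brute-force Python sketch that evaluates the success probability of a candidate vector by enumerating all outcomes of the product distribution; the exponential enumeration shows why naive exact evaluation is intractable (the names probs, p and theta are illustrative, not from the paper):

import itertools

def success_probability(probs, p, theta):
    """Pr[sum_i p[i]*X[i] >= theta], where X[i] ~ Bernoulli(probs[i])
    independently. Brute force over all 2^n outcomes -- tiny n only."""
    n = len(probs)
    total = 0.0
    for x in itertools.product((0, 1), repeat=n):
        pr = 1.0
        for xi, qi in zip(x, probs):
            pr *= qi if xi else (1.0 - qi)
        if sum(pi * xi for pi, xi in zip(p, x)) >= theta:
            total += pr
    return total

# Three components each available with probability 0.6, an allocation p
# with ||p||_1 <= 1, and target theta = 0.5.
print(success_probability([0.6, 0.6, 0.6], [0.5, 0.3, 0.2], 0.5))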
Previous Work on the Problem. The stochastic design problem (P) stated above was formulated explicitly in the work of @cite_4 . That work was motivated by the problem of routing in Delay Tolerant Networks @cite_44 . These networks are characterized by a lack of consistent end-to-end paths, due to interruptions that may be either planned or unplanned, and selecting routing paths is considered to be one of the most challenging problems. The authors of @cite_4 reduce the route selection problem to Problem (P) in a range of settings of interest, and study the structure of the optimal partition as well as its computational complexity, albeit with inconclusive theoretical results.
{ "cite_N": [ "@cite_44", "@cite_4" ], "mid": [ "2162076967", "2096841105" ], "abstract": [ "We formulate the delay-tolerant networking routing problem, where messages are to be moved end-to-end across a connectivity graph that is time-varying but whose dynamics may be known in advance. The problem has the added constraints of finite buffers at each node and the general property that no contemporaneous end-to-end path may ever exist. This situation limits the applicability of traditional routing approaches that tend to treat outages as failures and seek to find an existing end-to-end path. We propose a framework for evaluating routing algorithms in such environments. We then develop several algorithms and use simulations to compare their performance with respect to the amount of knowledge they require about network topology. We find that, as expected, the algorithms using the least knowledge tend to perform poorly. We also find that with limited additional knowledge, far less than complete global knowledge, efficient algorithms can be constructed for routing in such environments. To the best of our knowledge this is the first such investigation of routing issues in DTNs.", "We consider the problem of routing in a delay tolerant network (DTN) in the presence of path failures. Previous work on DTN routing has focused on using precisely known network dynamics, which does not account for message losses due to link failures, buffer overruns, path selection errors, unscheduled delays, or other problems. We show how to split, replicate, and erasure code message fragments over multiple delivery paths to optimize the probability of successful message delivery. We provide a formulation of this problem and solve it for two cases: a 0 1 (Bernoulli) path delivery model where messages are either fully lost or delivered, and a Gaussian path delivery model where only a fraction of a message may be delivered. Ideas from the modern portfolio theory literature are borrowed to solve the underlying optimization problem. Our approach is directly relevant to solving similar problems that arise in replica placement in distributed file systems and virtual node placement in DHTs. In three different simulated DTN scenarios covering a wide range of applications, we show the effectiveness of our approach in handling failures." ] }
1307.3184
2111875278
Current discrete randomness and information conservation inequalities are over total recursive functions, i.e. restricted to deterministic processing. This restriction implies that an algorithm can break algorithmic randomness conservation inequalities. We address this issue by proving tight bounds of randomness and information conservation with respect to recursively enumerable transformations, i.e. processing by algorithms. We also show conservation of randomness of finite strings with respect to enumerable distributions, i.e. semicomputable semi-measures.
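For orientation, the deterministic conservation inequality that this line of work starts from can be stated in one line; this is the standard prefix-complexity formulation (my restatement, not quoted from the paper):

\[ K\bigl(f(x)\bigr) \le K(x) + K(f) + O(1) \qquad \text{for total computable } f, \]

so deterministic processing can increase the algorithmic information in @math by at most the complexity of the transformation itself; the paper's contribution is to prove analogous tight bounds when the transformation is only recursively enumerable.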
This work resulted from my trip to Montpellier with Alexander Shen and Péter Gács. Kolmogorov complexity was introduced independently in @cite_2 @cite_0 @cite_3 . For a detailed history of Algorithmic Information Theory, we refer to @cite_6 . @cite_5 introduced laws of information non-growth over deterministic functions, which were later revisited in @cite_1 . The definition of @math and the theorem rely on modified arguments from Section 2 of @cite_1 . An extension of rarity to semi-measures can be found in the recent work of @cite_4 and can also be seen in @cite_1 . @cite_8 contains an extended survey of randomness conservation inequalities and also describes properties of the rarity term @math used in this article.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_2", "@cite_5" ], "mid": [ "2052686417", "32115684", "2041517255", "1638203394", "2020311636", "", "2005097301", "123681027" ], "abstract": [ "The notion of Kolmogorov-Martin-Lof Random sequences is extended from computable to enumerable distributions. This allows definitions of various other properties, such as mutual information in infinite sequences. Enumerable distributions (as well as distributions faced in some finite multi-party settings) are semi measures, handling those requires care.", "", "The article further develops Kolmogorov's algorithmic complexity theory. The definition of randomness is modified to satisfy strong invariance properties (conservation inequalities). This allows definitions of concepts such as mutual information in individual infinite sequences. Applications to several areas, like probability theory, theory of algorithms, intuitionistic logic are considered. These theories are simplified substantially with the postulate that the objects they consider are independent of (have small mutual information with) any sequence specified by a mathematical property.", "The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field. Journal of Symbolic Logic Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Loef tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Included are also numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, KolmogorovLoveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information and others.", "A new definition of program-size complexity is made. H(A,B C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B A) - 0(1). 
Also, if a program of length k is assigned measure 2 -k, then H(A) = -log2 (the probability that the standard universal computer will calculate A) - - 0(1).", "", "A method and apparatus for cutting resilient foamed synthetic plastics filter material by pressing a die against a block of such material to provide cuts which are interleaved extending from opposite faces of the block the cuts at each such face being joined by a curved cut conforming to the desired exterior surface shape of the cut block when it is stretched to provide a corrugated configuration.", "" ] }
1307.3195
2129816063
In this paper we propose an architecture for specifying the interaction of non-player characters (NPCs) in the game-world in a way that abstracts common tasks in four main conceptual components, namely perception, deliberation, control, action. We argue that this architecture, inspired by AI research on autonomous agents and robots, can offer a number of benefits in the form of abstraction, modularity, re-usability and higher degrees of personalization for the behavior of each NPC. We also show how this architecture can be used to tackle a simple scenario related to the navigation of NPCs under incomplete information about the obstacles that may obstruct the various way-points in the game, in a simple and effective way.
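To make the four conceptual components concrete, here is a minimal, self-contained Python sketch of the perception-deliberation-control-action loop for way-point navigation with a blocked way-point; every class and method name is an illustrative placeholder, not the paper's actual API:

class World:
    """Toy world: the NPC walks along a line toward position 10."""
    def __init__(self):
        self.npc_pos = 0
        self.blocked = {4}              # way-points known to be obstructed

    def observations(self):
        return {"pos": self.npc_pos, "blocked": set(self.blocked)}

    def apply(self, step):
        self.npc_pos += step

class NPC:
    GOAL = 10

    def perceive(self, world):          # perception
        return world.observations()

    def deliberate(self, percepts):     # deliberation
        # Head for the next way-point unless it is believed blocked.
        nxt = percepts["pos"] + 1
        return nxt if nxt not in percepts["blocked"] else percepts["pos"] + 2

    def control(self, target, percepts):  # control
        return target - percepts["pos"]   # a relative move command

    def act(self, command, world):      # action
        world.apply(command)

    def tick(self, world):
        percepts = self.perceive(world)
        target = self.deliberate(percepts)
        self.act(self.control(target, percepts), world)

world, npc = World(), NPC()
while world.npc_pos < NPC.GOAL:
    npc.tick(world)
print("reached", world.npc_pos)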
There is a variety of work that aims for action-based AI for NPCs. Traditionally, a combination of scripts and finite state machines is used in interactive games for controlling NPCs. These methods, even if fairly limited, allow the game designer to control every aspect of the NPCs' actions. This approach has been employed in different types of successful videogames such as Role Playing Games (RPG) or First Person Shooters (FPS). Scripts are written off-line in a high-level language and are used to define simple behaviors for NPCs. Procedural script generation has been proposed in @cite_5 by using simple pattern templates which are tuned and combined by hand. Complex behaviors can be developed in a short amount of time, but many of the intricacies of the classical scripting approach remain unsolved, and it remains difficult to manage the NPCs as the complexity of the virtual world increases.
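For contrast with the agent architecture sketched above, a bare-bones finite state machine of the kind traditionally used to control NPCs can be written in a few lines of Python (states, thresholds and transitions are invented for illustration):

# Minimal NPC finite state machine: state -> rule(distance) -> next state.
TRANSITIONS = {
    "idle":   lambda d: "chase" if d < 20 else "idle",
    "chase":  lambda d: "attack" if d < 2 else ("idle" if d > 30 else "chase"),
    "attack": lambda d: "attack" if d < 2 else "chase",
}

state = "idle"
for dist_to_player in [25, 15, 5, 1, 1, 40]:
    state = TRANSITIONS[state](dist_to_player)
    print(dist_to_player, "->", state)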
{ "cite_N": [ "@cite_5" ], "mid": [ "1526423669" ], "abstract": [ "Recently, some researchers have argued that generative design patterns (GDPs) can leverage the obvious design re-use that characterizes traditional design patterns into code re-use. This work provides additional evidence that GDPs are both useful and productive. Specifically, the current state-of-the-art in the domain of computer games is to script individual game objects to provide the desired interactions for each game adventure. We use BioWare Corp.'s popular Neverwinter Nights computer role-playing game to show how GDPs can be used to generate game scripts. This is a particularly good domain for GDPs, since game designers often have little or no programming skills. We demonstrate our approach using a new GDP tool called ScriptEase." ] }
1307.3625
2058829124
The degree distribution is an important characteristic of complex networks. In many applications, quantification of degree distribution in the form of a fixed-length feature vector is a necessary step. On the other hand, we often need to compare the degree distribution of two given networks and extract the amount of similarity between the two distributions. In this paper, we propose a novel method for quantification of the degree distributions in complex networks. Based on this quantification method, a new distance function is also proposed for degree distributions, which captures the differences in the overall structure of the two given distributions. The proposed method is able to effectively compare networks even with different scales, and outperforms the state-of-the-art methods considerably, with respect to the accuracy of the distance function.
Janssen et al. @cite_28 propose another approach for quantification of degree distributions. In this method, the degree distribution is divided into eight equal-sized regions and the sum of degree probabilities in each region is extracted as distribution percentiles. This method is sensitive to the range of node degrees and also to outlier degree values. We refer to this technique as ''Percentiles'' and include it among the baseline methods, along with ''KS-test'' and ''Power-law'' (the power-law exponent), to evaluate our proposed distance metric, called ''DDQC''.
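A minimal numpy sketch of the ''Percentiles'' baseline as described above: the observed degree range is split into eight equal-width regions and the total degree probability in each region becomes one feature (the exact binning convention of @cite_28 may differ; this version is an assumption):

import numpy as np

def percentile_features(degrees, regions=8):
    """Sum of degree-distribution probabilities in each of `regions`
    equal-width bins over the observed degree range."""
    degrees = np.asarray(degrees)
    edges = np.linspace(degrees.min(), degrees.max() + 1, regions + 1)
    counts, _ = np.histogram(degrees, bins=edges)
    return counts / counts.sum()

print(percentile_features([1, 1, 2, 2, 3, 5, 8, 13, 21, 34]))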
{ "cite_N": [ "@cite_28" ], "mid": [ "2044028731" ], "abstract": [ "Several network models have been proposed to explain the link structure observed in online social networks. This paper addresses the problem of choosing the model that best fits a given real-world network. We implement a model-selection method based on unsupervised learning. An alternating decision tree is trained using synthetic graphs generated according to each of the models under consideration. We use a broad array of features, with the aim of representing different structural aspects of the network. Features include the frequency counts of small subgraphs (graphlets) as well as features capturing the degree distribution and small-world property. Our method correctly classifies synthetic graphs, and is robust under perturbations of the graphs. We show that the graphlet counts alone are sufficient in separating the training data, indicating that graphlet counts are a good way of capturing network structure. We tested our approach on four Facebook graphs from various American universities. The models th..." ] }
1307.2893
1492766965
We introduce a new model of competition on growing networks. This extends the preferential attachment model, with the key property that node choices evolve simultaneously with the network. When a new node joins the network, it chooses neighbours by preferential attachment, and selects its type based on the number of initial neighbours of each type. The model is analysed in detail, and in particular, we determine the possible proportions of the various types in the limit of large networks. An important qualitative feature we find is that, in contrast to many current theoretical models, often several competitors will coexist. This matches empirical observations in many real-world networks.
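A small Python sketch of the growth rule in the abstract above: each new node attaches to m existing nodes by preferential attachment and adopts the majority type among its initial neighbours. The seed graph, the majority rule and the tie-breaking (toward "A") are illustrative assumptions, since the abstract does not fix them:

import random

def grow(n, m=2, seed=0):
    random.seed(seed)
    types = ["A"] * m + ["B"] * m       # 2m seed nodes, half of each type
    ends = []                           # one entry per edge endpoint, so
    for i in range(2 * m):              # uniform choice ~ degree-proportional
        ends += [i, (i + 1) % (2 * m)]  # seed ring: every node has degree 2
    counts = {"A": m, "B": m}
    for v in range(2 * m, n):
        # Preferential attachment; the set collapses duplicate picks.
        nbrs = {random.choice(ends) for _ in range(m)}
        for u in nbrs:
            ends += [u, v]
        # Adopt the majority type among initial neighbours (ties -> "A").
        majority = max("AB", key=lambda t: sum(types[u] == t for u in nbrs))
        types.append(majority)
        counts[majority] += 1
    return counts

print(grow(10000))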
In marketing, competing companies fight for customers. In essence, our model describes word-of-mouth recommendations, and thus it should be compared to other models which study the effect of such personal recommendations. A related model of word-of-mouth learning was studied by Banerjee and Fudenberg @cite_32 , where successive generations of agents make choices between two alternatives, with new agents sampling the choices of old ones. However, they considered the limit of a continuum of agents with no network structure, in contrast to our setup, where this is explicitly modeled. Furthermore, they assume that one of the two alternatives is ``ex-ante better'' than the other, and focus on whether or not the agents can learn this via word-of-mouth communication. See also @cite_17 @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_32", "@cite_17" ], "mid": [ "2122489728", "2063313621", "2096605213" ], "abstract": [ "This paper studies the way that word-of-mouth communication aggregates the information of individual agents. We find that the structure of the communication process determines whether all agents end up making identical choices, with less communication making this conformity more likely. Despite the players' naive decision rules and the stochastic decision environment, word-of-mouth communication may lead all players to adopt the action that is on average superior. These socially efficient outcomes tend to occur when each agent samples only a few others.", "We give an explicit construction of the weak local limit of a class of preferential attachment graphs. This limit contains all local information and allows several computations that are otherwise hard, for example, joint degree distributions and, more generally, the limiting distribution of subgraphs in balls of any given radius @math around a random vertex in the preferential attachment graph. We also establish the finite-volume corrections which give the approach to the limit.", "This paper studies agents who consider the experiences of their neighbors in deciding which of two technologies to use. We analyze two learning environments, one in which the same technology is optimal for all players and another in which each technology is better for some of them. In both environments, players use exogenously specified rules of thumb that ignore historical data but may incorporate a tendency to use the more popular technology. In some cases these naive rules can lead to fairly efficient decisions in the long run, but adjustment can be slow when a superior technology is first introduced." ] }
1307.2893
1492766965
We introduce a new model of competition on growing networks. This extends the preferential attachment model, with the key property that node choices evolve simultaneously with the network. When a new node joins the network, it chooses neighbours by preferential attachment, and selects its type based on the number of initial neighbours of each type. The model is analysed in detail, and in particular, we determine the possible proportions of the various types in the limit of large networks. An important qualitative feature we find is that, in contrast to many current theoretical models, often several competitors will coexist. This matches empirical observations in many real-world networks.
The power of word-of-mouth has been a widely studied topic in the past half century, with research confirming the strong influence of word-of-mouth communication on consumer behavior @cite_5 @cite_53 @cite_50 @cite_33 @cite_47 . This research generally supports the assertion that word-of-mouth is more influential than external marketing efforts, such as advertising. In the current information age, online feedback mechanisms have changed the way customers share opinions about products and services @cite_1 , and online social networks are being exploited for viral marketing purposes @cite_18 . Nevertheless, traditional word-of-mouth recommendation networks still have a very important effect, and companies are advised to take advantage of this through their marketing efforts, e.g., via facilitating referrals @cite_45 @cite_24 . Due to the ever-changing ways individuals interact, it is important to analyze models---such as the one introduced in this paper---that study the interplay between how individuals interact and the effects of word-of-mouth communication in the given setting.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_53", "@cite_1", "@cite_24", "@cite_45", "@cite_50", "@cite_5", "@cite_47" ], "mid": [ "1994473607", "2040686047", "2567923580", "1562400265", "", "121399320", "2099594827", "2115926002", "1495750374" ], "abstract": [ "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective.", "Marketing practitioners and theorists routinely cite the power of the personal referral on customer behaviour. However, relatively few companies have tried to harness the power of word of mouth (WOM). Scholars have been pondering WOM over 2400 years, although modern marketing research into WOM started only relatively recently, in the post-war 1940s. WOM can be characterized by valence, focus, timing, solicitation and degree of management intervention. Most recent WOM research has been conducted from a customer-to-customer perspective, even though WOM is found in other contexts such as influence, employee and recruitment markets. Marketing research into WOM has attempted to answer two questions. What are the antecedents of WOM? What are the consequences of WOM? This paper integrates that research into a contingency model and attempts to identify researchable gaps in our knowledge.", "The spread of new ideas, behaviors or technologies has been extensively studied using epidemic models. Here we consider a model of diffusion where the individualsʼ behavior is the result of a strategic choice. We study a simple coordination game with binary choice and give a condition for a new action to become widespread in a random network. We also analyze the possible equilibria of this game and identify conditions for the coexistence of both strategies in large connected sets. Finally we look at how can firms use social networks to promote their goals with limited information. Our results differ strongly from the one derived with epidemic models and show that connectivity plays an ambiguous role: while it allows the diffusion to spread, when the network is highly connected, the diffusion is also limited by high-degree nodes which are very stable.", "Online feedback mechanisms harness the bidirectional communication capabilities of the Internet to engineer large-scale, word-of-mouth networks. Best known so far as a technology for building trust and fostering cooperation in online marketplaces, such as eBay, these mechanisms are poised to have a much wider impact on organizations. Their growing popularity has potentially important implications for a wide range of management activities such as brand building, customer acquisition and retention, product development, and quality assurance. 
This paper surveys our progress in understanding the new possibilities and challenges that these mechanisms represent. It discusses some important dimensions in which Internet-based feedback mechanisms differ from traditional word-of-mouth networks and surveys the most important issues related to their design, evaluation, and use. It provides an overview of relevant work in game theory and economics on the topic of reputation. It discusses how this body of work is being extended and combined with insights from computer science, management science, sociology, and psychology to take into consideration the special properties of online environments. Finally, it identifies opportunities that this new area presents for operations research management science (OR/MS) research.", "", "For this segment… Aim to… Example: Telecommunications Company. Payoff: Telecommunications Company. Affluents: Make them Champions by encouraging them to refer more new customers while maintaining their highly valuable purchasing behavior. Sent Affluents direct-mail promotions offering a @math 190, a 388% increase. Advocates: Turn them into Champions by increasing their CLV without compromising their CRV. Focused on cross-selling and up-selling company’s products; for example, by offering bundled products and giving discounts to customers signing one-year contracts. Average CLV increased approximately $110, a 61% improvement. Misers: Move them to any other segment by persuading them to buy more products", "The effects of word-of-mouth (WOM) communications and specific attribute information on product evaluations were investigated. A face-to-face WOM communication was more persuasive than a printed format (experiment 1). Although a strong WOM effect was found, this effect was reduced or eliminated when a prior impression of the target brand was available from memory or when extremely negative attribute information was presented (experiment 2). The results suggest that diverse, seemingly unrelated judgmental phenomena--such as the vividness effect, the perseverance effect, and the negativity effect--can be explained through the accessibility-diagnosticity model. Copyright 1991 by the University of Chicago.", "This paper analyzes a model of rational word-of-mouth learning, in which successive generations of agents make once-and-for-all choices between two alternatives. Before making a decision, each new agent samples N old ones and asks them which choice they used and how satisfied they were with it. If (a) the sampling rule is “unbiased” in the sense that the samples are representative of the overall population, (b) each player samples two or more others, and (c) there is any information at all in the payoff observations, then in the long run every agent will choose the same thing. If in addition the payoff observation is sufficiently informative, the long-run outcome is efficient. We also investigate a range of biased sampling rules, such as those that over-represent popular or successful choices, and determine which ones favor global convergence towards efficiency.", "Though word-of-mouth (w-o-m) communications is a pervasive and intriguing phenomenon, little is known on its underlying process of personal communications. Moreover as marketers are getting more interested in harnessing the power of w-o-m, for e-business and other net related activities, the effects of the different communications types on macro level marketing is becoming critical. 
In particular we are interested in the breakdown of the personal communication between closer and stronger communications that are within an individual's own personal group (strong ties) and weaker and less personal communications that an individual makes with a wide set of other acquaintances and colleagues (weak ties)." ] }
1307.2893
1492766965
We introduce a new model of competition on growing networks. This extends the preferential attachment model, with the key property that node choices evolve simultaneously with the network. When a new node joins the network, it chooses neighbours by preferential attachment, and selects its type based on the number of initial neighbours of each type. The model is analysed in detail, and in particular, we determine the possible proportions of the various types in the limit of large networks. An important qualitative feature we find is that, in contrast to many current theoretical models, often several competitors will coexist. This matches empirical observations in many real-world networks.
In epidemiology, pathogens fight for survival, and a central topic is the spread of diseases @cite_29 @cite_39 . In classic models of epidemic spreading, individuals are characterized by the stage of the disease in them: they can be susceptible, infected, or recovered/removed, leading to the SIR, SIRS and SIS models. The main object of study is the epidemic threshold, i.e., under what conditions does the disease die out or take over the population. An important finding is that the network structure underlying the population of individuals greatly affects the epidemic threshold; in particular, on scale-free networks the epidemic threshold vanishes, and diseases can spread even when infection probabilities are tiny @cite_0 @cite_8 @cite_44 @cite_13 .
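As a concrete illustration of the SIS dynamics mentioned above, a tiny discrete-time Python simulation with synchronous updates; the star graph and the parameter values are arbitrary choices that caricature the role of a high-degree hub:

import random

random.seed(0)

def sis_step(adj, infected, beta=0.05, mu=0.2):
    """One synchronous SIS update on adjacency dict `adj`."""
    new = set()
    for v in infected:
        if random.random() > mu:        # node stays infected w.p. 1 - mu
            new.add(v)
        for u in adj[v]:                # each contact transmits w.p. beta
            if random.random() < beta:
                new.add(u)
    return new

# Star graph: hub 0 connected to 50 leaves.
adj = {0: list(range(1, 51)), **{i: [0] for i in range(1, 51)}}
infected = {0}
for t in range(100):
    infected = sis_step(adj, infected)
print("infected after 100 steps:", len(infected))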
{ "cite_N": [ "@cite_8", "@cite_29", "@cite_39", "@cite_0", "@cite_44", "@cite_13" ], "mid": [ "2110919848", "114870970", "1606697907", "2038195874", "2021884758", "2030539428" ], "abstract": [ "The spread of diseases in human and other populations can be described in terms of networks, where individuals are represented by nodes and modes of contact by edges. Similar models can be applied to the spread of viruses on the Internet. In their Perspective, Lloyd and May discuss the similarities and differences between the dynamics of computer viruses and infections of human and other populations.", "", "Part 1 Microparasites: biology of host-microparasite associations the basic model - statics static aspects of eradication and control the basic model - dynamics dynamic aspects of eradication and control beyond the basic model - empirical evidence of inhomogeneous mixing age-related transmission rates genetic heterogeneity social heterogeneity and sexually transmitted diseases spatial and other kinds of heterogeneity endemic infections in developing countries indirectly transmitted microparasites. Part 2 Macroparasites: biology of host-macroparasite associations the basic model - statics the basic model - dynamics acquired immunity heterogeneity within the human community indirectly transmitted helminths experimental epidemiology parasites, genetic variability, and drug resistance the ecology and genetics of host-parasite associations.", "The Internet has a very complex connectivity recently modeled by the class of scale-free networks. This feature, which appears to be very efficient for a communications network, favors at the same time the spreading of computer viruses. We analyze real data from computer virus infections and find the average lifetime and persistence of viral strains on the Internet. We define a dynamical model for the spreading of infections on scale-free networks, finding the absence of an epidemic threshold and its associated critical behavior. This new epidemiological framework rationalizes data of computer viruses and could help in the understanding of other spreading phenomena on communication and social networks.", "We discuss properties of infection processes on scale-free networks, relating them to the node-connectivity distribution that characterizes the network. Considering the epidemiologically important case of a disease that confers permanent immunity upon recovery, we derive analytic expressions for the final size of an epidemic in an infinite closed population and for the dependence of infection probability on an individual’s degree of connectivity within the population. As in an earlier study @R. Pastor-Satorras and A. Vesipignani, Phys. Rev. Lett. 86, 3200 2001!; Phys. Rev. E. 63, 006117 2001!# for an infection that did not confer immunity upon recovery, the epidemic process—in contrast with many traditional epidemiological models—does not exhibit threshold behavior, and we demonstrate that this is a consequence of the extreme heterogeneity in the connectivity distribution of a scale-free network. Finally, we discuss effects that arise from finite population sizes, showing that networks of finite size do exhibit threshold effects: infections cannot spread for arbitrarily low transmission probabilities.", "The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. 
In this paper, we show that a large class of standard epidemiological models, the so-called susceptible/infective/removed (SIR) models, can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks." ] }
1307.2893
1492766965
We introduce a new model of competition on growing networks. This extends the preferential attachment model, with the key property that node choices evolve simultaneously with the network. When a new node joins the network, it chooses neighbours by preferential attachment, and selects its type based on the number of initial neighbours of each type. The model is analysed in detail, and in particular, we determine the possible proportions of the various types in the limit of large networks. An important qualitative feature we find is that, in contrast to many current theoretical models, often several competitors will coexist. This matches empirical observations in many real-world networks.
Another large area of epidemiology studies conditions under which multiple strains of a pathogen can coexist (see, e.g., @cite_7 and references therein), while the physics community has been studying the effects of the underlying network on competing epidemics @cite_43 @cite_40 @cite_9 .
{ "cite_N": [ "@cite_43", "@cite_40", "@cite_9", "@cite_7" ], "mid": [ "", "1889213102", "1967608252", "2054476679" ], "abstract": [ "", "Department of Physics, University of Seoul, Seoul 130-743, Korea(Dated: May 22, 2011)We study the non-equilibrium phase transition in a model for epidemic spreading on scale-freenetworks. The model consists of two particle species A and B, and the coupling between them istaken to be asymmetric; A induces B while B suppresses A. This model describes the spreadingof an epidemic on networks equipped with a reactive immune system. We present analytic resultson the phase diagram and the critical behavior, which depends on the degree exponent γ of theunderlying scale-free networks. Numerical simulation results that support the analytic results arealso presented.", "Human diseases spread over networks of contacts between individuals and a substantial body of recent research has focused on the dynamics of the spreading process. Here we examine a model of two competing diseases spreading over the same network at the same time, where infection with either disease gives an individual subsequent immunity to both. Using a combination of analytic and numerical methods, we derive the phase diagram of the system and estimates of the expected final numbers of individuals infected with each disease. The system shows an unusual dynamical transition between dominance of one disease and dominance of the other as a function of their relative rates of growth. Close to this transition the final outcomes show strong dependence on stochastic fluctuations in the early stages of growth, dependence that decreases with increasing network size, but does so sufficiently slowly as still to be easily visible in systems with millions or billions of individuals. In most regions of the phase diagram we find that one disease eventually dominates while the other reaches only a vanishing fraction of the network, but the system also displays a significant coexistence regime in which both diseases reach epidemic proportions and infect an extensive fraction of the network.", "In most pathogens, multiple strains are maintained within host populations. Quantifying the mechanisms underlying strain coexistence would aid public health planning and improve understanding of disease dynamics. We argue that mathematical models of strain coexistence, when applied to indistinguishable strains, should meet criteria for both ecological neutrality and population genetic neutrality. We show that closed clonal transmission models which can be written in an “ancestor-tracing” form that meets the former criterion will also satisfy the latter. Neutral models can be a parsimonious starting point for studying mechanisms of strain coexistence; implications for past and future studies are discussed." ] }
1307.2893
1492766965
We introduce a new model of competition on growing networks. This extends the preferential attachment model, with the key property that node choices evolve simultaneously with the network. When a new node joins the network, it chooses neighbours by preferential attachment, and selects its type based on the number of initial neighbours of each type. The model is analysed in detail, and in particular, we determine the possible proportions of the various types in the limit of large networks. An important qualitative feature we find is that, in contrast to many current theoretical models, often several competitors will coexist. This matches empirical observations in many real-world networks.
This research in epidemiology is relevant in a much broader context, since many dynamical processes, such as the diffusion of information and opinions, can be modeled as epidemics. Indeed, the spread of competing products has been modeled in this way as well @cite_12 @cite_28 . In @cite_12 the authors study a ``@math'' model of competing viruses with perfect mutual immunity in a mean-field setting for fixed networks, and conclude that ``the winner takes all'', i.e., one virus will take over, while in @cite_28 they study what level of partial immunity allows for coexistence of the two viruses. A related model of competing first passage percolation has been studied in probability theory on various network topologies, including random regular graphs @cite_51 and scale-free networks @cite_35 ; the conclusion again is that the winner takes all. In contrast, in many current markets we observe that competing products coexist, even when they are mutually exclusive.
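The competing first passage percolation mentioned above can be sketched as a Dijkstra-style race in Python: two types spread from different seeds, edge passage times are sampled from exponential distributions with type-dependent rates, and each vertex keeps whichever type reaches it first. This is a simplified variant for illustration; the graph, rates and seeds are arbitrary choices:

import heapq
import random

def compete(adj, seed1, seed2, lam1=1.0, lam2=1.0, seed=0):
    """Race two types over `adj`; edge times ~ Exp(rate of the type)."""
    rng = random.Random(seed)
    owner = {}                              # vertex -> 1 or 2, first wins
    pq = [(0.0, seed1, 1), (0.0, seed2, 2)]
    while pq:
        t, v, typ = heapq.heappop(pq)
        if v in owner:
            continue
        owner[v] = typ
        rate = lam1 if typ == 1 else lam2
        for u in adj[v]:
            if u not in owner:
                heapq.heappush(pq, (t + rng.expovariate(rate), u, typ))
    return owner

adj = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}   # a 20-cycle
owner = compete(adj, 0, 10, lam1=2.0, lam2=1.0)
print(sum(1 for t in owner.values() if t == 1), "of 20 vertices end up type 1")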
{ "cite_N": [ "@cite_28", "@cite_35", "@cite_51", "@cite_12" ], "mid": [ "2171031021", "2168137408", "2950901863", "2163595839" ], "abstract": [ "Suppose we have two competing ideas products viruses, that propagate over a social or other network. Suppose that they are strong virulent enough, so that each, if left alone, could lead to an epidemic. What will happen when both operate on the network? Earlier models assume that there is perfect competition: if a user buys product 'A' (or gets infected with virus 'X'), she will never buy product 'B' (or virus 'Y'). This is not always true: for example, a user could install and use both Firefox and Google Chrome as browsers. Similarly, one type of flu may give partial immunity against some other similar disease. In the case of full competition, it is known that 'winner takes all,' that is the weaker virus product will become extinct. In the case of no competition, both viruses survive, ignoring each other. What happens in-between these two extremes? We show that there is a phase transition: if the competition is harsher than a critical level, then 'winner takes all;' otherwise, the weaker virus survives. These are the contributions of this paper (a) the problem definition, which is novel even in epidemiology literature (b) the phase-transition result and (c) experiments on real data, illustrating the suitability of our results.", "We study competing first passage percolation on graphs generated by the configuration model. At time 0, vertex 1 and vertex 2 are infected with the type 1 and the type 2 infection, respectively, and an uninfected vertex then becomes type 1 (2) infected at rate @math ( @math ) times the number of edges connecting it to a type 1 (2) infected neighbor. Our main result is that, if the degree distribution is a power-law with exponent @math , then, as the number of vertices tends to infinity and with high probability, one of the infection types will occupy all but a finite number of vertices. Furthermore, which one of the infections wins is random and both infections have a positive probability of winning regardless of the values of @math and @math . The picture is similar with multiple starting points for the infections.", "We consider two competing first passage percolation processes started from uniformly chosen subsets of a random regular graph on @math vertices. The processes are allowed to spread with different rates, start from vertex subsets of different sizes or at different times. We obtain tight results regarding the sizes of the vertex sets occupied by each process, showing that in the generic situation one process will occupy @math vertices, for some @math . The value of @math is calculated in terms of the relative rates of the processes, as well as the sizes of the initial vertex sets and the possible time advantage of one process. The motivation for this work comes from the study of viral marketing on social networks. The described processes can be viewed as two competing products spreading through a social network (random regular graph). Considering the processes which grow at different rates (corresponding to different attraction levels of the two products) or starting at different times (the first to market advantage) allows to model aspects of real competition. The results obtained can be interpreted as one of the two products taking the lion share of the market. 
We compare these results to the same process run on @math dimensional grids where we show that in the generic situation the two products will have a linear fraction of the market each.", "Given two competing products (or memes, or viruses etc.) spreading over a given network, can we predict what will happen at the end, that is, which product will 'win', in terms of highest market share? One may naively expect that the better product (stronger virus) will just have a larger footprint, proportional to the quality ratio of the products (or strength ratio of the viruses). However, we prove the surprising result that, under realistic conditions, for any graph topology, the stronger virus completely wipes-out the weaker one, thus not merely 'winning' but 'taking it all'. In addition to the proofs, we also demonstrate our result with simulations over diverse, real graph topologies, including the social-contact graph of the city of Portland OR (about 31 million edges and 1 million nodes) and internet AS router graphs. Finally, we also provide real data about competing products from Google-Insights, like Facebook-Myspace, and we show again that they agree with our analysis." ] }
1307.2964
1914347863
The stack-based access control mechanism plays a fundamental role in the security architecture of Java and Microsoft CLR (common language runtime). It is enforced at runtime by inspecting methods in the current call stack for granted permissions before the program performs safety-critical operations. Although stack inspection is well studied, there is relatively little work on automated generation of access control policies, and most existing work on inferring security policies assumes that the permissions to be checked at stack inspection points are known beforehand. In practice, access control policies are still generated manually by developers, based on domain-specific knowledge and trial-and-error testing. In this paper, we present a systematic approach to the automated generation of access control policies for Java programs that ensure the program will pass stack inspection. The techniques are abstract-interpretation-based, context-sensitive static program analyses. Our analysis models the program by combining a context-sensitive call graph with a dependency graph. We are thereby able to precisely identify permission requirements at stack inspection points, which are usually ignored in previous studies.
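A toy Python model of the stack inspection described above: a permission check succeeds only if every frame on the call stack is granted the permission, and a privileged frame stops the walk early. This mimics the Java 2 semantics in spirit only; it is not the real java.security API:

class SecurityError(Exception):
    pass

def check_permission(stack, perm, granted):
    """`stack` lists (method, domain, privileged) frames, most recent first."""
    for method, domain, privileged in stack:
        if perm not in granted.get(domain, set()):
            raise SecurityError(f"{method} ({domain}) lacks {perm}")
        if privileged:          # doPrivileged: stop inspecting older frames
            return

granted = {"system": {"FilePermission"}, "applet": set()}
stack = [("readFile", "system", True),      # privileged system code
         ("main", "applet", False)]         # untrusted caller below it
check_permission(stack, "FilePermission", granted)  # passes: walk stops early
print("check passed")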
@cite_14 provided a backward static analysis to approximate redundant permission checks (those that must fail stack inspection) and successful permission checks (those that must pass stack inspection). This approach was later employed in a visualization tool for permission checks in Java @cite_9 . But the tool did not provide any means to relieve users of the burden of deciding access rights. In addition to a policy file, users were also required to explicitly specify which methods and permissions to check. Two forward control-flow analyses, Denied Permission Analysis and Granted Permission Analysis, were defined in @cite_13 @cite_8 to approximate the sets of permissions denied or granted to given Java bytecode at runtime. The outcomes of these analyses were then used to eliminate redundant permission checks and relocate others to more appropriate places in the code.
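The flavour of such forward analyses can be conveyed by a small fixpoint sketch in Python: the set of permissions guaranteed to pass stack inspection inside a method is its own statically granted set intersected with that of every possible caller chain. This is a deliberately simplified abstraction for illustration, not the algorithm of @cite_13 @cite_8 :

def granted_fixpoint(callers, policy, entry):
    """G(m): permissions sure to pass stack inspection inside method m,
    i.e. granted to m and to every chain of callers (meet over callers)."""
    G = {m: set(policy[m]) for m in policy}
    changed = True
    while changed:
        changed = False
        for m, cs in callers.items():
            if m == entry or not cs:
                continue
            new = set(policy[m])
            for c in cs:                 # intersection over all callers
                new &= G[c]
            if new != G[m]:
                G[m] = new
                changed = True
    return G

policy = {"main": {"net", "file"}, "lib": {"net", "file"}, "plugin": {"net"}}
callers = {"main": [], "lib": ["main", "plugin"], "plugin": ["main"]}
print(granted_fixpoint(callers, policy, "main"))
# "lib" loses "file": a call chain through the less-privileged plugin exists.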
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_13", "@cite_8" ], "mid": [ "1604540233", "2135849267", "1991206705", "" ], "abstract": [ "The security manager in Java 2 is a runtime access control mechanism. Whenever an access permission to critical resources is requested, the security manager inspects a call stack to examine whether the program has appropriate access permissions or not. This run-time permission check called stack inspection enforces access-control policies that associate access rights with the class that initiates the access. In this paper, we develop a visualization tool which helps programmers enforce security policy effectively into programs. It is based on the static permission check analysis which approximates permission checks statically which must succeed or fail at each method. Using the visualization system, programmers can modify programs and policy files if necessary, as they examine how permission checks and their stack inspection are performed. This process can be repeated until the security policy is enforced correctly.", "Most static analysis techniques for optimizing stack inspection approximate permission sets such as granted permissions and denied permissions. Because they compute permission sets following control flow, they usually take intra-procedural control flow into consideration as well as call relationship. In this paper, we observed that it is necessary for more precise optimization on stack inspection to compute more specific information on checks instead of permissions. We propose a backward static analysis based on simple call graph to approximate redundant permission checks which must fail. In a similar way, we also propose a backward static analysis to approximate success permission checks, which must pass stack inspection.", "Abstract We propose two control flow analyses for the Java bytecode. They safely approximate the set of permissions granted denied to code at run-time. This static information helps optimizing the implementation of the stack inspection algorithm.", "" ] }
1307.2964
1914347863
The stack-based access control mechanism plays a fundamental role in the security architecture of Java and Microsoft CLR (common language runtime). It is enforced at runtime by inspecting methods in the current call stack for granted permissions before the program performs safety-critical operations. Although stack inspection is well studied, there is relatively little work on automated generation of access control policies, and most existing work on inferring security policies assumes that the permissions to be checked at stack inspection points are known beforehand. In practice, access control policies are still generated manually by developers, based on domain-specific knowledge and trial-and-error testing. In this paper, we present a systematic approach to the automated generation of access control policies for Java programs that ensure the program will pass stack inspection. The techniques are abstract-interpretation-based, context-sensitive static program analyses. Our analysis models the program by combining a context-sensitive call graph with a dependency graph. We are thereby able to precisely identify permission requirements at stack inspection points, which are usually ignored in previous studies.
The authors of @cite_15 proposed a context-sensitive (1-CFA), flow-sensitive, interprocedural data flow analysis to automatically estimate the set of access rights required at each program point. In spite of notable experimental results, the study suffered from a practical limitation, as it does not properly handle strings in the analysis. As a module for privilege assertion in a popular tool -- IBM Security Workbench Development for Java (SWORD4J) @cite_1 -- the interprocedural analysis for privileged code placement @cite_6 tackled three neat problems: identifying portions of code that need to be made privileged, detecting tainted variables in privileged code, and exposing useless privileged blocks of code, by utilizing the technique in @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_6" ], "mid": [ "2107370049", "", "2096230959" ], "abstract": [ "Java 2 has a security architecture that protects systems from unauthorized access by mobile or statically configured code. The problem is in manually determining the set of security access rights required to execute a library or application. The commonly used strategy is to execute the code, note authorization failures, allocate additional access rights, and test again. This process iterates until the code successfully runs for the test cases in hand. Test cases usually do not cover all paths through the code, so failures can occur in deployed systems. Conversely, a broad set of access rights is allocated to the code to prevent authorization failures from occurring. However, this often leads to a violation of the \"Principle of Least Privilege\"This paper presents a technique for computing the access rights requirements by using a context sensitive, flow sensitive, interprocedural data flow analysis. By using this analysis, we compute at each program point the set of access rights required by the code. We model features such as multi-threading, implicitly defined security policies, the semantics of the Permission.implies method and generation of a security policy description. We implemented the algorithms and present the results of our analysis on a set of programs. While the analysis techniques described in this paper are in the context of Java code, the basic techniques are applicable to access rights analysis issues in non-Java-based systems.", "", "In Java 2 and Microsoft .NET Common Language Runtime (CLR), trusted code has often been programmed to perform access-restricted operations not explicitly requested by its untrusted clients. Since an untrusted client will be on the call stack when access control is enforced, an access-restricted operation will not succeed unless the client is authorized. To avoid this, a portion of the trusted code can be made “privileged.” When access control is enforced, privileged code causes the stack traversal to stop at the trusted code frame, and the untrusted code stack frames will not be checked for authorization. For large programs, manually understanding which portions of code should be made privileged is a difficult task. Developers must understand which authorizations will implicitly be extended to client code and make sure that the values of the variables used by the privileged code are not “tainted” by client code. This paper presents an interprocedural analysis for Java bytecode to automatically identify which portions of trusted code should be made privileged, ensure that there are no tainted variables in privileged code, and detect “unnecessary” and “redundant” privileged code. We implemented the algorithm and present the results of our analyses on a set of large programs. While the analysis techniques are in the context of Java code, the basic concepts are also applicable to non-Java systems with a similar authorization model." ] }
1307.2964
1914347863
The stack-based access control mechanism plays a fundamental role in the security architecture of Java and Microsoft CLR (common language runtime). It is enforced at runtime by inspecting methods in the current call stack for granted permissions before the program performs safety-critical operations. Although stack inspection is well studied, there is relatively little work on automated generation of access control policies, and most existing work on inferring security policies assumes that the permissions to be checked at stack inspection points are known beforehand. In practice, access control policies are still generated manually by developers, based on domain-specific knowledge and trial-and-error testing. In this paper, we present a systematic approach to the automated generation of access control policies for Java programs that ensure the program will pass stack inspection. The techniques are abstract-interpretation-based, context-sensitive static program analyses. Our analysis models the program by combining a context-sensitive call graph with a dependency graph. We are thereby able to precisely identify permission requirements at stack inspection points, which are usually ignored in previous studies.
To the best of our knowledge, the modular permission analysis proposed in @cite_3 is the most relevant to our work. On the one hand, it was also concerned with automatically generating security policies for any given program, with particular attention to the principle of least privilege. On the other hand, its authors were the first to attempt to reflect the effects of string analysis in access rights analysis, in terms of slicing. A modular analysis algorithm is proposed to achieve practical scalability, and the authors developed a tool, Automated Authorization Analysis (A3), to assess the precision of permission requirements for stack inspection. However, their algorithms are based on a context-insensitive call graph, so the analysis results can be polluted by invalid call paths. Moreover, their slicing algorithms are also context-insensitive.
{ "cite_N": [ "@cite_3" ], "mid": [ "2129626531" ], "abstract": [ "In modern software systems, programs are obtained by dynamically assembling components. This has made it necessary to subject component providers to access-control restrictions. What permissions should be granted to each component? Too few permissions may cause run-time authorization failures, too many constitute a security hole. We have designed and implemented a composite algorithm for precise static permission analysis for Java and the CLR. Unlike previous work, the analysis is modular and fully integrated with a novel slicing-based string analysis that is used to statically compute the string values defining a permission and disambiguate permission propagation paths. The results of our research prototype on production-level Java code support the effectiveness, practicality, and precision of our techniques, and show outstanding improvement over previous work." ] }
1307.1662
1523296404
Distributed word representations (word embeddings) have recently contributed to competitive performance in language modeling and several NLP tasks. In this work, we train word embeddings for more than 100 languages using their corresponding Wikipedias. We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part of speech tagger for a subset of these languages. We find their performance to be competitive with near-state-of-the-art methods in English, Danish and Swedish. Moreover, we investigate the semantic features captured by these embeddings through the proximity of word groupings. We will release these embeddings publicly to help researchers in the development and enhancement of multilingual applications.
There is a large body of work on semi-supervised techniques that integrate unsupervised feature learning with discriminative learning methods to improve the performance of NLP applications. Word clustering has been used to learn classes of words with similar semantic features, improving language modeling @cite_29 and knowledge transfer across languages @cite_27 . Dependency parsing and other NLP tasks have been shown to benefit from cluster features derived from large unannotated corpora @cite_11 , and a variety of unsupervised feature learning methods have been shown to consistently improve the performance of supervised learning tasks @cite_24 . @cite_18 induce distributed representations for a pair of languages jointly, so that a learner can be trained on annotations present in one language and applied to test data in another.
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_24", "@cite_27", "@cite_11" ], "mid": [ "2251033195", "2121227244", "2158139315", "1818534184", "2128634885" ], "abstract": [ "Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.", "We address the problem of predicting a word from previous words in a sample of text. In particular, we discuss n-gram models based on classes of words. We also discuss several statistical algorithms for assigning words to classes based on the frequency of their co-occurrence with other words. We find that we are able to extract classes that have the flavor of either syntactically based groupings or semantically based groupings, depending on the nature of the underlying statistics.", "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs", "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13 . When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26 .", "We present a simple and effective semisupervised method for training dependency parsers. 
We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02 to 93.16 , and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13 to 87.13 . In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance." ] }
1307.1662
1523296404
Distributed word representations (word embeddings) have recently contributed to competitive performance in language modeling and several NLP tasks. In this work, we train word embeddings for more than 100 languages using their corresponding Wikipedias. We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part-of-speech tagger for a subset of these languages. We find their performance to be competitive with near-state-of-the-art methods in English, Danish and Swedish. Moreover, we investigate the semantic features captured by these embeddings through the proximity of word groupings. We will release these embeddings publicly to help researchers in the development and enhancement of multilingual applications.
Learning distributed word representations is a way to capture effective and meaningful information about words and their usage. The representations are usually generated as a side effect of training parametric language models, typically probabilistic neural networks. Training these models is slow and takes a significant amount of computational resources @cite_20 @cite_9 . Several approaches have been proposed to speed up the training procedure, either by changing the model architecture to exploit an algorithmic speedup @cite_22 @cite_12 or by approximating the training objective through sampling @cite_35 .
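As a rough sketch of one such architectural speedup, a hierarchical, tree-structured output layer in the spirit of @cite_12 factors each word's probability into a handful of binary decisions instead of one normalization over the whole vocabulary. The toy vocabulary, paths, and vectors below are made-up stand-ins, not any published model's parameters:

```python
import numpy as np

# Hierarchical softmax sketch: each vocabulary word is a leaf of a
# binary tree; its probability is the product of sigmoid decisions
# taken at the internal nodes along its root-to-leaf path.

rng = np.random.default_rng(0)
dim, n_inner = 16, 7                      # toy sizes: an 8-word vocab has 7 inner nodes
node_vecs = rng.normal(size=(n_inner, dim))

# Hypothetical codes: (inner-node index, branch bit) along each path.
paths = {
    "cat": [(0, 0), (1, 0), (3, 1)],
    "dog": [(0, 0), (1, 1), (4, 0)],
}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_prob(word, context_vec):
    # O(log |V|) work instead of normalizing over the whole vocabulary.
    lp = 0.0
    for node, bit in paths[word]:
        p_left = sigmoid(node_vecs[node] @ context_vec)
        lp += np.log(p_left if bit == 0 else 1.0 - p_left)
    return lp

context = rng.normal(size=dim)            # stand-in for a hidden-layer output
print(log_prob("cat", context))
```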
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_9", "@cite_20", "@cite_12" ], "mid": [ "2152808281", "", "2168231600", "100623710", "36903255" ], "abstract": [ "Previous work on statistical language modeling has shown that it is possible to train a feedforward neural network to approximate probabilities over sequences of words, resulting in significant error reduction when compared to standard baseline models based on n-grams. However, training the neural network model with the maximum-likelihood criterion requires computations proportional to the number of words in the vocabulary. In this paper, we introduce adaptive importance sampling as a way to accelerate training of the model. The idea is to use an adaptive n-gram model to track the conditional distributions produced by the neural network. We show that a very significant speedup can be obtained on standard problems.", "", "Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.", "A central goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. 
We report on several methods to speed-up both training and probability computation, as well as comparative experiments to evaluate the improvements brought by these techniques. We finally describe the incorporation of this new language model into a state-of-the-art speech recognizer of conversational speech.", "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy." ] }
1307.1662
1523296404
Distributed word representations (word embeddings) have recently contributed to competitive performance in language modeling and several NLP tasks. In this work, we train word embeddings for more than 100 languages using their corresponding Wikipedias. We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part-of-speech tagger for a subset of these languages. We find their performance to be competitive with near-state-of-the-art methods in English, Danish and Swedish. Moreover, we investigate the semantic features captured by these embeddings through the proximity of word groupings. We will release these embeddings publicly to help researchers in the development and enhancement of multilingual applications.
@cite_13 shows that word embeddings can almost substitute for common hand-engineered NLP features on several tasks. The system they built, SENNA, performs part-of-speech tagging, chunking, named entity recognition, semantic role labeling, and dependency parsing @cite_39 . The system is built on top of word embeddings and performs competitively with state-of-the-art systems. In addition to its accuracy, the system executes faster than comparable NLP pipelines @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_39" ], "mid": [ "2963217253", "2117130368", "" ], "abstract": [ "Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named entities extraction systems. However, the progress in deploying these approaches on web-scale has been been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than Stanford NLP pipeline. This pipeline consists of a high performance Penn Treebankcompliant tokenizer, close to state-of-art part-of-speech (POS) tagger and knowledge-based named entity recognizer.", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.", "" ] }
1307.1662
1523296404
Distributed word representations (word embeddings) have recently contributed to competitive performance in language modeling and several NLP tasks. In this work, we train word embeddings for more than 100 languages using their corresponding Wikipedias. We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part-of-speech tagger for a subset of these languages. We find their performance to be competitive with near-state-of-the-art methods in English, Danish and Swedish. Moreover, we investigate the semantic features captured by these embeddings through the proximity of word groupings. We will release these embeddings publicly to help researchers in the development and enhancement of multilingual applications.
To speed up the embedding generation process, SENNA embeddings are generated through a procedure different from language modeling: the representations are acquired by a model trained to distinguish between phrases and corrupted versions of them. By doing so, the model avoids the need to normalize scores across the vocabulary to infer probabilities. @cite_4 shows that the embeddings generated by SENNA perform well in a variety of term-based evaluation tasks. Given the training speed and prior performance on NLP tasks in English, we generate our multilingual embeddings using a network architecture similar to SENNA's.
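A minimal sketch of this kind of corruption-based ranking objective follows; the linear scorer, toy vocabulary, and window size are stand-ins, not SENNA's actual configuration. The point is that the loss only compares two scores, so no softmax over the vocabulary is ever computed:

```python
import numpy as np

# Pairwise ranking sketch: an observed window should score higher,
# by a margin, than the same window with its center word replaced
# by a random word.

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "xylophone"]
dim = 8
emb = {w: rng.normal(scale=0.1, size=dim) for w in vocab}
w_out = rng.normal(scale=0.1, size=3 * dim)   # toy linear scorer over a 3-word window

def score(window):
    return w_out @ np.concatenate([emb[w] for w in window])

def hinge_loss(window):
    # Corrupt the center word and demand a margin of 1.
    # (A real implementation would resample if the corrupted word
    # happens to equal the original.)
    corrupted = list(window)
    corrupted[1] = rng.choice(vocab)
    return max(0.0, 1.0 - score(window) + score(corrupted))

print(hinge_loss(("the", "cat", "sat")))
```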
{ "cite_N": [ "@cite_4" ], "mid": [ "1576954243" ], "abstract": [ "We seek to better understand the information encoded in word embeddings. We propose several tasks that help to distinguish the characteristics of different publicly released embeddings. Our evaluation shows that embeddings are able to capture surprisingly nuanced semantics even in the absence of sentence structure. Moreover, benchmarking the embeddings shows great variance in quality and characteristics of the semantics captured by the tested embeddings. Finally, we show the impact of varying the number of dimensions and the resolution of each dimension on the effective useful features captured by the embedding space. Our contributions highlight the importance of embeddings for NLP tasks and the effect of their quality on the final results." ] }
1307.1662
1523296404
Distributed word representations (word embeddings) have recently contributed to competitive performance in language modeling and several NLP tasks. In this work, we train word embeddings for more than 100 languages using their corresponding Wikipedias. We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part-of-speech tagger for a subset of these languages. We find their performance to be competitive with near-state-of-the-art methods in English, Danish and Swedish. Moreover, we investigate the semantic features captured by these embeddings through the proximity of word groupings. We will release these embeddings publicly to help researchers in the development and enhancement of multilingual applications.
Despite the progress made in creating distributed representations, combining them to produce meaning is still a challenging task. Several approaches have been proposed to address the compositionality of word embeddings for semantic problems such as paraphrase detection @cite_19 and sentiment analysis @cite_31 .
{ "cite_N": [ "@cite_19", "@cite_31" ], "mid": [ "2103305545", "1889268436" ], "abstract": [ "Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.", "Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them." ] }
1307.1543
2951529892
Despite the services of sophisticated search engines like Google, there are a number of interesting information sources which are useful but largely inaccessible to current Web users. These information sources are often ad-hoc, location-specific and only useful for users over short periods of time, or relate to tacit knowledge of users or implicit knowledge in crowds. The solution presented in this paper addresses these problems by introducing an integrated concept of "location" and "presence" across the physical and virtual worlds, enabling ad-hoc socializing of users interested in, or looking for, similar information. While the definitions of location and presence in the physical world are straightforward - through a spatial location and vicinity at a certain point in time - their definitions in the virtual world are neither obvious nor trivial. Based on a detailed analysis we provide an integrated spatial model spanning both worlds which enables us to define the presence of users in a unified way. This integrated model allows us to enable ad-hoc socializing of users browsing the Web with users in the physical world specific to their joint information needs, and allows us to unlock the untapped information sources mentioned above. We describe a proof-of-concept implementation of our model and provide an empirical analysis based on real-world experiments.
There are two principal approaches to investigating users' browsing behavior. Server-side approaches, such as @cite_16 @cite_17 , analyze server access logs. @cite_16 shows that users often exhibit several different behavior patterns rather than a single one. The results in @cite_17 confirm previous findings about long-tailed distributions in site traffic. @cite_23 @cite_1 analyze search engine interaction logs to gain insights into query behavior: @cite_23 investigated how users' search behavior can be used as feedback to improve the ranking of query results, and @cite_1 found that after a few hundred queries a user's topical interest distribution converges. Client-side approaches collect data on the client side using, e.g., browser plugins that log all user actions. @cite_9 focused on demographic factors, i.e., how age, sex, etc. affect users' browsing behavior. In @cite_11 the authors identify different types of revisitation behavior, providing recommendations for web browsers, search engines, and web design. @cite_8 @cite_12 investigated tabbed browsing in detail, i.e., the benefits of multiple tabs within a browser window.
{ "cite_N": [ "@cite_11", "@cite_8", "@cite_9", "@cite_1", "@cite_23", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "2140188249", "2070347268", "613065787", "2065222494", "2125771191", "2061757614", "2049981031", "2018874118" ], "abstract": [ "Our work examines Web revisitation patterns. Everybody revisits Web pages, but their reasons for doing so can differ depending on the particular Web page, their topic of interest, and their intent. To characterize how people revisit Web content, we analyzed five weeks of Web interaction logs of over 612,000 users. We supplemented these findings by a survey intended to identify the intent behind the observed revisitation. Our analysis reveals four primary revisitation patterns, each with unique behavioral, content, and structural characteristics. Through our analysis we illustrate how understanding revisitation patterns can enable Web sites to provide improved navigation, Web browsers to predict users' destinations, and search engines to better support fast, fresh, and effective finding and re-finding.", "We present a study which investigated how and why users of Mozilla Firefox use multiple tabs and windows during web browsing. The detailed web browsing usage of 21 participants was logged over a period of 13 to 21 days each, and was supplemented by qualitative data from diary entries and interviews. Through an examination of several measures of their tab usage, we show that our participants had a strong preference for the use of tabs rather than multiple windows. We report the reasons they cited for using tabs, and the advantages over multiple windows. We identify several common tab usage patterns which browsers could explicitly support. Finally, we look at how tab usage affects web page revisitation. Most of our participants switched tabs more often than they used the back button, making tab switching the second most important navigation mechanism in the browser, after link clicking.", "As the Web has become integrated into daily life, understanding how individuals spend their time online impacts domains ranging from public policy to marketing. It is difficult, however, to measure even simple aspects of browsing behavior via conventional methods---including surveys and site-level analytics---due to limitations of scale and scope. In part addressing these limitations, large-scale Web panel data are a relatively novel means for investigating patterns of Internet usage. In one of the largest studies of browsing behavior to date, we pair Web histories for 250,000 anonymized individuals with user-level demographics---including age, sex, race, education, and income---to investigate three topics. First, we examine how behavior changes as individuals spend more time online, showing that the heaviest users devote nearly twice as much of their time to social media relative to typical individuals. Second, we revisit the digital divide, finding that the frequency with which individuals turn to the Web for research, news, and healthcare is strongly related to educational background, but not as closely tied to gender and ethnicity. Finally, we demonstrate that browsing histories are a strong signal for inferring user attributes, including ethnicity and household income, a result that may be leveraged to improve ad targeting.", "Query logs, the patterns of activity left by millions of users, contain a wealth of information that can be mined to aid personalization. We perform a large-scale study of Yahoo! 
search engine logs, tracking 1.35 million browser-cookies over a period of 6 months. We define metrics to address questions such as 1) How much history is available?, 2) How do users' topical interests vary, as reflected by their queries?, and 3) What can we learn from user clicks? We find that there is significantly more expected history for the user of a randomly picked query than for a randomly picked user. We show that users exhibit consistent topical interests that vary between users. We also see that user clicks indicate a variety of special interests. Our findings shed light on user activity and can inform future personalization efforts.", "We show that incorporating user behavior data can significantly improve ordering of top results in real web search setting. We examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features. We report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine. We show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithms by as much as 31 relative to the original performance.", "User Navigation Behavior Mining (UNBM) mainly studies the problems of extracting the interesting user access patterns from user access sequences (UAS), which are usually used for user access prediction and web page recommendation. Through analyzing the real world web data, we find most of user access sequences carrying hybrid features of different patterns, rather than a single one.", "Browsing the web has been shown to be a highly recurrent activity. Aimed to optimize the browsing experience, extensive previous research has been carried out on users' revisitation behavior. However, the conventional definition for revisitation, which only considers page loading activities by monitoring http requests initiated by the browser, largely underestimates users' intended revisitation activities with tabbed browsers. Thus, we introduce a goal-oriented definition and a refined revisitation measurement based on page viewings in tabbed browsers. An empirical analysis of statistics taken from a client-side log study showed that although the overall revisitation rate remained relatively constant, tabbed browsing has introduced new behaviors warrant future investigations.", "We examine the properties of all HTTP requests generated by a thousand undergraduates over a span of two months. Preserving user identity in the data set allows us to discover novel properties of Web traffic that directly affect models of hypertext navigation. We find that the popularity of Web sites--the number of users who contribute to their traffic--lacks any intrinsic mean and may be unbounded. Further, many aspects of the browsing behavior of individual users can be approximated by log-normal distributions even though their aggregate behavior is scale-free. Finally, we show that users' click streams cannot be cleanly segmented into sessions using timeouts, affecting any attempt to model hypertext navigation using statistics of individual sessions. We propose a strictly logical definition of sessions based on browsing activity as revealed by referrer URLs; a user may have several active sessions in their click stream at any one time. 
We demonstrate that applying a timeout to these logical sessions affects their statistics to a lesser extent than a purely timeout-based mechanism." ] }
1307.1543
2951529892
Despite the services of sophisticated search engines like Google, there are a number of interesting information sources which are useful but largely inaccessible to current Web users. These information sources are often ad-hoc, location-specific and only useful for users over short periods of time, or relate to tacit knowledge of users or implicit knowledge in crowds. The solution presented in this paper addresses these problems by introducing an integrated concept of "location" and "presence" across the physical and virtual worlds, enabling ad-hoc socializing of users interested in, or looking for, similar information. While the definitions of location and presence in the physical world are straightforward - through a spatial location and vicinity at a certain point in time - their definitions in the virtual world are neither obvious nor trivial. Based on a detailed analysis we provide an integrated spatial model spanning both worlds which enables us to define the presence of users in a unified way. This integrated model allows us to enable ad-hoc socializing of users browsing the Web with users in the physical world specific to their joint information needs, and allows us to unlock the untapped information sources mentioned above. We describe a proof-of-concept implementation of our model and provide an empirical analysis based on real-world experiments.
Browsing and searching the Web is still primarily an isolated task. @cite_5 conducted a survey showing that collaborative browsing is crucial for many users, but currently requires them to revert to out-of-band channels such as phone, e-mail, or instant messaging. @cite_4 and @cite_26 are systems providing mechanisms for co-located collaboration, i.e., where several users gather around one computer. @cite_6 and @cite_10 extend this idea to collaborative browsing between users working on their own computers. COBS (COllaborative Browsing and Searching) @cite_0 proposes a browser extension providing a proof-of-concept implementation that allows users visiting the same site to communicate with each other.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_6", "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "2169942798", "2097559038", "2102958620", "2096463110", "2022555286", "1984315811" ], "abstract": [ "Web search is often viewed as a solitary task; however, there are many situations in which groups of people gather around a single computer to jointly search for information online. We present the findings of interviews with teachers, librarians, and developing world researchers that provide details about users' collaborative search habits in shared-computer settings, revealing several limitations of this practice. We then introduce CoSearch, a system we developed to improve the experience of co-located collaborative Web search by leveraging readily available devices such as mobile phones and extra mice. Finally, we present an evaluation comparing CoSearch to status quo collaboration approaches, and show that CoSearch enabled distributed control and division of labor, thus reducing the frustrations associated with shared-computer searches, while still preserving the positive aspects of communication and collaboration associated with joint computer use.", "Interactive tables can enhance small group colocated collaborative work in many domains. One application enabled by this new technology is copresent, collaborative search for digital content. For example, a group of students could sit around an interactive table and search for digital images to use in a report. We have developed TeamSearch, an application that enables this type of activity by supporting group specification of Boolean style queries. We explore whether TeamSearch should consider all group members' activities as contributing to a single query or should interpret them as separate, parallel search requests. The results reveal that both strategies are similarly efficient, but that collective query formation has advantages in terms of enhancing group collaboration and awareness, allowing users to bootstrap query specification skills, and personal preference. This suggests that team centric ills may offer benefits beyond the \"staples\" of efficiency and result quality that are usually considered when designing search interfaces.", "Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.", "Finding relevant and reliable information on the web is a non-trivial task. While internet search engines do find correct web pages with respect to a set of keywords, they often cannot ensure the relevance or reliability of their content. An emerging trend is to harness internet users in the spirit of Web 2.0, to discern and personalize relevant and reliable information. Users collaboratively search or browse for information, either directly by communicating or indirectly by adding meta information (e.g., tags) to web pages. 
While gaining much popularity, such approaches are bound to specific service providers, or the Web 2.0 sites providing the necessary features, and the knowledge so generated is also confined to, and subject to the whims and censorship of such providers. To overcome these limitations we introduce COBS, a browser-centric knowledge repository which enjoys the inherent openness (similar to Wikipedia) while aiming to provide end-users the freedom of personalization and privacy by adopting an eventually hybrid p2p back-end. In this paper we first present the COBS front-end, a browser add-on that enables users to tag, rate or comment arbitrary web pages and to socialize with others in both a synchronous and asynchronous manner. We then discuss how a decentralized back-end can be realized. While Distributed Hash Tables (DHTs) are the most natural choice, and despite a decade of research on DHT designs, we encounter several, some small, while others more fundamental shortcomings that need to be surmounted in order to realize an efficient, scalable and reliable decentralized back-end for COBS. To that end, we outline various design alternatives and discuss qualitatively (and quantitatively, when possible) their (dis-)advantages. We believe that the objectives of COBS are ambitious, posing significant challenges for distributed systems, middleware and distributed data-analytics research, even while building on the existing momentum. Based on experiences from our ongoing work on COBS, we outline these systems research issues in this position paper.", "Today's Web browsers provide limited support for rich information-seeking and information-sharing scenarios. A survey we conducted of 204 knowledge workers at a large technology company has revealed that a large proportion of users engage in searches that include collaborative activities. We present the results of the survey, and then review the implications of these findings for designing new Web search interfaces that provide tools for sharing.", "Modern enterprises are replete with numerous online processes. Many must be performed frequently and are tedious, while others are done less frequently yet are complex or hard to remember. We present interviews with knowledge workers that reveal a need for mechanisms to automate the execution of and to share knowledge about these processes. In response, we have developed the CoScripter system (formerly Koala [11]), a collaborative scripting environment for recording, automating, and sharing web-based processes. We have deployed CoScripter within a large corporation for more than 10 months. Through usage log analysis and interviews with users, we show that CoScripter has addressed many user automation and sharing needs, to the extent that more than 50 employees have voluntarily incorporated it into their work practice. We also present ways people have used CoScripter and general issues for tools that support automation and sharing of how-to knowledge." ] }
1307.1719
168364134
Traditional algorithms for detecting differences in source code focus on differences between lines. As such, little can be learned about abstract changes that occur over time within a project. Structural differencing on the program's abstract syntax tree reveals changes at the syntactic level within code, which allows us to further process the differences to understand their meaning. We propose that grouping of changes by some metric of similarity, followed by pattern extraction via antiunification will allow us to identify patterns of change within a software project from the sequence of changes contained within a Version Control System (VCS). Tree similarity metrics such as a tree edit distance can be used to group changes in order to identify groupings that may represent a single class of change (e.g., adding a parameter to a function call). By applying antiunification within each group we are able to generalize from families of concrete changes to patterns of structural change. Studying patterns of change at the structural level, instead of line-by-line, allows us to gain insight into the evolution of software.
The use of version control repositories as a source of data to study changes to code over time is not new, but our approach to the problem is novel. Neamtiu @cite_3 uses a similar approach of analyzing the abstract syntax tree of code in successive program versions, but focuses only on detecting occurrences of changes rather than going a step further to identify common patterns among them. Other groups have focused on identifying patterns based on common refactorings that can be detected in the code @cite_7 , and on inferring simple abstract rules that encapsulate the detected changes @cite_4 . For example, one such rule could indicate that an additional argument should be added to the argument list of all calls matching a certain pattern.
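For illustration, a minimal antiunification over toy s-expression-style trees (our own sketch; the node labels are hypothetical) shows how two concrete changes that differ in one call argument generalize to a single pattern with a shared variable:

```python
# Minimal first-order antiunification: identical structure is kept,
# mismatching subtrees are generalized to shared variables, yielding
# a pattern that covers both concrete trees.

def antiunify(t1, t2, table=None):
    if table is None:
        table = {}                        # maps (subtree1, subtree2) -> variable
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same node label and arity: antiunify children pointwise.
        return (t1[0],) + tuple(antiunify(a, b, table)
                                for a, b in zip(t1[1:], t2[1:]))
    # Mismatch: reuse one variable per distinct pair of subtrees.
    return table.setdefault((t1, t2), f"?x{len(table)}")

# Two hypothetical changes that each touch one argument of a call:
call_a = ("call", "open", ("args", "path_a", "READ"))
call_b = ("call", "open", ("args", "path_b", "READ"))
print(antiunify(call_a, call_b))
# -> ('call', 'open', ('args', '?x0', 'READ'))
```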
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_3" ], "mid": [ "2157836986", "2127811329", "2146957318" ], "abstract": [ "Mapping code elements in one version of a program to corresponding code elements in another version is a fundamental building block for many software engineering tools. Existing tools that match code elements or identify structural changes - refactorings and API changes - between two versions of a program have two limitations that we overcome. First, existing tools cannot easily disambiguate among many potential matches or refactoring candidates. Second, it is difficult to use these tools' results for various software engineering tasks due to an unstructured representation of results. To overcome these limitations, our approach represents structural changes as a set of high-level change rules, automatically infers likely change rules and determines method-level matches based on the rules. By applying our tool to several open source projects, we show that our tool identifies matches that are difficult to find using other approaches and produces more concise results than other approaches. Our representation can serve as a better basis for other software engineering tools.", "Software has been and is still mostly refactored without tool support. Moreover, as we found in our case studies, programmers tend not to document these changes as refactorings, or even worse label changes as refactorings, although they are not. In this paper we present a technique to detect changes that are likely to be refactorings and rank them according to the likelihood. The evaluation shows that the method has both a high recall and a high precision ? it finds most of the refactorings, and most of the found refactoring candidates are really refactorings.", "Mining software repositories at the source code level can provide a greater understanding of how software evolves. We present a tool for quickly comparing the source code of different versions of a C program. The approach is based on partial abstract syntax tree matching, and can track simple changes to global variables, types and functions. These changes can characterize aspects of software evolution useful for answering higher level questions. In particular, we consider how they could be used to inform the design of a dynamic software updating system. We report results based on measurements of various versions of popular open source programs. including BIND, OpenSSH, Apache, Vsftpd and the Linux kernel." ] }
1307.0814
1549741124
Human Mobility has attracted attention from different fields of study such as epidemic modeling, traffic engineering, traffic prediction and urban planning. In this survey we review major characteristics of human mobility studies, ranging from trajectory-based studies to studies using graph and network theory. In trajectory-based studies, statistical measures such as the jump length distribution and the radius of gyration are analyzed in order to investigate how people move in their daily life, and whether it is possible to model these individual movements and make predictions based on them. Using graphs in mobility studies helps to investigate the dynamic behavior of the system, such as diffusion and flow in the network, and makes it easier to estimate how much one part of the network influences another by using metrics like centrality measures. We aim to study population flow in transportation networks using mobility data to derive models and patterns, and to develop new applications in predicting phenomena such as congestion. Human Mobility studies with the new generation of mobility data provided by cellular phone networks raise new challenges such as data storing, data representation, data analysis and computational complexity. A comparative review of different data types used in current tools and applications of Human Mobility studies leads us to new approaches for dealing with the mentioned challenges.
One of the main purposes of studying human mobility in this context is to investigate the population flow between different places @cite_19 @cite_15 , with the aim of understanding and developing an optimal road network with efficient movement of traffic and minimal congestion problems. In this part, the main characteristics of traffic flows at the microscopic and macroscopic levels are described. In a microscopic approach to traffic, each individual is examined separately, while at the macroscopic level individuals are not treated as separate entities. The macroscopic level is more relevant to the dynamic description of traffic. Some macroscopic variables that translate the discrete nature of traffic into continuous variables are reviewed below:
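As a toy numerical illustration of those variables (the segment length and speeds below are made up), the following sketch computes density, space-mean speed, and flow, which are linked by the fundamental relation q = k * v:

```python
# Standard macroscopic traffic variables from simulated vehicle
# observations on a road segment:
#   density k    = vehicles per unit length,
#   mean speed v = space-mean of the vehicle speeds,
#   flow q       = k * v  (the fundamental relation of traffic flow).

segment_km = 2.0
speeds_kmh = [48.0, 55.0, 42.0, 60.0, 50.0]   # hypothetical vehicles on the segment

k = len(speeds_kmh) / segment_km              # veh/km
v = sum(speeds_kmh) / len(speeds_kmh)         # km/h (space-mean speed)
q = k * v                                     # veh/h

print(f"density k = {k:.1f} veh/km, mean speed v = {v:.1f} km/h, flow q = {q:.0f} veh/h")
```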
{ "cite_N": [ "@cite_19", "@cite_15" ], "mid": [ "2153811040", "2069343813" ], "abstract": [ "We report on our experience scaling up the Mobile Millennium traffic information system using cloud computing and the Spark cluster computing framework. Mobile Millennium uses machine learning to infer traffic conditions for large metropolitan areas from crowdsourced data, and Spark was specifically designed to support such applications. Many studies of cloud computing frameworks have demonstrated scalability and performance improvements for simple machine learning algorithms. Our experience implementing a real-world machine learning-based application corroborates such benefits, but we also encountered several challenges that have not been widely reported. These include: managing large parameter vectors, using memory efficiently, and integrating with the application's existing storage infrastructure. This paper describes these challenges and the changes they required in both the Spark framework and the Mobile Millennium software. While we focus on a system for traffic estimation, we believe that the lessons learned are applicable to other machine learning-based applications.", "The central points of communication network flow have often been identified using graph theoretical centrality measures. In real networks, the state of traffic density arises from an interplay between the dynamics of the flow and the underlying network structure. In this work we investigate the relationship between centrality measures and the density of traffic for some simple particle hopping models on networks with emerging scale-free degree distributions. We also study how the speed of the dynamics are affected by the underlying network structure. Among other conclusions, we find that, even at low traffic densities, the dynamical measure of traffic density (the occupation ratio) has a non-trivial dependence on the static centrality (quantified by \"betweenness centrality\"), where non-central vertices get a comparatively large portion of the traffic." ] }
1307.0814
1549741124
Human Mobility has attracted attention from different fields of study such as epidemic modeling, traffic engineering, traffic prediction and urban planning. In this survey we review major characteristics of human mobility studies, ranging from trajectory-based studies to studies using graph and network theory. In trajectory-based studies, statistical measures such as the jump length distribution and the radius of gyration are analyzed in order to investigate how people move in their daily life, and whether it is possible to model these individual movements and make predictions based on them. Using graphs in mobility studies helps to investigate the dynamic behavior of the system, such as diffusion and flow in the network, and makes it easier to estimate how much one part of the network influences another by using metrics like centrality measures. We aim to study population flow in transportation networks using mobility data to derive models and patterns, and to develop new applications in predicting phenomena such as congestion. Human Mobility studies with the new generation of mobility data provided by cellular phone networks raise new challenges such as data storing, data representation, data analysis and computational complexity. A comparative review of different data types used in current tools and applications of Human Mobility studies leads us to new approaches for dealing with the mentioned challenges.
In the preceding parts we presented different studies and findings about Human Mobility. Based on the reviewed findings, we distinguish three different baselines in Human Mobility studies. The first category consists of trajectory-based studies, which mainly analyze the trajectories of individuals. In this set of approaches, statistical measures such as jump length and radius of gyration are analyzed in order to find patterns and models for individual movement. The second category studies dynamic proximity networks of people in mobile ad hoc networks. In such a network, finding a route between two disconnected devices implies uncovering habits in human movements and patterns in their connectivity (frequencies of meetings, average duration of a contact, etc.), and exploiting them to predict future encounters @cite_26 . The third category of Human Mobility studies relates to investigating flow on networks such as road networks, transportation networks, and infrastructure networks. Figure 1 illustrates the three baselines that we have distinguished for Human Mobility studies.
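As a small illustration of the measures used in the first category, the following sketch (with made-up coordinates) computes the radius of gyration of a trajectory, r_g = sqrt((1/n) * sum_i ||r_i - r_cm||^2), where r_cm is the trajectory's center of mass:

```python
import numpy as np

# Radius of gyration of a trajectory: the root-mean-square distance
# of the visited locations from their center of mass.

points = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 1.0], [5.0, 4.0]])  # visited locations
center = points.mean(axis=0)
r_g = np.sqrt(((points - center) ** 2).sum(axis=1).mean())
print(f"radius of gyration: {r_g:.2f}")
```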
{ "cite_N": [ "@cite_26" ], "mid": [ "2060334710" ], "abstract": [ "Mobile ad hoc networks enable communications between clouds of mobile devices without the need for a preexisting infrastructure. One of their most interesting evolutions are opportunistic networks, whose goal is to also enable communication in disconnected environments, where the general absence of an end-to-end path between the sender and the receiver impairs communication when legacy MANET networking protocols are used. The key idea of OppNets is that the mobility of nodes helps the delivery of messages, because it may connect, asynchronously in time, otherwise disconnected subnetworks. This is especially true for networks whose nodes are mobile devices (e.g., smartphones and tablets) carried by human users, which is the typical OppNets scenario. In such a network where the movements of the communicating devices mirror those of their owners, finding a route between two disconnected devices implies uncovering habits in human movements and patterns in their connectivity (frequencies of meetings, average duration of a contact, etc.), and exploiting them to predict future encounters. Therefore, there is a challenge in studying human mobility, specifically in its application to OppNets research. In this article we review the state of the art in the field of human mobility analysis and present a survey of mobility models. We start by reviewing the most considerable findings regarding the nature of human movements, which we classify along the spatial, temporal, and social dimensions of mobility. We discuss the shortcomings of the existing knowledge about human movements and extend it with the notion of predictability and patterns. We then survey existing approaches to mobility modeling and fit them into a taxonomy that provides the basis for a discussion on open problems and further directions for research on modeling human mobility." ] }
1307.0814
1549741124
Human Mobility has attracted attention from different fields of study such as epidemic modeling, traffic engineering, traffic prediction and urban planning. In this survey we review major characteristics of human mobility studies, ranging from trajectory-based studies to studies using graph and network theory. In trajectory-based studies, statistical measures such as the jump length distribution and the radius of gyration are analyzed in order to investigate how people move in their daily life, and whether it is possible to model these individual movements and make predictions based on them. Using graphs in mobility studies helps to investigate the dynamic behavior of the system, such as diffusion and flow in the network, and makes it easier to estimate how much one part of the network influences another by using metrics like centrality measures. We aim to study population flow in transportation networks using mobility data to derive models and patterns, and to develop new applications in predicting phenomena such as congestion. Human Mobility studies with the new generation of mobility data provided by cellular phone networks raise new challenges such as data storing, data representation, data analysis and computational complexity. A comparative review of different data types used in current tools and applications of Human Mobility studies leads us to new approaches for dealing with the mentioned challenges.
Table: comparison of the surveyed mobility studies and their data types ( @cite_21 , @cite_42 , @cite_64 , @cite_17 , @cite_15 , @cite_38 , @cite_7 , @cite_16 , @cite_19 , @cite_23 , @cite_3 ).
{ "cite_N": [ "@cite_38", "@cite_64", "@cite_7", "@cite_21", "@cite_42", "@cite_3", "@cite_19", "@cite_23", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "2189139187", "2152204876", "1982300822", "2004602565", "2060840985", "1996573126", "2153811040", "2583508449", "2069343813", "1481716413", "2022780164" ], "abstract": [ "We consider the problem of estimating real-time traffic conditions from sparse, noisy GPS probe vehicle data. We specifically address arterial roads, which are also known as the secondary road network (highways are considered the primary road network). We consider several estimation problems: historical traffic patterns, real-time traffic conditions, and forecasting future traffic conditions. We assume that the data available for these estimation problems is a small set of sparsely traced vehicle trajectories, which represents a small fraction of the total vehicle flow through the network. We present an expectation maximization algorithm that simultaneously learns the likely paths taken by probe vehicles as well as the travel time distributions through the network. A case study using data from San Francisco taxis is used to illustrate the performance of the algorithm.", "The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people's travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability.", "This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.", "Mobile phone datasets allow for the analysis of human behavior on an unprecedented scale. 
The social network, temporal dynamics and mobile behavior of mobile phone users have often been analyzed independently from each other using mobile phone datasets. In this article, we explore the connections between various features of human behavior extracted from a large mobile phone dataset. Our observations are based on the analysis of communication data of 100,000 anonymized and randomly chosen individuals in a dataset of communications in Portugal. We show that clustering and principal component analysis allow for a significant dimension reduction with limited loss of information. The most important features are related to geographical location. In particular, we observe that most people spend most of their time at only a few locations. With the help of clustering methods, we then robustly identify home and office locations and compare the results with official census data. Finally, we analyze the geographic spread of users’ frequent locations and show that commuting distances can be reasonably well explained by a gravity model.", "The spread of infectious disease epidemics is mediated by human travel. Yet human mobility patterns vary substantially between countries and regions. Quantifying the frequency of travel and length of journeys in well-defined population is therefore critical for predicting the likely speed and pattern of spread of emerging infectious diseases, such as a new influenza pandemic. Here we present the results of a large population survey undertaken in 2007 in two areas of China: Shenzhen city in Guangdong province, and Huangshan city in Anhui province. In each area, 10,000 randomly selected individuals were interviewed, and data on regular and occasional journeys collected. Travel behaviour was examined as a function of age, sex, economic status and home location. Women and children were generally found to travel shorter distances than men. Travel patterns in the economically developed Shenzhen region are shown to resemble those in developed and economically advanced middle income countries with a significant fraction of the population commuting over distances in excess of 50 km. Conversely, in the less developed rural region of Anhui, travel was much more local, with very few journeys over 30 km. Travel patterns in both populations were well-fitted by a gravity model with a lognormal kernel function. The results provide the first quantitative information on human travel patterns in modern China, and suggest that a pandemic emerging in a less developed area of rural China might spread geographically sufficiently slowly for containment to be feasible, while spatial spread in the more economically developed areas might be expected to be much more rapid, making containment more difficult.", "Traffic delays and congestion are a major source of inefficiency, wasted fuel, and commuter frustration. Measuring and localizing these delays, and routing users around them, is an important step towards reducing the time people spend stuck in traffic. As others have noted, the proliferation of commodity smartphones that can provide location estimates using a variety of sensors---GPS, WiFi, and or cellular triangulation---opens up the attractive possibility of using position samples from drivers' phones to monitor traffic delays at a fine spatiotemporal granularity. This paper presents VTrack, a system for travel time estimation using this sensor data that addresses two key challenges: energy consumption and sensor unreliability. 
While GPS provides highly accurate location estimates, it has several limitations: some phones don't have GPS at all, the GPS sensor doesn't work in \"urban canyons\" (tall buildings and tunnels) or when the phone is inside a pocket, and the GPS on many phones is power-hungry and drains the battery quickly. In these cases, VTrack can use alternative, less energy-hungry but noisier sensors like WiFi to estimate both a user's trajectory and travel time along the route. VTrack uses a hidden Markov model (HMM)-based map matching scheme and travel time estimation method that interpolates sparse data to identify the most probable road segments driven by the user and to attribute travel times to those segments. We present experimental results from real drive data and WiFi access point sightings gathered from a deployment on several cars. We show that VTrack can tolerate significant noise and outages in these location estimates, and still successfully identify delay-prone segments, and provide accurate enough delays for delay-aware routing algorithms. We also study the best sampling strategies for WiFi and GPS sensors for different energy cost regimes.", "We report on our experience scaling up the Mobile Millennium traffic information system using cloud computing and the Spark cluster computing framework. Mobile Millennium uses machine learning to infer traffic conditions for large metropolitan areas from crowdsourced data, and Spark was specifically designed to support such applications. Many studies of cloud computing frameworks have demonstrated scalability and performance improvements for simple machine learning algorithms. Our experience implementing a real-world machine learning-based application corroborates such benefits, but we also encountered several challenges that have not been widely reported. These include: managing large parameter vectors, using memory efficiently, and integrating with the application's existing storage infrastructure. This paper describes these challenges and the changes they required in both the Spark framework and the Mobile Millennium software. While we focus on a system for traffic estimation, we believe that the lessons learned are applicable to other machine learning-based applications.", "Abstract Purpose — In this chapter, we will review several alternative methods of collecting data from mobile phones for human mobility analysis. We propose considering cellular network location data as a useful complementary source for human mobility research and provide case studies to illustrate the advantages and disadvantages of each method. Methodology approach — We briefly describe cellular phone network architecture and the location data it can provide, and discuss two types of data collection: active and passive localization. Active localization is something like a personal travel diary. It provides a tool for recording positioning data on a survey sample over a long period of time. Passive localization, on the other hand, is based on phone network data that are automatically recorded for technical or billing purposes. It offers the advantage of access to very large user populations for mobility flow analysis of a broad area. Findings — We review several alternative methods of collecting data from mobile phone for human mobility analysis to show that cellular network data, although limited in terms of location precision and recording frequency, offer two major advantages for studying human mobility. 
First, very large user samples – covering broad geographical areas – can be followed over a long period of time. Second, this type of data allows researchers to choose a specific data collection methodology (active or passive), depending on the objectives of their study. The big mobile phone localization datasets have provided a new impulse for the interdisciplinary research in human mobility. Originality value of chapter — We propose considering cellular network location data as a useful complementary source for transportation research and provide case studies to illustrate the advantages and disadvantages of each proposed method. Mobile phones have become a kind of “personal sensor” offering an ever-increasing amount of location data on mobile phone users over long time periods. These data can thus provide a framework for a comprehensive and longitudinal study of temporal dynamics, and can be used to capture ephemeral events and fluctuations in day-to-day mobility behavior offering powerful tools to transportation research, urban planning, or even real-time city monitoring.", "The central points of communication network flow have often been identified using graph theoretical centrality measures. In real networks, the state of traffic density arises from an interplay between the dynamics of the flow and the underlying network structure. In this work we investigate the relationship between centrality measures and the density of traffic for some simple particle hopping models on networks with emerging scale-free degree distributions. We also study how the speed of the dynamics are affected by the underlying network structure. Among other conclusions, we find that, even at low traffic densities, the dynamical measure of traffic density (the occupation ratio) has a non-trivial dependence on the static centrality (quantified by \"betweenness centrality\"), where non-central vertices get a comparatively large portion of the traffic.", "We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to workplace, going to do leisure activities and returning home. With the assumption that the individual has a constant travel speed and inferior limit of time at home and in work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution of the gyration radius. The analytical solution captures the empirical observation well.", "Modern technologies not only provide a variety of communication modes (e.g., texting, cell phone conversation, and online instant messaging), but also detailed electronic traces of these communications between individuals. These electronic traces indicate that the interactions occur in temporal bursts. Here, we study intercall duration of communications of the 100,000 most active cell phone users of a Chinese mobile phone operator. We confirm that the intercall durations follow a power-law distribution with an exponential cutoff at the population level but find differences when focusing on individual users. We apply statistical tests at the individual level and find that the intercall durations follow a power-law distribution for only 3,460 individuals (3.46 ). The intercall durations for the majority (73.34 ) follow a Weibull distribution. We quantify individual users using three measures: out-degree, percentage of outgoing calls, and communication diversity. 
We find that the cell phone users with a power-law duration distribution fall into three anomalous clusters: robot-based callers, telecom fraud, and telephone sales. This information is of interest to both academics and practitioners, mobile telecom operators in particular. In contrast, the individual users with a Weibull duration distribution form the fourth cluster of ordinary cell phone users. We also discover more information about the calling patterns of these four clusters (e.g., the probability that a user will call the cr-th most contact and the probability distribution of burst sizes). Our findings may enable a more detailed analysis of the huge body of data contained in the logs of massive users." ] }
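Several abstracts in the row above explain commuting flows with a gravity model, where flow between two regions grows with their populations and decays with distance. As an illustration only, a minimal Python sketch of such a model follows; the power-law deterrence function and all parameter values are assumptions chosen for the example, not values from the cited papers.

```python
def gravity_flow(pop_i, pop_j, distance_km, k=1.0, alpha=1.0, beta=1.0, gamma=2.0):
    """Estimated flow between two regions under a simple gravity model:
    flow ~ k * pop_i**alpha * pop_j**beta / distance**gamma.
    All exponents are illustrative defaults, not fitted values."""
    return k * (pop_i ** alpha) * (pop_j ** beta) / (distance_km ** gamma)

# Example: commuting flow between a large city and a nearby town.
print(gravity_flow(pop_i=2_000_000, pop_j=150_000, distance_km=30.0))
```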
1307.0814
1549741124
Human Mobility has attracted attention from different fields of study such as epidemic modeling, traffic engineering, traffic prediction and urban planning. In this survey we review major characteristics of human mobility studies, ranging from trajectory-based studies to studies using graph and network theory. In trajectory-based studies, statistical measures such as the jump length distribution and the radius of gyration are analyzed in order to investigate how people move in their daily life, and whether it is possible to model these individual movements and make predictions based on them. Using graphs in mobility studies helps to investigate the dynamic behavior of the system, such as diffusion and flow in the network, and makes it easier to estimate how much one part of the network influences another by using metrics like centrality measures. We aim to study population flow in transportation networks using mobility data to derive models and patterns, and to develop new applications in predicting phenomena such as congestion. Human mobility studies with the new generation of mobility data provided by cellular phone networks raise new challenges such as data storage, data representation, data analysis and computational complexity. A comparative review of different data types used in current tools and applications of human mobility studies leads us to new approaches for dealing with the mentioned challenges.
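The survey abstract above names the radius of gyration as one of the standard trajectory statistics. A minimal sketch of how it is commonly computed from recorded positions, treating coordinates as planar for simplicity (an approximation):

```python
import math

def radius_of_gyration(points):
    """Root-mean-square distance of visited points from their center of mass.
    `points` is a list of (x, y) positions in consistent planar units."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    msd = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n
    return math.sqrt(msd)

# Example trajectory: mostly home/work positions plus one longer trip.
print(radius_of_gyration([(0, 0), (0, 0), (5, 2), (5, 2), (40, 10)]))
```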
@cite_23, @cite_7, @cite_19, @cite_21, @cite_37, @cite_19, @cite_19, @cite_58, @cite_7, @cite_64, @cite_57
{ "cite_N": [ "@cite_37", "@cite_64", "@cite_7", "@cite_21", "@cite_57", "@cite_19", "@cite_23", "@cite_58" ], "mid": [ "", "2152204876", "1982300822", "2004602565", "", "2153811040", "2583508449", "1170115397" ], "abstract": [ "", "The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people's travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability.", "This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.", "Mobile phone datasets allow for the analysis of human behavior on an unprecedented scale. The social network, temporal dynamics and mobile behavior of mobile phone users have often been analyzed independently from each other using mobile phone datasets. In this article, we explore the connections between various features of human behavior extracted from a large mobile phone dataset. Our observations are based on the analysis of communication data of 100,000 anonymized and randomly chosen individuals in a dataset of communications in Portugal. We show that clustering and principal component analysis allow for a significant dimension reduction with limited loss of information. The most important features are related to geographical location. In particular, we observe that most people spend most of their time at only a few locations. With the help of clustering methods, we then robustly identify home and office locations and compare the results with official census data. 
Finally, we analyze the geographic spread of users’ frequent locations and show that commuting distances can be reasonably well explained by a gravity model.", "", "We report on our experience scaling up the Mobile Millennium traffic information system using cloud computing and the Spark cluster computing framework. Mobile Millennium uses machine learning to infer traffic conditions for large metropolitan areas from crowdsourced data, and Spark was specifically designed to support such applications. Many studies of cloud computing frameworks have demonstrated scalability and performance improvements for simple machine learning algorithms. Our experience implementing a real-world machine learning-based application corroborates such benefits, but we also encountered several challenges that have not been widely reported. These include: managing large parameter vectors, using memory efficiently, and integrating with the application's existing storage infrastructure. This paper describes these challenges and the changes they required in both the Spark framework and the Mobile Millennium software. While we focus on a system for traffic estimation, we believe that the lessons learned are applicable to other machine learning-based applications.", "Abstract Purpose — In this chapter, we will review several alternative methods of collecting data from mobile phones for human mobility analysis. We propose considering cellular network location data as a useful complementary source for human mobility research and provide case studies to illustrate the advantages and disadvantages of each method. Methodology approach — We briefly describe cellular phone network architecture and the location data it can provide, and discuss two types of data collection: active and passive localization. Active localization is something like a personal travel diary. It provides a tool for recording positioning data on a survey sample over a long period of time. Passive localization, on the other hand, is based on phone network data that are automatically recorded for technical or billing purposes. It offers the advantage of access to very large user populations for mobility flow analysis of a broad area. Findings — We review several alternative methods of collecting data from mobile phone for human mobility analysis to show that cellular network data, although limited in terms of location precision and recording frequency, offer two major advantages for studying human mobility. First, very large user samples – covering broad geographical areas – can be followed over a long period of time. Second, this type of data allows researchers to choose a specific data collection methodology (active or passive), depending on the objectives of their study. The big mobile phone localization datasets have provided a new impulse for the interdisciplinary research in human mobility. Originality value of chapter — We propose considering cellular network location data as a useful complementary source for transportation research and provide case studies to illustrate the advantages and disadvantages of each proposed method. Mobile phones have become a kind of “personal sensor” offering an ever-increasing amount of location data on mobile phone users over long time periods. 
These data can thus provide a framework for a comprehensive and longitudinal study of temporal dynamics, and can be used to capture ephemeral events and fluctuations in day-to-day mobility behavior offering powerful tools to transportation research, urban planning, or even real-time city monitoring.", "" ] }
1307.0814
1549741124
Human Mobility has attracted attention from different fields of study such as epidemic modeling, traffic engineering, traffic prediction and urban planning. In this survey we review major characteristics of human mobility studies, ranging from trajectory-based studies to studies using graph and network theory. In trajectory-based studies, statistical measures such as the jump length distribution and the radius of gyration are analyzed in order to investigate how people move in their daily life, and whether it is possible to model these individual movements and make predictions based on them. Using graphs in mobility studies helps to investigate the dynamic behavior of the system, such as diffusion and flow in the network, and makes it easier to estimate how much one part of the network influences another by using metrics like centrality measures. We aim to study population flow in transportation networks using mobility data to derive models and patterns, and to develop new applications in predicting phenomena such as congestion. Human mobility studies with the new generation of mobility data provided by cellular phone networks raise new challenges such as data storage, data representation, data analysis and computational complexity. A comparative review of different data types used in current tools and applications of human mobility studies leads us to new approaches for dealing with the mentioned challenges.
@cite_2, @cite_5, @cite_35, @cite_23, @cite_15, @cite_15, @cite_0, @cite_21, @cite_13, @cite_21
{ "cite_N": [ "@cite_35", "@cite_21", "@cite_0", "@cite_23", "@cite_2", "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "1964717191", "2004602565", "2056284729", "2583508449", "2016674662", "2090978188", "2069343813", "" ], "abstract": [ "Anonymous location data from cellular phone networks sheds light on how people move around on a large scale.", "Mobile phone datasets allow for the analysis of human behavior on an unprecedented scale. The social network, temporal dynamics and mobile behavior of mobile phone users have often been analyzed independently from each other using mobile phone datasets. In this article, we explore the connections between various features of human behavior extracted from a large mobile phone dataset. Our observations are based on the analysis of communication data of 100,000 anonymized and randomly chosen individuals in a dataset of communications in Portugal. We show that clustering and principal component analysis allow for a significant dimension reduction with limited loss of information. The most important features are related to geographical location. In particular, we observe that most people spend most of their time at only a few locations. With the help of clustering methods, we then robustly identify home and office locations and compare the results with official census data. Finally, we analyze the geographic spread of users’ frequent locations and show that commuting distances can be reasonably well explained by a gravity model.", "The website wheresgeorge.com invites its users to enter the serial numbers of their US dollar bills and track them across America and beyond. Why? “For fun and because it had not been done yet”, they say. But the dataset accumulated since December 1998 has provided the ideal raw material to test the mathematical laws underlying human travel, and that has important implications for the epidemiology of infectious diseases. Analysis of the trajectories of over half a million dollar bills shows that human dispersal is described by a ‘two-parameter continuous-time random walk’ model: our travel habits conform to a type of random proliferation known as ‘superdiffusion’. And with that much established, it should soon be possible to develop a new class of models to account for the spread of human disease. The dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales. It can synchronize populations of interacting species, stabilize them, and diversify gene pools1,2,3. Human travel, for example, is responsible for the geographical spread of human infectious disease4,5,6,7,8,9. In the light of increasing international trade, intensified human mobility and the imminent threat of an influenza A epidemic10, the knowledge of dynamical and statistical properties of human travel is of fundamental importance. Despite its crucial role, a quantitative assessment of these properties on geographical scales remains elusive, and the assumption that humans disperse diffusively still prevails in models. Here we report on a solid and quantitative assessment of human travelling statistics by analysing the circulation of bank notes in the United States. Using a comprehensive data set of over a million individual displacements, we find that dispersal is anomalous in two ways. First, the distribution of travelling distances decays as a power law, indicating that trajectories of bank notes are reminiscent of scale-free random walks known as Levy flights. 
Second, the probability of remaining in a small, spatially confined region for a time T is dominated by algebraically long tails that attenuate the superdiffusive spread. We show that human travelling behaviour can be described mathematically on many spatiotemporal scales by a two-parameter continuous-time random walk model to a surprising accuracy, and conclude that human travel on geographical scales is an ambivalent and effectively superdiffusive process.", "Abstract Purpose — In this chapter, we will review several alternative methods of collecting data from mobile phones for human mobility analysis. We propose considering cellular network location data as a useful complementary source for human mobility research and provide case studies to illustrate the advantages and disadvantages of each method. Methodology approach — We briefly describe cellular phone network architecture and the location data it can provide, and discuss two types of data collection: active and passive localization. Active localization is something like a personal travel diary. It provides a tool for recording positioning data on a survey sample over a long period of time. Passive localization, on the other hand, is based on phone network data that are automatically recorded for technical or billing purposes. It offers the advantage of access to very large user populations for mobility flow analysis of a broad area. Findings — We review several alternative methods of collecting data from mobile phone for human mobility analysis to show that cellular network data, although limited in terms of location precision and recording frequency, offer two major advantages for studying human mobility. First, very large user samples – covering broad geographical areas – can be followed over a long period of time. Second, this type of data allows researchers to choose a specific data collection methodology (active or passive), depending on the objectives of their study. The big mobile phone localization datasets have provided a new impulse for the interdisciplinary research in human mobility. Originality value of chapter — We propose considering cellular network location data as a useful complementary source for transportation research and provide case studies to illustrate the advantages and disadvantages of each proposed method. Mobile phones have become a kind of “personal sensor” offering an ever-increasing amount of location data on mobile phone users over long time periods. These data can thus provide a framework for a comprehensive and longitudinal study of temporal dynamics, and can be used to capture ephemeral events and fluctuations in day-to-day mobility behavior offering powerful tools to transportation research, urban planning, or even real-time city monitoring.", "Among the realistic ingredients to be considered in the computational modeling of infectious diseases, human mobility represents a crucial challenge both on the theoretical side and in view of the limited availability of empirical data. To study the interplay between short-scale commuting flows and long-range airline traffic in shaping the spatiotemporal pattern of a global epidemic we (i) analyze mobility data from 29 countries around the world and find a gravity model able to provide a global description of commuting patterns up to 300 kms and (ii) integrate in a worldwide-structured metapopulation epidemic model a timescale-separation technique for evaluating the force of infection due to multiscale mobility processes in the disease dynamics. 
Commuting flows are found, on average, to be one order of magnitude larger than airline flows. However, their introduction into the worldwide model shows that the large-scale pattern of the simulated epidemic exhibits only small variations with respect to the baseline case where only airline traffic is considered. The presence of short-range mobility increases, however, the synchronization of subpopulations in close proximity and affects the epidemic behavior at the periphery of the airline transportation infrastructure. The present approach outlines the possibility for the definition of layered computational approaches where different modeling assumptions and granularities can be used consistently in a unifying multiscale framework.", "A parameter-free model predicts patterns of commuting, phone calls and trade using only population density at all intermediate points.", "The central points of communication network flow have often been identified using graph theoretical centrality measures. In real networks, the state of traffic density arises from an interplay between the dynamics of the flow and the underlying network structure. In this work we investigate the relationship between centrality measures and the density of traffic for some simple particle hopping models on networks with emerging scale-free degree distributions. We also study how the speed of the dynamics are affected by the underlying network structure. Among other conclusions, we find that, even at low traffic densities, the dynamical measure of traffic density (the occupation ratio) has a non-trivial dependence on the static centrality (quantified by \"betweenness centrality\"), where non-central vertices get a comparatively large portion of the traffic.", "" ] }
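One abstract in the block above relates traffic density to betweenness centrality. As a sketch of how that static measure is typically computed, here is a toy example using networkx; the graph topology is illustrative only.

```python
import networkx as nx

# Toy road-like network; the topology is invented for illustration.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d"), ("d", "e")])

# Betweenness: fraction of shortest paths passing through each node.
centrality = nx.betweenness_centrality(G, normalized=True)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```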
1307.0814
1549741124
Human Mobility has attracted attention from different fields of study such as epidemic modeling, traffic engineering, traffic prediction and urban planning. In this survey we review major characteristics of human mobility studies, ranging from trajectory-based studies to studies using graph and network theory. In trajectory-based studies, statistical measures such as the jump length distribution and the radius of gyration are analyzed in order to investigate how people move in their daily life, and whether it is possible to model these individual movements and make predictions based on them. Using graphs in mobility studies helps to investigate the dynamic behavior of the system, such as diffusion and flow in the network, and makes it easier to estimate how much one part of the network influences another by using metrics like centrality measures. We aim to study population flow in transportation networks using mobility data to derive models and patterns, and to develop new applications in predicting phenomena such as congestion. Human mobility studies with the new generation of mobility data provided by cellular phone networks raise new challenges such as data storage, data representation, data analysis and computational complexity. A comparative review of different data types used in current tools and applications of human mobility studies leads us to new approaches for dealing with the mentioned challenges.
@cite_51, @cite_57, @cite_8, @cite_16, @cite_58, @cite_53, @cite_13, @cite_3, @cite_7, @cite_19, @cite_16, @cite_5, @cite_42
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_8", "@cite_53", "@cite_42", "@cite_16", "@cite_3", "@cite_57", "@cite_19", "@cite_5", "@cite_58", "@cite_51" ], "mid": [ "", "1982300822", "2017955130", "2115240023", "2060840985", "1481716413", "1996573126", "", "2153811040", "2090978188", "1170115397", "1970238092" ], "abstract": [ "", "This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.", "Whilst being hailed as the remedy to the world’s ills, cities will need to adapt in the 21st century. In particular, the role of public transport is likely to increase significantly, and new methods and technics to better plan transit systems are in dire need. This paper examines one fundamental aspect of transit: network centrality. By applying the notion of betweenness centrality to 28 worldwide metro systems, the main goal of this paper is to study the emergence of global trends in the evolution of centrality with network size and examine several individual systems in more detail. Betweenness was notably found to consistently become more evenly distributed with size (i.e. no “winner takes all”) unlike other complex network properties. Two distinct regimes were also observed that are representative of their structure. Moreover, the share of betweenness was found to decrease in a power law with size (with exponent 1 for the average node), but the share of most central nodes decreases much slower than least central nodes (0.87 vs. 2.48). Finally the betweenness of individual stations in several systems were examined, which can be useful to locate stations where passengers can be redistributed to relieve pressure from overcrowded stations. Overall, this study offers significant insights that can help planners in their task to design the systems of tomorrow, and similar undertakings can easily be imagined to other urban infrastructure systems (e.g., electricity grid, water wastewater system, etc.) to develop more sustainable cities.", "We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95 of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1 10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.", "The spread of infectious disease epidemics is mediated by human travel. Yet human mobility patterns vary substantially between countries and regions. 
Quantifying the frequency of travel and length of journeys in well-defined population is therefore critical for predicting the likely speed and pattern of spread of emerging infectious diseases, such as a new influenza pandemic. Here we present the results of a large population survey undertaken in 2007 in two areas of China: Shenzhen city in Guangdong province, and Huangshan city in Anhui province. In each area, 10,000 randomly selected individuals were interviewed, and data on regular and occasional journeys collected. Travel behaviour was examined as a function of age, sex, economic status and home location. Women and children were generally found to travel shorter distances than men. Travel patterns in the economically developed Shenzhen region are shown to resemble those in developed and economically advanced middle income countries with a significant fraction of the population commuting over distances in excess of 50 km. Conversely, in the less developed rural region of Anhui, travel was much more local, with very few journeys over 30 km. Travel patterns in both populations were well-fitted by a gravity model with a lognormal kernel function. The results provide the first quantitative information on human travel patterns in modern China, and suggest that a pandemic emerging in a less developed area of rural China might spread geographically sufficiently slowly for containment to be feasible, while spatial spread in the more economically developed areas might be expected to be much more rapid, making containment more difficult.", "We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to workplace, going to do leisure activities and returning home. With the assumption that the individual has a constant travel speed and inferior limit of time at home and in work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution of the gyration radius. The analytical solution captures the empirical observation well.", "Traffic delays and congestion are a major source of inefficiency, wasted fuel, and commuter frustration. Measuring and localizing these delays, and routing users around them, is an important step towards reducing the time people spend stuck in traffic. As others have noted, the proliferation of commodity smartphones that can provide location estimates using a variety of sensors---GPS, WiFi, and or cellular triangulation---opens up the attractive possibility of using position samples from drivers' phones to monitor traffic delays at a fine spatiotemporal granularity. This paper presents VTrack, a system for travel time estimation using this sensor data that addresses two key challenges: energy consumption and sensor unreliability. While GPS provides highly accurate location estimates, it has several limitations: some phones don't have GPS at all, the GPS sensor doesn't work in \"urban canyons\" (tall buildings and tunnels) or when the phone is inside a pocket, and the GPS on many phones is power-hungry and drains the battery quickly. In these cases, VTrack can use alternative, less energy-hungry but noisier sensors like WiFi to estimate both a user's trajectory and travel time along the route. VTrack uses a hidden Markov model (HMM)-based map matching scheme and travel time estimation method that interpolates sparse data to identify the most probable road segments driven by the user and to attribute travel times to those segments. 
We present experimental results from real drive data and WiFi access point sightings gathered from a deployment on several cars. We show that VTrack can tolerate significant noise and outages in these location estimates, and still successfully identify delay-prone segments, and provide accurate enough delays for delay-aware routing algorithms. We also study the best sampling strategies for WiFi and GPS sensors for different energy cost regimes.", "", "We report on our experience scaling up the Mobile Millennium traffic information system using cloud computing and the Spark cluster computing framework. Mobile Millennium uses machine learning to infer traffic conditions for large metropolitan areas from crowdsourced data, and Spark was specifically designed to support such applications. Many studies of cloud computing frameworks have demonstrated scalability and performance improvements for simple machine learning algorithms. Our experience implementing a real-world machine learning-based application corroborates such benefits, but we also encountered several challenges that have not been widely reported. These include: managing large parameter vectors, using memory efficiently, and integrating with the application's existing storage infrastructure. This paper describes these challenges and the changes they required in both the Spark framework and the Mobile Millennium software. While we focus on a system for traffic estimation, we believe that the lessons learned are applicable to other machine learning-based applications.", "A parameter-free model predicts patterns of commuting, phone calls and trade using only population density at all intermediate points.", "", "In this paper, we combine the most complete record of daily mobility, based on large-scale mobile phone data, with detailed Geographic Information System (GIS) data, uncovering previously hidden patterns in urban road usage. We find that the major usage of each road segment can be traced to its own - surprisingly few - driver sources. Based on this finding we propose a network of road usage by defining a bipartite network framework, demonstrating that in contrast to traditional approaches, which define road importance solely by topological measures, the role of a road segment depends on both: its betweeness and its degree in the road usage network. Moreover, our ability to pinpoint the few driver sources contributing to the major traffic flow allows us to create a strategy that achieves a significant reduction of the travel time across the entire road system, compared to a benchmark approach." ] }
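The parameter-free commuting model mentioned in the block above predicts flows from population alone. Assuming it refers to the radiation model, a minimal sketch of its flux formula is given below; the example numbers are invented for illustration.

```python
def radiation_flow(T_i, m_i, n_j, s_ij):
    """Radiation-model estimate of the flow from region i to region j.
    T_i: total outgoing trips from i; m_i, n_j: populations of i and j;
    s_ij: population within the circle of radius dist(i, j) around i,
    excluding i and j themselves."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# Example with invented populations and trip counts.
print(radiation_flow(T_i=10_000, m_i=500_000, n_j=80_000, s_ij=200_000))
```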
1307.0794
2001903467
Two main trends in today's internet are of major interest for video streaming services: most content delivery platforms are converging towards adaptive video streaming over HTTP, and new network architectures allow caching at intermediate points within the network. We investigate one of the most popular streaming services in terms of rate adaptation and opportunistic caching. Our experimental study shows that the streaming client's rate selection trajectory, i.e., the set of selected segments of varied bit rates which constitute a complete video, is not repetitive across separate downloads. Also, the involvement of caching can lead to frequent alternation between cache and server when serving the client's requests for video segments. These observations warrant caution in rate adaptation algorithm design and trigger our analysis to characterize the performance of in-network caching for HTTP streaming. Our analytic results show: (i) a significant degradation of cache hit rate for adaptive streaming under a typical file popularity distribution in today's internet; (ii) as a result of the (usually) higher throughput of the client-cache connection compared to the client-server one, cache-server oscillations occur due to misjudgments of the rate adaptation algorithm. Finally, we introduce DASH-INC, a framework for improved video streaming in caching networks that includes transcoding and multiple throughput estimation.
Much recent work has considered different aspects of video streaming and rate adaptation. Adhikari performed measurements to uncover and evaluate Netflix @cite_0 . In @cite_20 the authors did a performance study of dynamic rate adaptation for several streaming services, including Netflix, Hulu and Vudu. They identified a "downward spiral effect" - a dramatic anomalous drop in the video playback rate. This illustrates the interaction of the rate adaptation mechanism with the underlying network. As we will see, similar interactions between the streaming rate and the network layer are exacerbated by inserting in-network caching. The interaction of video streaming with TCP was studied (and to some extent, solved) in @cite_13 .
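To make the feedback loop behind the downward spiral concrete, here is a minimal sketch of a throughput-estimate-driven bitrate picker of the generic kind such clients use; the bitrate ladder, safety margin, and smoothing constant are illustrative assumptions, not the algorithm of any specific service.

```python
BITRATES_KBPS = [235, 375, 560, 750, 1050, 1750, 2350, 3000]  # illustrative ladder

def pick_bitrate(throughput_estimate_kbps, safety=0.8):
    """Highest bitrate below a safety margin of the estimated throughput."""
    feasible = [b for b in BITRATES_KBPS if b <= safety * throughput_estimate_kbps]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

def update_estimate(old_estimate, measured_kbps, alpha=0.3):
    """EWMA over per-segment throughput samples. Underestimates after slow
    or filtered segments drag the selected rate down (the 'downward spiral')."""
    return (1 - alpha) * old_estimate + alpha * measured_kbps

est = update_estimate(3000, measured_kbps=800)  # one anomalously slow segment
print(pick_bitrate(est))                        # estimate pulls the rate down
```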
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_20" ], "mid": [ "2158542147", "198328412", "2115363289" ], "abstract": [ "Netflix is the leading provider of on-demand Internet video streaming in the US and Canada, accounting for 29.7 of the peak downstream traffic in US. Understanding the Netflix architecture and its performance can shed light on how to best optimize its design as well as on the design of similar on-demand streaming services. In this paper, we perform a measurement study of Netflix to uncover its architecture and service strategy. We find that Netflix employs a blend of data centers and Content Delivery Networks (CDNs) for content distribution. We also perform active measurements of the three CDNs employed by Netflix to quantify the video delivery bandwidth available to users across the US. Finally, as improvements to Netflix's current CDN assignment strategy, we propose a measurement-based adaptive CDN selection strategy and a multiple-CDN-based video delivery strategy, and demonstrate their potentials in significantly increasing user's average bandwidth.", "YouTube traffic is bursty. These bursts trigger packet losses and stress router queues, causing TCP's congestion-control algorithm to kick in. In this paper, we introduce Trickle, a server-side mechanism that uses TCP to rate limit YouTube video streaming. Trickle paces the video stream by placing an upper bound on TCP's congestion window as a function of the streaming rate and the round-trip time. We evaluated Trickle on YouTube production data centers in Europe and India and analyzed its impact on losses, bandwidth, RTT, and video buffer under-run events. The results show that Trickle reduces the average TCP loss rate by up to 43 and the average RTT by up to 28 while maintaining the streaming rate requested by the application.", "Today's commercial video streaming services use dynamic rate selection to provide a high-quality user experience. Most services host content on standard HTTP servers in CDNs, so rate selection must occur at the client. We measure three popular video streaming services -- Hulu, Netflix, and Vudu -- and find that accurate client-side bandwidth estimation above the HTTP layer is hard. As a result, rate selection based on inaccurate estimates can trigger a feedback loop, leading to undesirably variable and low-quality video. We call this phenomenon the \"downward spiral effect\", and we measure it on all three services, present insights into its root causes, and validate initial solutions to prevent it." ] }
1307.0794
2001903467
Two main trends in today's internet are of major interest for video streaming services: most content delivery platforms are converging towards adaptive video streaming over HTTP, and new network architectures allow caching at intermediate points within the network. We investigate one of the most popular streaming services in terms of rate adaptation and opportunistic caching. Our experimental study shows that the streaming client's rate selection trajectory, i.e., the set of selected segments of varied bit rates which constitute a complete video, is not repetitive across separate downloads. Also, the involvement of caching can lead to frequent alternation between cache and server when serving the client's requests for video segments. These observations warrant caution in rate adaptation algorithm design and trigger our analysis to characterize the performance of in-network caching for HTTP streaming. Our analytic results show: (i) a significant degradation of cache hit rate for adaptive streaming under a typical file popularity distribution in today's internet; (ii) as a result of the (usually) higher throughput of the client-cache connection compared to the client-server one, cache-server oscillations occur due to misjudgments of the rate adaptation algorithm. Finally, we introduce DASH-INC, a framework for improved video streaming in caching networks that includes transcoding and multiple throughput estimation.
ICNs @cite_6 (e.g., CCN @cite_3 or DONA @cite_17 ) were proposed to facilitate the distribution of content, and of video in particular. In ICNs, caching becomes part of the network service: all nodes potentially have caches (on-path caching). A request for a specific object can be satisfied by any node holding a copy in its cache.
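A minimal sketch of the on-path caching behavior just described: each node serves a named-object request from its cache when possible, otherwise forwards it upstream and opportunistically caches the returned copy. The class names and structure are illustrative, not any specific ICN protocol.

```python
class CacheNode:
    def __init__(self, name, upstream, capacity=100):
        self.name = name
        self.upstream = upstream  # next node towards the origin server
        self.capacity = capacity
        self.store = {}           # object name -> content

    def request(self, obj_name):
        if obj_name in self.store:                 # cache hit: served locally
            return self.store[obj_name]
        content = self.upstream.request(obj_name)  # miss: forward upstream
        if len(self.store) < self.capacity:        # opportunistically cache copy
            self.store[obj_name] = content
        return content

class OriginServer:
    def request(self, obj_name):
        return f"content-of-{obj_name}"

edge = CacheNode("edge", upstream=CacheNode("core", upstream=OriginServer()))
print(edge.request("video/seg1"))  # fetched via core and origin, then cached
print(edge.request("video/seg1"))  # now a hit at the edge
```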
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_17" ], "mid": [ "", "2109983959", "2168903090" ], "abstract": [ "", "The information-centric networking (ICN) concept is a significant common approach of several future Internet research activities. The approach leverages in-network caching, multiparty communication through replication, and interaction models decoupling senders and receivers. The goal is to provide a network infrastructure service that is better suited to today?s use (in particular. content distribution and mobility) and more resilient to disruptions and failures. The ICN approach is being explored by a number of research projects. We compare and discuss design choices and features of proposed ICN architectures, focusing on the following main components: named data objects, naming and security, API, routing and transport, and caching. We also discuss the advantages of the ICN approach in general.", "The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution." ] }
1307.0794
2001903467
Two main trends in today's internet are of major interest for video streaming services: most content delivery platforms are converging towards adaptive video streaming over HTTP, and new network architectures allow caching at intermediate points within the network. We investigate one of the most popular streaming services in terms of rate adaptation and opportunistic caching. Our experimental study shows that the streaming client's rate selection trajectory, i.e., the set of selected segments of varied bit rates which constitute a complete video, is not repetitive across separate downloads. Also, the involvement of caching can lead to frequent alternation between cache and server when serving the client's requests for video segments. These observations warrant caution in rate adaptation algorithm design and trigger our analysis to characterize the performance of in-network caching for HTTP streaming. Our analytic results show: (i) a significant degradation of cache hit rate for adaptive streaming under a typical file popularity distribution in today's internet; (ii) as a result of the (usually) higher throughput of the client-cache connection compared to the client-server one, cache-server oscillations occur due to misjudgments of the rate adaptation algorithm. Finally, we introduce DASH-INC, a framework for improved video streaming in caching networks that includes transcoding and multiple throughput estimation.
Recently, Fayazbakhsh discussed an incremental deployment of an ICN @cite_7 , where caching happens only at the client's edge. This is the simple abstracted model we adopt in our paper: we consider a client connected to a local cache which can deliver some content and some enhanced, accelerated services, or can forward content requests onward to an origin server. This is similar to current HTTP proxies and caches @cite_14 , and our results have applicability beyond ICNs to such systems of web proxies. The authors of @cite_9 @cite_1 specified an architecture in which this edge cache is distributed over an access network domain and managed by a modified, content-aware SDN controller. This controller enables content routing over an IP network for HTTP requests. We leverage this architecture for our evaluation.
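Since the adopted model behaves much like a conventional HTTP proxy cache, a small LRU cache sketch illustrates the assumed edge behavior; the eviction policy and capacity here are illustrative choices, not part of the cited architecture.

```python
from collections import OrderedDict

class LruProxyCache:
    """Edge cache in front of an origin; least-recently-used eviction."""
    def __init__(self, fetch_from_origin, capacity=3):
        self.fetch = fetch_from_origin
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, url):
        if url in self.items:
            self.items.move_to_end(url)      # refresh recency on a hit
            return self.items[url]
        body = self.fetch(url)               # miss: go to the origin server
        self.items[url] = body
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the least recently used
        return body

cache = LruProxyCache(lambda url: f"body-of-{url}")
cache.get("/a"); cache.get("/b"); cache.get("/a")  # "/a" is now most recent
```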
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_1", "@cite_7" ], "mid": [ "156301160", "2127768834", "", "2156104365" ], "abstract": [ "Information-Centric Networks place content as the narrow waist of the network architecture. This allows to route based upon the content name, and not based upon the locations of the content consumer and producer. However, current Internet architecture does not support content routing at the network layer. We present ContentFlow, an Information-Centric network architecture which supports content routing by mapping the content name to an IP flow, and thus enables the use of OpenFlow switches to achieve content routing over a legacy IP architecture. ContentFlow is viewed as an evolutionary step between the current IP networking architecture, and a full fledged ICN architecture. It supports content management, content caching and content routing at the network layer, while using a legacy OpenFlow infrastructure and a modified controller. In particular, ContentFlow is transparent from the point of view of the client and the server, and can be inserted in between with no modification at either end. We have implemented ContentFlow and describe our implementation choices as well as the overall architecture specification. We evaluate the performance of ContentFlow in our testbed.", "While algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale World Wide Web environment. This paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. With our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. With our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. Overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. We also use our model to examine the implications of future trends in Web-access behavior and traffic.", "", "Information-Centric Networking (ICN) has seen a significant resurgence in recent years. ICN promises benefits to users and service providers along several dimensions (e.g., performance, security, and mobility). These benefits, however, come at a non-trivial cost as many ICN proposals envision adding significant complexity to the network by having routers serve as content caches and support nearest-replica routing. This paper is driven by the simple question of whether this additional complexity is justified and if we can achieve these benefits in an incrementally deployable fashion. To this end, we use trace-driven simulations to analyze the quantitative benefits attributed to ICN (e.g., lower latency and congestion). Somewhat surprisingly, we find that pervasive caching and nearest-replica routing are not fundamentally necessary---most of the performance benefits can be achieved with simpler caching architectures. We also discuss how the qualitative benefits of ICN (e.g., security, mobility) can be achieved without any changes to the network. Building on these insights, we present a proof-of-concept design of an incrementally deployable ICN architecture." ] }
1307.1151
1532168148
We provide conditions for exact reconstruction of a bandlimited function from irregular polar samples of its Radon transform. First, we prove that the Radon transform is a continuous L2-operator for certain classes of bandlimited signals. We then show that the Beurling-Malliavin condition for the radial sampling density ensures existence and uniqueness of a solution. Moreover, Jaffard's density condition is sufficient for stable reconstruction.
For efficient experimental design of CT scans, i.e., determination of a suitable sampling geometry or a posteriori choice of function spaces for reconstruction, it is essential to understand the discretization effects due to sampling. Furthermore, the emergence of CT acquisition procedures involving incomplete or irregular data calls for irregular sampling theory. Past research has focused on functions that are simultaneously essentially space- and band-limited, i.e., functions whose truncation errors decay exponentially with the radius @math of the ball @math . For these functions, interleaved sampling geometries are more efficient than regular sampling geometries @cite_14 @cite_9 .
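For reference, the textbook definition of the Radon transform in the plane, in the polar notation such sampling results typically use (a standard definition, not a formula specific to the cited works):

```latex
% Radon transform of f along the line with unit normal
% \theta(\varphi) = (\cos\varphi, \sin\varphi) and signed distance s:
\mathcal{R}f(\varphi, s) \;=\; \int_{\mathbb{R}^2} f(x)\,
    \delta\!\left(x \cdot \theta(\varphi) - s\right) \mathrm{d}x,
\qquad \varphi \in [0, \pi), \; s \in \mathbb{R}.
```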
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2059596616", "2058583833" ], "abstract": [ "The Radon transform of a bivariate function, which has application in tomographic imaging, has traditionally been viewed as a parametrized univariate function. In this paper, the Radon transform is instead viewed as a bivariate function and two-dimensional sampling theory is used to address sampling and information content issues. It is Shown that the band region of the Radon transform of a function with a finite space-bandwidth product is a \"finite-length bowtie.\" Because of the special shape of this band region. \"Nyquist sampling\" of the Radon transform is on a hexagonal grid. This sampling grid requires approximately one-half as many samples as the rectangular grid obtained from the traditional viewpoint. It is also shown that for a nonbandlimited function of finite spatial support, the bandregion of the Radon transform is an \"infinite-length bowtie.\" Consequently, it follows that approximately 2M2 π independent pieces of information about the function can be extracted from M \"projections.\" These results and others follow very naturally from the two-dimensional viewpoint presented.", "The Mathematics of Computerized Tomography covers the relevant mathematical theory of the Radon transform and related transforms and also studies more practical questions such as stability, sampling, resolution, and accuracy. Quite a bit of attention is given to the derivation, analysis, and practical examination of reconstruction algorithm, for both standard problems and problems with incomplete data." ] }
1307.1151
1532168148
We provide conditions for exact reconstruction of a bandlimited function from irregular polar samples of its Radon transform. First, we prove that the Radon transform is a continuous L2-operator for certain classes of bandlimited signals. We then show that the Beurling-Malliavin condition for the radial sampling density ensures existence and uniqueness of a solution. Moreover, Jaffard's density condition is sufficient for stable reconstruction.
Bandlimitedness conditions also appear implicitly in reconstruction techniques based on discretizations of the inverse Radon transform. Filtered backprojection tacitly assumes that the Radon transform is bandlimited and periodic in the radial coordinate (for computation of a so-called absolute derivative operator) and that quadrature rules for the angular integral (for the backprojection) are exact---for example by assuming that the angular component of the Radon transform has a finite Fourier series representation @cite_14 .
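The assumption becomes visible in the standard filtered backprojection inversion formula, sketched here in its usual continuous form; the constants depend on the chosen Fourier-transform convention.

```latex
% Filtered backprojection: filter each projection in its radial variable
% with the ramp filter h, then integrate (backproject) over all angles.
f(x) \;=\; \frac{1}{2} \int_0^{\pi}
    \left( \mathcal{R}f(\varphi, \cdot) \ast h \right)\!\left(x \cdot \theta(\varphi)\right)
    \mathrm{d}\varphi,
\qquad \hat{h}(\sigma) \;\propto\; |\sigma|.
% Discretizing h assumes bandlimitedness/periodicity in s; discretizing the
% angular integral assumes exact quadrature for the backprojection.
```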
{ "cite_N": [ "@cite_14" ], "mid": [ "2058583833" ], "abstract": [ "The Mathematics of Computerized Tomography covers the relevant mathematical theory of the Radon transform and related transforms and also studies more practical questions such as stability, sampling, resolution, and accuracy. Quite a bit of attention is given to the derivation, analysis, and practical examination of reconstruction algorithm, for both standard problems and problems with incomplete data." ] }
1307.1151
1532168148
We provide conditions for exact reconstruction of a bandlimited function from irregular polar samples of its Radon transform. First, we prove that the Radon transform is a continuous L2-operator for certain classes of bandlimited signals. We then show that the Beurling-Malliavin condition for the radial sampling density ensures existence and uniqueness of a solution. Moreover, Jaffard's density condition is sufficient for stable reconstruction.
Algorithms that are based on the Fourier slice theorem commonly use some sort of Fast Fourier Transform (FFT) for the radial variable to obtain the Fourier transform of the unknown function on a polar grid. This operation is either followed by interpolation onto a rectangular grid and application of the two-dimensional inverse FFT---a process known as gridding @cite_0 @cite_13 ---or by using a version of the two-dimensional inverse FFT for non-rectangular grids @cite_5 @cite_15 . The assumptions are, again, that the Radon transform @math is bandlimited and periodic with respect to the radial coordinate and that @math is bandlimited and periodic in both Cartesian variables.
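These algorithms rest on the Fourier slice theorem, which in two dimensions reads as follows (standard statement, up to normalization conventions):

```latex
% The 1-D Fourier transform of a projection in its radial variable equals
% a radial slice of the 2-D Fourier transform of f:
\widehat{\mathcal{R}f}(\varphi, \sigma) \;=\;
    \hat{f}\!\left(\sigma\,\theta(\varphi)\right),
\qquad \theta(\varphi) = (\cos\varphi, \sin\varphi).
```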
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "2091808722", "1642187803", "2055511749", "1550067027" ], "abstract": [ "Preface to the second edition Preface How to use this nook Notational conventions 1. Measurements and modeling 2. Linear models and linear equations 3. A basic model for tomography 4. Introduction to the Fourier transform 5. Convolution 6. The radon transform 7. Introduction to Fourier series 8. Sampling 9. Filters 10. Implementing shift invariant filters 11. Reconstruction in X-ray tomography 12. Imaging artifacts in X-ray tomography 13. Algebraic reconstruction techniques 14. Magnetic resonance imaging 15. Probability and random variables 16. Applications of probability 17. Random processes A. Background material B. Basic analysis Bibliography Index.", "In image reconstruction there are techniques that use analytical formulae for the Radon transform to recover an image from a continuum of data. In practice, however, one has only discrete data available. Thus one often resorts to sampling and interpolation methods. This article presents an approach to the inversion of the Radon transform that uses a discrete set of samples which need not be completely regular.", "In this paper, we suggest a new Fourier transform based algorithm forthe reconstruction of functions from their nonstandard sampled Radon transform. The algorithm incorporates recently developed fast Fourier transforms for nonequispaced data. We estimate the-corresponding aliasing error in dependence on the sampling geometry of the Radon transform and confirm our theoretical results by numerical examples.", "We consider the problem of reconstructing a 2D bandlimited signal from its nonuniform samples taken in polar coordinates. We introduce two nonuniform sampling strategies in polar coordinates and develop algorithms for reconstructing the signal from these samples. The proposed methods collect nonuniform samples along concentric circles or radial lines, where the circles or lines are nonuniformly distributed. We then apply these methods to the problem of reconstruction of tomographic images and show through simulations that they result in a higher quality of reconstruction with respect to the traditionally used algorithms." ] }
1307.0849
1631021651
We study the content placement problem for cache delivery video-on-demand systems under static random network topologies with fixed heavy-tailed video demand. The performance measure is the amount of server load; we wish to minimize the total download rate for all users from the server and maximize the rate from caches. Our approach reduces the analysis for multiple videos to consideration of decoupled systems with a single video each. For each placement policy, insights gained from the single video analysis carry back to the original multiple video content placement problem. Finally, we propose a hybrid placement technique that achieves near optimal performance with low complexity.
Wu and Li @cite_3 studied the optimal cache replacement algorithm and found that the simplest heuristics perform essentially as well as the optimal algorithms, with only insignificant differences. While our formulation assumes random but static video requests, we also look for a simple suboptimal alternative to the optimal content placement algorithm. In this direction, we decompose the analysis of the content placement problem from the scale of the entire system with multiple videos into decoupled systems, each with only a single video of given popularity. @cite_2 formulated the adaptive whole storage placement problem as a mixed integer program (MIP) subject to a storage constraint and a link bandwidth constraint, and solved it approximately. In our work, we derive an upper and a lower bound on the performance of the single video adaptive whole storage placement policy using analytical and heuristic approaches. Zhang @cite_6 used MDS codes to relax the integer constraint and converted the integer program into a convex, adaptive fractional storage placement problem that can be solved exactly. This result provides an upper bound on the performance of any content placement policy, including all single video placement policies.
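As a toy illustration of this decoupling (a sketch under assumed Zipf demand, not the algorithm of the cited works), fractional MDS-coded placement reduces to a fractional knapsack over videos: cache capacity is filled greedily by demand density, and the residual server load is whatever remains uncached.

```python
def fractional_placement(demand, sizes, capacity):
    """demand[v]: aggregate download rate for video v; returns fractions x[v]."""
    # Cached rate per cached byte of video v is demand[v] / sizes[v], so fill
    # the cache in decreasing order of that density (fractional knapsack).
    order = sorted(range(len(sizes)), key=lambda v: demand[v] / sizes[v], reverse=True)
    x, left = [0.0] * len(sizes), float(capacity)
    for v in order:
        take = min(sizes[v], left)
        x[v] = take / sizes[v]
        left -= take
        if left <= 0.0:
            break
    return x

demand = [1.0 / r for r in range(1, 6)]          # assumed Zipf(1) popularity
x = fractional_placement(demand, [1.0] * 5, 2.0) # cache holds 2 of 5 unit videos
server_load = sum(d * (1.0 - xv) for d, xv in zip(demand, x))  # residual rate
```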
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_2" ], "mid": [ "", "2110499194", "2158261262" ], "abstract": [ "", "Peer-assisted Video-on-Demand (VoD) systems have not only received substantial recent research attention, but also been implemented and deployed with success in large-scale real- world streaming systems, such as PPLive. Peer-assisted Video- on-Demand systems are designed to take full advantage of peer upload bandwidth contributions with a cache on each peer. Since the size of such a cache on each peer is limited, it is imperative that an appropriate cache replacement algorithm is designed. There exists a tremendous level of flexibility in the design space of such cache replacement algorithms, including the simplest alternatives such as Least Recently Used (LRU). Which algorithm is the best to minimize server bandwidth costs, so that when peers need a media segment, it is most likely available from caches of other peers? Such a question, however, is arguably non-trivial to answer, as both the demand and supply of media segments are stochastic in nature. In this paper, we seek to construct an analytical framework based on optimal control theory and dynamic programming, to help us form an in-depth understanding of optimal strategies to design cache replacement algorithms. With such analytical insights, we have shown with extensive simulations that, the performance margin enjoyed by optimal strategies over the simplest algorithms is not substantial, when it comes to reducing server bandwidth costs. In most cases, the simplest choices are good enough as cache replacement algorithms in peer-assisted VoD systems.", "IPTV service providers offering Video-on-Demand currently use servers at each metropolitan office to store all the videos in their library. With the rapid increase in library sizes, it will soon become infeasible to replicate the entire library at each office. We present an approach for intelligent content placement that scales to large library sizes (e.g., 100Ks of videos). We formulate the problem as a mixed integer program (MIP) that takes into account constraints such as disk space, link bandwidth, and content popularity. To overcome the challenges of scale, we employ a Lagrangian relaxation-based decomposition technique combined with integer rounding. Our technique finds a near-optimal solution (e.g., within 1-2 ) with orders of magnitude speedup relative to solving even the LP relaxation via standard software. We also present simple strategies to address practical issues such as popularity estimation, content updates, short-term popularity fluctuation, and frequency of placement updates. Using traces from an operational system, we show that our approach significantly outperforms simpler placement strategies. For instance, our MIP-based solution can serve all requests using only half the link bandwidth used by LRU or LFU cache replacement policies. We also investigate the trade-off between disk space and network bandwidth." ] }
1307.0118
1983512832
We present a full pipeline for computing the medial axis transform of an arbitrary 2D shape. The instability of the medial axis transform is overcome by a pruning algorithm guided by a user-defined Hausdorff distance threshold. The stable medial axis transform is then approximated by spline curves in 3D to produce a smooth and compact representation. These spline curves are computed by minimizing the approximation error between the input shape and the shape represented by the medial axis transform. Our results on various 2D shapes suggest that our method is practical and effective, and yields faithful and compact representations of medial axis transforms of 2D shapes.
Exact medial axis computation is possible only for simple or special shapes, such as polyhedra @cite_25 @cite_17 . For free-form shapes, medial axis approximations are widely used in practice. There are several main approaches to computing such approximations: pixel- or voxel-based methods that compute the medial axis using a thinning operation @cite_26 ; methods based on the distance transform @cite_21 @cite_13 @cite_8 @cite_28 , often performed on a regular or adaptive grid; divide-and-conquer methods @cite_5 , performed on spline curve boundaries; tracing approaches @cite_3 , which trace along the shape boundary or the seam curves; and Voronoi diagram (VD) based methods @cite_0 @cite_23 @cite_9 @cite_29 @cite_15 .
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_28", "@cite_9", "@cite_21", "@cite_29", "@cite_3", "@cite_0", "@cite_23", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "2158008371", "2024335283", "", "2020065037", "2013991211", "2071657752", "2116251909", "2660405673", "2029041800", "2008425943", "2141835663", "1967930428", "1977276706", "1974675819" ], "abstract": [ "A comprehensive survey of thinning methodologies is presented. A wide range of thinning algorithms, including iterative deletion of pixels and nonpixel-based methods, is covered. Skeletonization algorithms based on medial axis and other distance transforms are not considered. An overview of the iterative thinning process and the pixel-deletion criteria needed to preserve the connectivity of the image pattern is given first. Thinning algorithms are then considered in terms of these criteria and their modes of operation. Nonpixel-based methods that usually produce a center line of the pattern directly in one pass without examining all the individual pixels are discussed. The algorithms are considered in great detail and scope, and the relationships among them are explored. >", "The medial axis transform (MAT) of a shape, better known as its skeleton, is frequently used in shape analysis and related areas. In this paper a new approach for determining the skeleton of an object is presented. The boundary is segmented at points of maximal positive curvature and a distance map from each of the segments is calculated. The skeleton is then located by applying simple rules to the zero sets of distance map differences. A framework is proposed for numerical approximation of distance maps that is consistent with the continuous case and hence does not suffer from digitization bias due to metrication errors of the implementation on the grid. Subpixel accuracy in distance map calculation is obtained by using gray-level information along the boundary of the shape in the numerical scheme. The accuracy of the resulting efficient skeletonization algorithm is demonstrated by several examples.", "", "The power crust is a construction which takes a sample of points from the surface of a three-dimensional object and produces a surface mesh and an approximate medial axis. The approach is to first approximate the medial axis transform (MAT) of the object. We then use an inverse transform to produce the surface representation from the MAT. This idea leads to a simple algorithm with theoretical guarantees comparable to those of other surface reconstruction and medial axis approximation algorithms. It also comes with a guarantee that does not depend in any way on the quality of the input point sample. Any input gives an output surface which is the watertight' boundary of a three-dimensional polyhedral solid: the solid described by the approximate MAT. This unconditional guarantee makes the algorithm quite robust and eliminates the polygonalization, hole-filling or manifold extraction post-processing steps required in previous surface reconstruction algorithms. In this paper, we use the theory to develop a power crust implementation which is indeed robust for realistic and even difficult samples. We describe the careful design of a key subroutine which labels parts of the MAT as inside or outside of the object, easy in theory but non-trivial in practice. We find that we can handle areas in which the input sampling is scanty or noisy by simply discarding the unreliable parts of the MAT approximation. 
We demonstrate good empirical results on inputs including models with sharp corners, sparse and unevenly distributed point samples, holes, and noise, both natural and synthetic. We also demonstrate some simple extensions: intentionally leaving holes where there is no data, producing approximate offset surfaces, and simplifying the approximate MAT in a principled way to preserve stable features.", "", "Skeletons provide a synthetic and thin representation of objects. Therefore, they are useful for shape description. Recent papers have proposed to approximate the skeleton of continuous shapes using the Voronoi graph of boundary points. An original formulation is presented here, using the notion of polyballs (we call polyball any finite union of balls). A preliminary work shows that their skeletons consist of simple components (line segments in 2D and polygons in 3D). An efficient method for simplifying 3D continuous skeletons is also presented. The originality of our approach consists in simplifying the shape without modifying its topology and in including these modifications on the skeleton. Depending on the desired result, we propose two strategies which lead to either surfacical skeletons or wireframe skeletons. Two angular criteria are proposed that allow us to build a size-invariant hierarchy of simplified skeletons.", "The paper describes an algorithm for generating an approximation of the medial axis transform (MAT) for planar objects with free form boundaries. The algorithm generates the MAT by a tracing technique that marches along the object boundary rather than the bisectors of the boundary entities. The level of approximation is controlled by the choice of the step size in the tracing procedure. Criteria based on distance and local curvature of boundary entities are used to identify the junction or branch points and the search for these branch points is more efficient than while tracing the bisectors. The algorithm works for multiply connected objects as well. Results of implementation are provided.", "", "We give a simple combinatorial algorithm that computes a piecewise-linear approximation of a smooth surface from a finite set of sample points. The algorithm uses Voronoi vertices to remove triangles from the Delaunay triangulation. We prove the algorithm correct by showing that for densely sampled surfaces, where density depends on a local feature size function, the output is topologically valid and convergent (both pointwise and in surface normals) to the original surface. We briefly describe an implementation of the algorithm and show example outputs.", "We present a simple, efficient, and stable method for computing-with any desired precision-the medial axis of simply connected planar domains. The domain boundaries are assumed to be given as polynomial spline curves. Our approach combines known results from the field of geometric approximation theory with a new algorithm from the field of computational geometry. Challenging steps are (1) the approximation of the boundary spline such that the medial axis is geometrically stable, and (2) the efficient decomposition of the domain into base cases where the medial axis can be computed directly and exactly. We solve these problems via spiral biarc approximation and a randomized divide & conquer algorithm.", "We present a new algorithm for simplifying the shape of 3D objects by manipulating their medial axis transform (MAT). 
From an unorganized set of boundary points, our algorithm computes the MAT, decomposes the axis into parts, then selectively removes a subset of these parts in order to reduce the complexity of the overall shape. The result is simplified MAT that can be used for a variety of shape operations. In addition, a polygonal surface of the resulting shape can be directly generated from the filtered MAT using a robust surface reconstruction method. The algorithm presented is shown to have a number of advantages over other existing approaches.", "Abstract We present a fast algorithm for preserving the total volume of a solid undergoing free-form deformation using level-of-detail representations. Given the boundary representation of a solid and user-specified deformation, the algorithm computes the new node positions of the deformation lattice, while minimizing the elastic energy subject to the volume-preserving criterion. During each iteration, a non-linear optimizer computes the volume deviation and its derivatives based on a triangular approximation, which requires a finely tessellated mesh to achieve the desired accuracy. To reduce the computational cost, we exploit the multi-resolution representations of the boundary surfaces to greatly accelerate the performance of the non-linear optimizer. This technique also provides interactive response by progressively refining the solution. Furthermore, it is generally applicable to lattice-based free-form deformation and its variants. Our implementation has been applied to several complex solids. We have been able to achieve an order of magnitude performance improvement over the conventional methods.", "We present an accurate algorithm to compute the internal Voronoi diagram and medial axis of a 3-D polyhedron. It uses exact arithmetic and exact representations for accurate computation of the medial axis. The algorithm works by recursively finding neighboring junctions along the seam curves. To speed up the computation, we have designed specialized algorithms for fast computation with algebraic curves and surfaces. These algorithms include lazy evaluation based on multivariate Sturm sequences, fast resultant computation, culling operations, and floating-point filters. The algorithm has been implemented and we highlight its performance on a number of examples.", "" ] }
1307.0118
1983512832
We present a full pipeline for computing the medial axis transform of an arbitrary 2D shape. The instability of the medial axis transform is overcome by a pruning algorithm guided by a user-defined Hausdorff distance threshold. The stable medial axis transform is then approximated by spline curves in 3D to produce a smooth and compact representation. These spline curves are computed by minimizing the approximation error between the input shape and the shape represented by the medial axis transform. Our results on various 2D shapes suggest that our method is practical and effective, and yields faithful and compact representations of medial axis transforms of 2D shapes.
Among these, the VD based approach stands out due to its theoretical guarantee and efficient computation. As a preprocessing step, we obtain an initial discrete medial axis of a shape using the VD based algorithm. The VD based method assumes that the boundary of an input shape @math is a smooth curve and is sampled by a dense discrete set @math of points (Fig. ), with the sampling density determined by the local feature size @cite_23 in order to capture the boundary topology correctly. The Voronoi diagram of @math is computed, and the Voronoi vertices interior to @math are taken to approximate the medial axis of the shape (Fig. ), since a point on the Voronoi diagram is also characterized by having at least two closest points among the sample points. @PARASPLIT
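A minimal Python sketch of this Voronoi-based initialization follows; the ellipse stand-in shape, the sample count, and the exact containment test are illustrative assumptions replacing a general point-in-shape query.

```python
import numpy as np
from scipy.spatial import Voronoi

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
boundary = np.column_stack((2.0 * np.cos(t), np.sin(t)))  # dense samples S

vor = Voronoi(boundary)
# Keep only Voronoi vertices interior to the shape (here: inside the ellipse).
inside = (vor.vertices[:, 0] / 2.0) ** 2 + vor.vertices[:, 1] ** 2 < 1.0
medial_pts = vor.vertices[inside]        # discrete medial axis approximation
# Each medial point's ball radius is its distance to the nearest boundary sample.
radii = np.min(np.linalg.norm(medial_pts[:, None, :] - boundary[None, :, :], axis=2), axis=1)
```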
{ "cite_N": [ "@cite_23" ], "mid": [ "2029041800" ], "abstract": [ "We give a simple combinatorial algorithm that computes a piecewise-linear approximation of a smooth surface from a finite set of sample points. The algorithm uses Voronoi vertices to remove triangles from the Delaunay triangulation. We prove the algorithm correct by showing that for densely sampled surfaces, where density depends on a local feature size function, the output is topologically valid and convergent (both pointwise and in surface normals) to the original surface. We briefly describe an implementation of the algorithm and show example outputs." ] }
1307.0118
1983512832
We present a full pipeline for computing the medial axis transform of an arbitrary 2D shape. The instability of the medial axis transform is overcome by a pruning algorithm guided by a user-defined Hausdorff distance threshold. The stable medial axis transform is then approximated by spline curves in 3D to produce a smooth and compact representation. These spline curves are computed by minimizing the approximation error between the input shape and the shape represented by the medial axis transform. Our results on various 2D shapes suggest that our method is practical and effective, and yields faithful and compact representations of medial axis transforms of 2D shapes.
Many studies have been conducted to understand and resolve the instability problem of the MAT. We review several typical methods here; a survey can be found in @cite_2 . One general approach is to define certain measures for the significance of a medial point, and to filter medial points against a user-defined threshold, thereby removing unstable branches of the medial axis. Examples include the angle-based methods, which consider the separation angle or the object angle (i.e., the angle spanned by the closest contacting points) @cite_21 @cite_24 @cite_18 , and the scaled axis transform (SAT), which essentially exploits the rate of change of the radius function as the filtering condition @cite_14 . Another approach to computing a stable MAT is to consider the difference between the initial shape and the shape reconstructed from the pruned MAT @cite_2 . The filtering step in our algorithm resembles this latter approach by considering the Hausdorff distance between the boundary of the input and the approximate shape to ensure the approximation accuracy of the output stable MAT.
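For intuition, the following Python fragment sketches the angle-based filtering mentioned above (not the Hausdorff-based filter of the present work): a medial point survives only if the angle spanned by its two nearest boundary samples, a crude estimate of the object angle, exceeds a threshold. The arrays `medial_pts` and `boundary` are assumed to come from a Voronoi-based initialization as in the previous sketch.

```python
import numpy as np

def object_angle_filter(medial_pts, boundary, min_angle):
    """Keep medial points whose estimated object angle is >= min_angle (radians)."""
    keep = []
    for m in medial_pts:
        d = np.linalg.norm(boundary - m, axis=1)
        a, b = boundary[np.argsort(d)[:2]]     # two closest contact points
        u, v = a - m, b - m
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        # Spurious branches have nearly coincident contacts, hence tiny angles.
        keep.append(np.arccos(np.clip(cosang, -1.0, 1.0)) >= min_angle)
    return medial_pts[np.asarray(keep)]
```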
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_21", "@cite_24", "@cite_2" ], "mid": [ "2037162618", "2159584837", "2013991211", "1826313674", "2142495952" ], "abstract": [ "Abstract Medial axis as a compact representation of shapes has evolved as an essential geometric structure in a number of applications involving 3D geometric shapes. Since exact computation of the medial axis is difficult in general, efforts continue to approximate them. One line of research considers the point cloud representation of the boundary surface of a solid and then attempts to compute an approximate medial axis from this point sample. It is known that the Voronoi vertices converge to the medial axis for a curve in 2D as the sample density approaches infinity. Unfortunately, the same is not true in 3D. Recently, it is discovered that a subset of Voronoi vertices called poles converge to the medial axis in 3D. However, in practice, a continuous approximation as opposed to a discrete one is sought. Recently few algorithms have been proposed which use the Voronoi diagram and its derivatives to compute this continuous approximation. These algorithms are scale or density dependent. Most of them do not have convergence guarantees, and one of them computes it indirectly from the power diagram of the poles. Recently, we proposed a new algorithm that approximates the medial axis straight from the Voronoi diagram in a scale and density independent manner with convergence guarantees. In this paper, we present several experimental results with this algorithm that support our theoretical claims and also show its effectiveness on practical data sets.", "We introduce the scale axis transform, a new skelet al shape representation for bounded open sets O ⊂ Rd. The scale axis transform induces a family of skeletons that captures the important features of a shape in a scale-adaptive way and yields a hierarchy of successively simplified skeletons. Its definition is based on the medial axis transform and the simplification of the shape under multiplicative scaling: the s-scaled shape Os is the union of the medial balls of O with radii scaled by a factor of s. The s-scale axis transform of O is the medial axis transform of Os, with radii scaled back by a factor of 1 s. We prove topological properties of the scale axis transform and we describe the evolution s → Os by defining the multiplicative distance function to the shape and studying properties of the corresponding steepest ascent flow. All our theoretical results hold for any dimension. In addition, using a discrete approximation, we present several examples of two-dimensional scale axis transforms that illustrate the practical relevance of our new framework.", "", "The skeleton of an object is the locus of the centers of maximal discs included in the shape. The skeleton provides a compact representation of objects, useful for shape description and recognition. A well-known drawback of the skeleton transformation is its lack of continuity. This paper is concerned with the modeling of noise that may affect objects and the consequence of this noise on the skeleton. A graph (called the parameter graph) is introduced, on which branches due to noise are characterized. We deduce from this preliminary study a method to simplify skeletons. It depends on thresholds that can be chosen directly on the parameter graph associated to each skeleton.", "The medial axis of a geometric shape captures its connectivity. 
In spite of its inherent instability, it has found applications in a number of areas that deal with shapes. In this survey paper, we focus on results that shed light on this instability and use the new insights to generate simplified and stable modifications of the medial axis." ] }
1307.0118
1983512832
We present a full pipeline for computing the medial axis transform of an arbitrary 2D shape. The instability of the medial axis transform is overcome by a pruning algorithm guided by a user-defined Hausdorff distance threshold. The stable medial axis transform is then approximated by spline curves in 3D to produce a smooth and compact representation. These spline curves are computed by minimizing the approximation error between the input shape and the shape represented by the medial axis transform. Our results on various 2D shapes suggest that our method is practical and effective, and yields faithful and compact representations of medial axis transforms of 2D shapes.
@cite_27 propose a continuous medial representation by modeling the MAT with cubic B-splines, as an extension of its discrete counterpart, the m-rep @cite_10 . The m-rep is built upon a sparse set of medial atoms, each of which encapsulates the 2D position of a medial point and the corresponding spoke vectors from the 2D medial point to the closest points on the object boundary. The continuous m-rep @cite_27 (cm-rep), on the other hand, uses control points in cubic B-splines to describe the MAT, which must meet specific constraints defined on the implied boundary. In applying cm-reps to object modeling and image segmentation, a template cm-rep model is first built manually and then deformed to fit a target shape. There is currently no method for automatically computing a smooth curve approximation to the MAT of a 2D shape.
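As a rough sketch of representing a MAT branch with splines, the fragment below fits a generic smoothing spline to the 3D curve (x(t), y(t), r(t)) with SciPy; unlike the cm-rep construction, it imposes no constraints on the implied boundary, and all parameter choices are illustrative.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_mat_branch(pts, radii, smooth=0.0):
    """pts: (n, 2) ordered medial points; radii: (n,) ball radii."""
    # Fit one cubic spline jointly over x, y and the radius field r.
    tck, _ = splprep([pts[:, 0], pts[:, 1], radii], s=smooth, k=3)
    dense = np.linspace(0.0, 1.0, 200)
    x, y, r = splev(dense, tck)
    return np.column_stack((x, y, r))   # smooth medial curve with radii
```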
{ "cite_N": [ "@cite_27", "@cite_10" ], "mid": [ "2134886586", "2172167640" ], "abstract": [ "We describe a novel continuous medial repre- sentation for describing object geometry and a deformable templates method for tting the representation to images. Our representation simultaneously describes the boundary and medial loci of geometrical objects, always maintaining Blum's symmetric axis transform (SAT) relationship. Cu- bic b-splines dene the continuous medial locus and the as- sociated thickness eld, which in turn generate the object boundary. We present geometrical properties of the rep- resentation and derive a set of constraints on the b-spline parameters. The 2D representation encompasses branching medial loci; the 3D version can model objects with a sin- gle medial surface, and the extension to branching medial surfaces is a subject of ongoing research. We present prelim- inary results of segmenting 2D and 3D medical images. The representation is ultimately intended for use in statistical shape analysis.", "A model of object shape by nets of medial and boundary primitives is justified as richly capturing multiple aspects of shape and yet requiring representation space and image analysis work proportional to the number of primitives. Metrics are described that compute an object representation's prior probability of local geometry by reflecting variabilities in the net's node and link parameter values, and that compute a likelihood function measuring the degree of match of an image to that object representation. A paradigm for image analysis of deforming such a model to optimize a posteriori probability is described, and this paradigm is shown to be usable as a uniform approach for object definition, object-based registration between images of the same or different imaging modalities, and measurement of shape variation of an abnormal anatomical object, compared with a normal anatomical object. Examples of applications of these methods in radiotherapy, surgery, and psychiatry are given." ] }
1307.0214
2019964557
We revisit recent results from the area of collusion-resistant traitor tracing, and show how they can be combined and improved to obtain more efficient dynamic traitor tracing schemes. In particular, we show how the dynamic Tardos scheme of can be combined with the optimized score functions of to trace coalitions much faster. If the attack strategy is known, in many cases the order of the code length goes down from quadratic to linear in the number of colluders, while if the attack is not known, we show how the interleaving defense may be used to catch all colluders about twice as fast as in the dynamic Tardos scheme. Some of these results also apply to the static traitor tracing setting where the attack strategy is known in advance, and to group testing.
Fiat and Tassa @cite_5 showed that if the alphabet size @math satisfies @math , then with @math segments, one can find and disconnect all colluders deterministically, i.e., with probability of error @math . This scheme is very efficient, and the only drawback is that it requires a large alphabet size, which may not be possible in practice. Tassa @cite_11 later showed that combining the binary ( @math ) static scheme of Boneh and Shaw @cite_4 with the dynamic scheme of Fiat and Tassa @cite_5 leads to a binary dynamic traitor tracing scheme that uses @math content segments. Due to the large required length of the code, this scheme is not practical.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_11" ], "mid": [ "286861583", "2134195444", "2057182007" ], "abstract": [ "Traitor tracing schemes were introduced to combat the typical piracy scenario whereby pirate decoders (or access control smartcards) are manufactured and sold by pirates to illegal subscribers. Those traitor tracing schemes, however, are ineffective for the currently less common scenario where a pirate publishes the periodical access control keys on the Internet or, alternatively, simply rebroadcasts the content via an independent pirate network. This new piracy scenario may become especially attractive (to pirates) in the context of broadband multicast over the Internet. In this paper we consider the consequences of this type of piracy and offer countermeasures. We introduce the concept of dynamic traitor tracing which is a practical and efficient tool to combat this type of piracy.", "This paper discusses methods for assigning code-words for the purpose of fingerprinting digital data, e.g., software, documents, music, and video. Fingerprinting consists of uniquely marking and registering each copy of the data. This marking allows a distributor to detect any unauthorized copy and trace it back to the user. This threat of detection will deter users from releasing unauthorized copies. A problem arises when users collude: for digital data, two different fingerprinted objects can be compared and the differences between them detected. Hence, a set of users can collude to detect the location of the fingerprint. They can then alter the fingerprint to mask their identities. We present a general fingerprinting solution which is secure in the context of collusion. In addition, we discuss methods for distributing fingerprinted data.", "Dynamic traitor tracing schemes were introduced by Fiat and Tassa in order to combat piracy in active broadcast scenarios. In such settings the data provider supplies access control keys to its legal customers on a periodical basis. A number of users may collude in order to publish those keys via the Internet or any other network. Dynamic traitor tracing schemes rely on the feedback from the pirate network in order to modify their key allocation until they are able either to incriminate and disconnect all traitors or force them to stop their illegal activity. Those schemes are deterministic in the sense that incrimination is always certain. As such deterministic schemes must multiply the critical data by at least p + 1, where p is the number of traitors, they may impose a too large toll on bandwidth. We suggest here probabilistic schemes that enable one to trace all traitors with almost certainty, where the critical data is multiplied by two, regardless of the number of traitors. These techniques are obtained by combining dynamic traitor tracing schemes with binary fingerprinting techniques, such as those proposed by Boneh and Shaw." ] }
1307.0214
2019964557
We revisit recent results from the area of collusion-resistant traitor tracing, and show how they can be combined and improved to obtain more efficient dynamic traitor tracing schemes. In particular, we show how the dynamic Tardos scheme of can be combined with the optimized score functions of to trace coalitions much faster. If the attack strategy is known, in many cases the order of the code length goes down from quadratic to linear in the number of colluders, while if the attack is not known, we show how the interleaving defense may be used to catch all colluders about twice as fast as in the dynamic Tardos scheme. Some of these results also apply to the static traitor tracing setting where the attack strategy is known in advance, and to group testing.
Recently, @cite_8 showed that the celebrated binary static scheme of Tardos @cite_10 can be converted into a dynamic binary scheme, with a code length of the order @math . Together with the divide-and-conquer construction of @cite_1 , which offers a linear trade-off between the alphabet size and the code length, this makes it possible to build @math -ary schemes with a code length of the order @math , for arbitrary values of @math . For large @math , this approaches the result of Fiat and Tassa, both in the alphabet size and in the number of segments needed.
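To make the bias-based mechanics concrete, the sketch below generates a static Tardos-style binary code and applies the symmetric score function; the parameters, the cutoff on the arcsine bias distribution, and the all-ones collusion attack are illustrative assumptions. In the dynamic variant discussed above, scores are instead updated segment by segment and a user is disconnected as soon as a running score crosses the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, code_len = 100, 5000
# Arcsine-distributed biases, with a crude cutoff keeping p away from 0 and 1.
p = np.sin(rng.uniform(0.05, np.pi / 2 - 0.05, code_len)) ** 2
X = (rng.random((n_users, code_len)) < p).astype(int)   # user codewords

pirates = [3, 7, 42]
y = X[pirates].max(axis=0)          # colluders output a 1 wherever they can

# Symmetric Tardos score: agreement with the pirate output raises the score,
# weighted by the segment bias.
g1, g0 = np.sqrt((1 - p) / p), np.sqrt(p / (1 - p))
scores = np.where(y == 1,
                  np.where(X == 1, g1, -g0),
                  np.where(X == 0, g0, -g1)).sum(axis=1)
suspects = np.argsort(scores)[-3:]  # in practice, accuse above a threshold Z
```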
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_8" ], "mid": [ "1991439166", "2031722321", "2039632567" ], "abstract": [ "We give a generic divide-and-conquer approach for constructing collusion-resistant probabilistic dynamic traitor tracing schemes with larger alphabets from schemes with smaller alphabets. This construction offers a linear tradeoff between the alphabet size and the codelength. In particular, we show that applying our results to the binary dynamic Tardos scheme of leads to schemes that are shorter by a factor equal to half the alphabet size. Asymptotically, these codelengths correspond, up to a constant factor, to the fingerprinting capacity for static probabilistic schemes. This gives a hierarchy of probabilistic dynamic traitor tracing schemes, and bridges the gap between the low bandwidth, high codelength scheme of and the high bandwidth, low codelength scheme of Fiat and Tassa.", "We construct binary codes for fingerprinting digital documents. Our codes for n users that are e-secure against c pirates have length O(c2log(n e)). This improves the codes proposed by Boneh and Shaw l1998r whose length is approximately the square of this length. The improvement carries over to works using the Boneh--Shaw code as a primitive, for example, to the dynamic traitor tracing scheme of Tassa l2005r. By proving matching lower bounds we establish that the length of our codes is best within a constant factor for reasonable error probabilities. This lower bound generalizes the bound found independently by l2003r that applies to a limited class of codes. Our results also imply that randomized fingerprint codes over a binary alphabet are as powerful as over an arbitrary alphabet and the equal strength of two distinct models for fingerprinting.", "We construct binary dynamic traitor tracing schemes, where the number of watermark bits needed to trace and disconnect any coalition of pirates is quadratic in the number of pirates, and logarithmic in the total number of users and the error probability. Our results improve upon results of Tassa, and our schemes have several other advantages, such as being able to generate all codewords in advance, a simple accusation method, and flexibility when the feedback from the pirate network is delayed." ] }
1307.0214
2019964557
We revisit recent results from the area of collusion-resistant traitor tracing, and show how they can be combined and improved to obtain more efficient dynamic traitor tracing schemes. In particular, we show how the dynamic Tardos scheme of can be combined with the optimized score functions of to trace coalitions much faster. If the attack strategy is known, in many cases the order of the code length goes down from quadratic to linear in the number of colluders, while if the attack is not known, we show how the interleaving defense may be used to catch all colluders about twice as fast as in the dynamic Tardos scheme. Some of these results also apply to the static traitor tracing setting where the attack strategy is known in advance, and to group testing.
Even more recently, @cite_9 studied the score function used in Tardos' static scheme, and showed how to choose better score functions when the pirate attack is known. In one particular case, they came across a score function that turned out to work well against the interleaving attack. More precisely, for asymptotically large @math , they showed that this score function achieves capacity, i.e., attains the known exact lower bound on the code length of @math in the binary setting @cite_12 . So for large @math , using this new score function in the static setting may lead to a decrease in the length of the code of almost $60\%$.
{ "cite_N": [ "@cite_9", "@cite_12" ], "mid": [ "1986979839", "2083594366" ], "abstract": [ "We investigate alternative suspicion functions for bias-based traitor tracing schemes, and present a practical construction of a simple decoder that attains capacity in the limit of large coalition size @math . We derive optimal suspicion functions in both the restricted-digit model and the combined-digit model. These functions depend on information that is usually not available to the tracer—the attack strategy or the tallies of the symbols received by the colluders. We discuss how such results can be used in realistic contexts. We study several combinations of coalition attack strategy versus suspicion function optimized against some attack (another attack or the same). In many of these combinations, the usual codelength scaling @math changes to a lower power of @math , e.g., @math . We find that the interleaving strategy is an especially powerful attack. The suspicion function tailored against interleaving is the key ingredient of the capacity-achieving construction.", "We study a fingerprinting game in which the number of colluders and the collusion channel are unknown. The encoder embeds fingerprints into a host sequence and provides the decoder with the capability to trace back pirated copies to the colluders. Fingerprinting capacity has recently been derived as the limit value of a sequence of maximin games with mutual information as their payoff functions. However, these games generally do not admit saddle-point solutions and are very hard to solve numerically. Here under the so-called Boneh-Shaw marking assumption, we reformulate the capacity as the value of a single two-person zero-sum game, and show that it is achieved by a saddle-point solution. If the maximal coalition size is k and the fingerprinting alphabet is binary, we show that capacity decays quadratically with k. Furthermore, we prove rigorously that the asymptotic capacity is 1 (k221n2) and we confirm our earlier conjecture that Tardos' choice of the arcsine distribution asymptotically maximizes the mutual information payoff function while the interleaving attack minimizes it. Along with the asymptotics, numerical solutions to the game for small k are also presented." ] }
1307.0556
2030999876
A homomorphism from a graph G to a graph H is a function from V(G) to V(H) that preserves edges. Many combinatorial structures that arise in mathematics and in computer science can be represented naturally as graph homomorphisms and as weighted sums of graph homomorphisms. In this article, we study the complexity of counting homomorphisms modulo 2. The complexity of modular counting was introduced by Papadimitriou and Zachos and it has been pioneered by Valiant who famously introduced a problem for which counting modulo 7 is easy but counting modulo 2 is intractable. Modular counting provides a rich setting in which to study the structure of homomorphism problems. In this case, the structure of the graph H has a big influence on the complexity of the problem. Thus, our approach is graph-theoretic. We give a complete solution for the class of cactus graphs, which are connected graphs in which every edge belongs to at most one cycle. Cactus graphs arise in many applications such as the modelling of wireless sensor networks and the comparison of genomes. We show that, for some cactus graphs H, counting homomorphisms to H modulo 2 can be done in polynomial time. For every other fixed cactus graph H, the problem is complete in the complexity class ⊕P, which is a wide complexity class to which every problem in the polynomial hierarchy can be reduced (using randomised reductions). Determining which H lead to tractable problems can be done in polynomial time. Our result builds upon the work of Faben and Jerrum, who gave a dichotomy for the case in which H is a tree.
Modular counting has also been considered in the context of Boolean constraint satisfaction problems (CSPs). Graph homomorphism problems are CSPs, but they are not known to be representable as Boolean CSPs. The known results are as follows. Faben @cite_15 @cite_14 provided a modular counting dichotomy for unweighted Boolean CSPs. This was extended by @cite_10 to the weighted case. @cite_22 also provided a dichotomy for Boolean Holant problems modulo 2.
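For reference, the quantity whose fixed-H complexity these dichotomies classify can be checked by brute force. The Python sketch below enumerates all vertex maps from G to H and keeps only the parity of the homomorphism count, so it is usable only on small instances.

```python
from itertools import product

def hom_count_mod2(G_vertices, G_edges, H_vertices, H_edges):
    """Parity of the number of homomorphisms from G to H (both undirected)."""
    H_adj = set(H_edges) | {(b, a) for (a, b) in H_edges}
    parity = 0
    for img in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, img))
        if all((f[u], f[v]) in H_adj for (u, v) in G_edges):
            parity ^= 1        # track the count modulo 2 only
    return parity

# Triangle -> edge: no homomorphism (odd cycles are not 2-colourable), so 0.
print(hom_count_mod2([0, 1, 2], [(0, 1), (1, 2), (2, 0)], [0, 1], [(0, 1)]))
# Single vertex -> triangle: 3 homomorphisms, so parity 1.
print(hom_count_mod2([0], [], [0, 1, 2], [(0, 1), (1, 2), (2, 0)]))
```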
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_10", "@cite_22" ], "mid": [ "1881931158", "", "2282669851", "2000217912" ], "abstract": [ "Generalised Satisfiability Problems (or Boolean Constraint Satisfaction Problems), introduced by Schaefer in 1978, are a general class of problem which allow the systematic study of the complexity of satisfiability problems with different types of constraints. In 1979, Valiant introduced the complexity class parity P, the problem of counting the number of solutions to NP problems modulo two. Others have since considered the question of counting modulo other integers. We give a dichotomy theorem for the complexity of counting the number of solutions to Generalised Satisfiability Problems modulo integers. This follows from an earlier result of Creignou and Hermann which gave a counting dichotomy for these types of problem, and the dichotomy itself is almost identical. Specifically, counting the number of solutions to a Generalised Satisfiability Problem can be done in polynomial time if all the relations are affine. Otherwise, except for one special case with k = 2, it is #_kP-complete.", "", "We prove a complexity dichotomy theorem for counting weighted Boolean CSP modulo k for any positive integer k > 1. This generalizes a theorem by Faben for the unweighted setting. In the weighted setting, there are new interesting tractable problems. We first prove a dichotomy theorem for the finite field case where k is a prime. It turns out that the dichotomy theorem for the finite field is very similar to the one for the complex weighted Boolean #CSP, found by [Cai, Lu and Xia, STOC 2009]. Then we further extend the result to an arbitrary integer k.", "For certain subclasses of NP, @math P, or #P characterized by local constraints, it is known that if there exist any problems within that subclass that are not polynomial time computable, then all the problems in the subclass are NP-complete, @math P-complete, or #P-complete. Such dichotomy results have been proved for characterizations such as constraint satisfaction problems and directed and undirected graph homomorphism problems, often with additional restrictions. Here we give a dichotomy result for the more expressive framework of Holant problems. For example, these additionally allow for the expression of matching problems, which have had pivotal roles in the development of complexity theory. As our main result we prove the dichotomy theorem that, for the class @math P, every set of symmetric Holant signatures of any arities that is not polynomial time computable is @math P-complete. The result exploits some special properties of the class @math P and characterizes four distinct tractable ..." ] }
1306.6526
2951775680
In heap-based languages, knowing that a variable x points to an acyclic data structure is useful for analyzing termination: this information guarantees that the depth of the data structure to which x points is greater than the depth of the structure pointed to by x.fld, and allows bounding the number of iterations of a loop which traverses the data structure on fld. In general, proving termination needs acyclicity, unless program-specific or non-automated reasoning is performed. However, recent work could prove that certain loops terminate even without inferring acyclicity, because they traverse data structures "acyclically". Consider a double-linked list: if it is possible to demonstrate that every cycle involves both the "next" and the "prev" field, then a traversal on "next" terminates since no cycle will be traversed completely. This paper develops a static analysis inferring field-sensitive reachability and cyclicity information, which is more general than existing approaches. Propositional formulae are computed, which describe which fields may or may not be traversed by paths in the heap. Consider a tree with edges "left" and "right" to the left and right sub-trees, and "parent" to the parent node: termination of a loop traversing leaf-up cannot be guaranteed by state-of-the-art analyses. Instead, propositional formulae computed by this analysis indicate that cycles must traverse "parent" and at least one between "left" and "right": termination is guaranteed as no cycle is traversed completely. This paper defines the necessary abstract domains and builds an abstract semantics on them. A prototypical implementation provides the expected result on relevant examples.
The present paper is closely related to research in the area of pointer analysis @cite_6 , which considers properties of the heap and builds static analyses to enforce them. Clearly, techniques which directly deal with the reachability and cyclicity originated by paths in the heap represent the closest work in this area. Apart from that, aliasing, sharing, and points-to analysis are the most related pointer analyses which can be found in the literature.
{ "cite_N": [ "@cite_6" ], "mid": [ "2046699259" ], "abstract": [ "During the past twenty-one years, over seventy-five papers and nine Ph.D. theses have been published on pointer analysis. Given the tomes of work on this topic one may wonder, “Haven'trdquo; we solved this problem yet?'' With input from many researchers in the field, this paper describes issues related to pointer analysis and remaining open problems." ] }
1306.6526
2951775680
In heap-based languages, knowing that a variable x points to an acyclic data structure is useful for analyzing termination: this information guarantees that the depth of the data structure to which x points is greater than the depth of the structure pointed to by x.fld, and allows bounding the number of iterations of a loop which traverses the data structure on fld. In general, proving termination needs acyclicity, unless program-specific or non-automated reasoning is performed. However, recent work could prove that certain loops terminate even without inferring acyclicity, because they traverse data structures "acyclically". Consider a double-linked list: if it is possible to demonstrate that every cycle involves both the "next" and the "prev" field, then a traversal on "next" terminates since no cycle will be traversed completely. This paper develops a static analysis inferring field-sensitive reachability and cyclicity information, which is more general than existing approaches. Propositional formulae are computed, which describe which fields may or may not be traversed by paths in the heap. Consider a tree with edges "left" and "right" to the left and right sub-trees, and "parent" to the parent node: termination of a loop traversing leaf-up cannot be guaranteed by state-of-the-art analyses. Instead, propositional formulae computed by this analysis indicate that cycles must traverse "parent" and at least one between "left" and "right": termination is guaranteed as no cycle is traversed completely. This paper defines the necessary abstract domains and builds an abstract semantics on them. A prototypical implementation provides the expected result on relevant examples.
A well-known technique in pointer analysis @cite_6 , aliasing analysis investigates the program variables which might point to the same heap location at runtime. Sharing analysis @cite_27 is more general in that it determines if two variables @math and @math can reach a common location in the heap, i.e., if the portions of the heap which are reachable from @math and @math are not disjoint. Aliasing between two variables implies that they also share. Points-to analysis computes the set of objects which might be referred to by a pointer variable.
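These properties are easy to state over a concrete heap. The sketch below uses an assumed toy encoding, heaps as {location: {field: location}} and environments as {variable: location}, to contrast aliasing with the more general sharing.

```python
def reachable(heap, loc):
    """All heap locations reachable from loc by following any fields."""
    seen, stack = set(), [loc]
    while stack:
        l = stack.pop()
        if l in seen or l is None:
            continue
        seen.add(l)
        stack.extend(heap.get(l, {}).values())
    return seen

def alias(env, x, y):
    # Aliasing: both variables are bound to the very same location.
    return env[x] is not None and env[x] == env[y]

def share(heap, env, x, y):
    # Sharing: the heap portions reachable from x and y overlap.
    return bool(reachable(heap, env[x]) & reachable(heap, env[y]))
```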
{ "cite_N": [ "@cite_27", "@cite_6" ], "mid": [ "1574640530", "2046699259" ], "abstract": [ "Pair-sharing analysis of object-oriented programs determines those pairs of program variables bound at run-time to overlapping data structures. This information is useful for program parallelisation and analysis. We follow a similar construction for logic programming and formalise the property, or abstract domain, Sh of pair-sharing. We prove that Sh induces a Galois insertion w.r.t the concrete domain of program states. We define a compositional abstract semantics for the static analysis over Sh, and prove it correct.", "During the past twenty-one years, over seventy-five papers and nine Ph.D. theses have been published on pointer analysis. Given the tomes of work on this topic one may wonder, “Haven'trdquo; we solved this problem yet?'' With input from many researchers in the field, this paper describes issues related to pointer analysis and remaining open problems." ] }
1306.6526
2951775680
In heap-based languages, knowing that a variable x points to an acyclic data structure is useful for analyzing termination: this information guarantees that the depth of the data structure to which x points is greater than the depth of the structure pointed to by x.fld, and allows bounding the number of iterations of a loop which traverses the data structure on fld. In general, proving termination needs acyclicity, unless program-specific or non-automated reasoning is performed. However, recent work could prove that certain loops terminate even without inferring acyclicity, because they traverse data structures "acyclically". Consider a double-linked list: if it is possible to demonstrate that every cycle involves both the "next" and the "prev" field, then a traversal on "next" terminates since no cycle will be traversed completely. This paper develops a static analysis inferring field-sensitive reachability and cyclicity information, which is more general than existing approaches. Propositional formulae are computed, which describe which fields may or may not be traversed by paths in the heap. Consider a tree with edges "left" and "right" to the left and right sub-trees, and "parent" to the parent node: termination of a loop traversing leaf-up cannot be guaranteed by state-of-the-art analyses. Instead, propositional formulae computed by this analysis indicate that cycles must traverse "parent" and at least one between "left" and "right": termination is guaranteed as no cycle is traversed completely. This paper defines the necessary abstract domains and builds an abstract semantics on them. A prototypical implementation provides the expected result on relevant examples.
A large body of research @cite_13 reasons about heap-manipulating programs in order to prove program properties. In most cases, safety properties are dealt with @cite_24 @cite_31 @cite_34 . Termination, on the other hand, is a liveness property, and is typically the final property to be proved when analyzing cyclicity; therefore, work on liveness @cite_29 @cite_0 @cite_23 @cite_20 @cite_8 is closer to the present approach. Most papers use techniques based on model checking @cite_22 , predicate abstraction @cite_5 , separation logic @cite_29 , or shape analysis @cite_8 in order to prove properties of programs manipulating the heap. Typically, shape analyses capture aliasing and points-to information, and build a representation of the heap from which reachability information can be obtained. Such analyses are very precise, sometimes at the cost of (i) limiting the shape of the data structures which can be analyzed; (ii) simplifying the programming language to be dealt with; or (iii) reducing scalability.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_29", "@cite_24", "@cite_0", "@cite_23", "@cite_5", "@cite_31", "@cite_34", "@cite_13", "@cite_20" ], "mid": [ "2112561088", "", "2137628566", "1496073701", "1552505815", "2136242294", "1497571013", "1991837261", "2138245690", "", "2124909257" ], "abstract": [ "In the past two decades, model-checking has emerged as a promising and powerful approach to fully automatic verification of hardware systems. But model checking technology can be usefully applied to other application areas, and this article provides fundamentals that a practitioner can use to translate verification problems into model-checking questions. A taxonomy of the notions of \"model,\" \"property,\" and \"model checking\" are presented, and three standard model-checking approaches are described and applied to examples.", "", "In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a \"separating conjunction\" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related \"separating implication\". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions.", "", "The paper presents an approach for shape analysis based on predicate abstraction. Using a predicate base that involves reachability relations between program variables pointing into the heap, we are able to analyze functional properties of programs with destructive heap updates, such as list reversal and various in-place list sorts. The approach allows verification of both safety and liveness properties. The abstraction we use does not require any abstract representation of the heap nodes (e.g. abstract shapes), only reachability relations between the program variables. The computation of the abstract transition relation is precise and automatic yet does not require the use of a theorem prover. Instead, we use a small model theorem to identify a truncated (small) finite-state version of the program whose abstraction is identical to the abstraction of the unbounded-heap version of the same program. The abstraction of the finite-state version is then computed by BDD techniques. For proving liveness properties, we augment the original system by a well-founded ranking function, which is abstracted together with the system. Well-foundedness is then abstracted into strong fairness (compassion). We show that, for a restricted class of programs that still includes many interesting cases, the small model theorem can be applied to this joint abstraction. 
Independently of the application to shape-analysis examples, we demonstrate the utility of the ranking abstraction method and its advantages over the direct use of ranking functions in a deductive verification of the same property.", "We describe a new program termination analysis designed to handle imperative programs whose termination depends on the mutation of the program's heap. We first describe how an abstract interpretation can be used to construct a finite number of relations which, if each is well-founded, implies termination. We then give an abstract interpretation based on separation logic formulaewhich tracks the depths of pieces of heaps. Finally, we combine these two techniques to produce an automatic termination prover. We show that the analysis is able to prove the termination of loops extracted from Windows device drivers that could not be proved terminating before by other means; we also discuss a previously unknown bug found with the analysis.", "In this paper, we propose a method for the automatic construction of an abstract state graph of an arbitrary system using the Pvs theorem prover.", "Shape analysis concerns the problem of determining \"shape invariants\" for programs that perform destructive updating on dynamically allocated storage. This article presents a parametric framework for shape analysis that can be instantiated in different ways to create different shape-analysis algorithms that provide varying degrees of efficiency and precision. A key innovation of the work is that the stores that can possibly arise during execution are represented (conservatively) using 3-valued logical structures. The framework is instantiated in different ways by varying the predicates used in the 3-valued logic. The class of programs to which a given instantiation of the framework can be applied is not limited a priori (i.e., as in some work on shape analysis, to programs that manipulate only lists, trees, DAGS, etc.); each instantiation of the framework can be applied to any program, but may produce imprecise results (albeit conservative ones) due to the set of predicates employed.", "The goal of this work is to develop compile-time algorithms for automatically verifying properties of imperative programs that manipulate dynamically allocated storage. The paper presents an analysis method that uses a characterization of a procedure's behavior in which parts of the heap not relevant to the procedure are ignored. The paper has two main parts: The first part introduces a non-standard concrete semantics, LSL, in which called procedures are only passed parts of the heap. In this semantics, objects are treated specially when they separate the \"local heap\" that can be mutated by a procedure from the rest of the heap, which---from the viewpoint of that procedure---is non-accessible and immutable. The second part concerns abstract interpretation of LSL and develops a new static-analysis algorithm using canonical abstraction.", "", "Program termination is central to the process of ensuring that systems code can always react. We describe a new program termination prover that performs a path-sensitive and context-sensitive program analysis and provides capacity for large program fragments (i.e. more than 20,000 lines of code) together with support for programming language features such as arbitrarily nested loops, pointers, function-pointers, side-effects, etc.We also present experimental results on device driver dispatch routines from theWindows operating system. 
The most distinguishing aspect of our tool is how it shifts the balance between the two tasks of constructing and, respectively, checking the termination argument. Checking becomes the hard step. In this paper we show how we solve the corresponding challenge of checking with binary reachability analysis." ] }
1306.6526
2951775680
In heap-based languages, knowing that a variable x points to an acyclic data structure is useful for analyzing termination: this information guarantees that the depth of the data structure to which x points is greater than the depth of the structure pointed to by x.fld, and allows bounding the number of iterations of a loop which traverses the data structure on fld. In general, proving termination needs acyclicity, unless program-specific or non-automated reasoning is performed. However, recent work has been able to prove that certain loops terminate even without inferring acyclicity, because they traverse data structures "acyclically". Consider a double-linked list: if it is possible to demonstrate that every cycle involves both the "next" and the "prev" field, then a traversal on "next" terminates, since no cycle will be traversed completely. This paper develops a static analysis inferring field-sensitive reachability and cyclicity information, which is more general than existing approaches. Propositional formulae are computed which describe which fields may or may not be traversed by paths in the heap. Consider a tree with edges "left" and "right" to the left and right sub-trees, and "parent" to the parent node: termination of a loop traversing it leaf-up cannot be guaranteed by state-of-the-art analyses. Instead, the propositional formulae computed by this analysis indicate that cycles must traverse "parent" and at least one of "left" and "right": termination is guaranteed, as no cycle is traversed completely. This paper defines the necessary abstract domains and builds an abstract semantics on them. A prototypical implementation provides the expected results on relevant examples.
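To make the field-sensitive argument concrete, here is a minimal Python sketch of the termination check it enables. The encoding of the analysis output as a predicate over field sets is our own illustrative assumption, not the paper's abstract domain: the real analysis derives the cycle formulae statically, while here they are hard-coded for the double-linked list and tree-with-parent examples above.

```python
from itertools import chain, combinations

def traversal_terminates(traversal_fields, cycle_possible):
    """cycle_possible(fields) answers: could some heap cycle consist only of
    these fields? A loop following only traversal_fields can diverge only if
    an entire cycle lies within those fields, so every non-empty subset of
    the traversed fields is checked."""
    fields = sorted(traversal_fields)
    subsets = chain.from_iterable(
        combinations(fields, r) for r in range(1, len(fields) + 1))
    return not any(cycle_possible(set(s)) for s in subsets)

# Double-linked list: every cycle traverses both "next" and "prev".
dll_cycle = lambda used: "next" in used and "prev" in used

# Tree with parent pointers: every cycle traverses "parent" and at least
# one of "left" / "right" (the formula described in the abstract).
tree_cycle = lambda used: "parent" in used and ("left" in used or "right" in used)

print(traversal_terminates({"next"}, dll_cycle))             # True: forward traversal terminates
print(traversal_terminates({"parent"}, tree_cycle))          # True: leaf-up traversal terminates
print(traversal_terminates({"parent", "left"}, tree_cycle))  # False: a "parent"/"left" cycle could be followed
```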
The oldest notion of reachability dates back to @cite_33 : his predicate tells whether a heap location reaches another one in a linear list. A reachability-based acyclicity analysis for C programs was developed by @cite_35 , where the terms "direction" and "interference" were used for, respectively, reachability and sharing. Analyses which compute essentially the same information have been presented in more recent work. @cite_30 @cite_1 describe a formalization of the analysis proposed by @cite_35 , based on a Java-like Object-Oriented language and provided with soundness proofs. The same analysis has also been formalized by means of Abstract Interpretation by @cite_36 , who efficiently implement it in the Julia analyzer for Java (bytecode) and Android http://www.juliasoft.com . As already discussed in the introduction, the analysis proposed by @cite_28 is less precise, since it does not consider reachability in order to detect cycles. The present work also builds upon the results presented in @cite_4 @cite_19 . The relation to those works was explained in the introduction, and will be made even clearer in the rest of the paper, especially in Section .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_4", "@cite_33", "@cite_36", "@cite_28", "@cite_1", "@cite_19" ], "mid": [ "", "2139356751", "", "2030101147", "1995268059", "1529156240", "2061213370", "" ], "abstract": [ "", "This paper reports on the design and implementation of a practical shape analysis for C. The purpose of the analysis is to aid in the disambiguation of heap-allocated data structures by estimating the shape (Tree, DAG, or Cyclic Graph) of the data structure accessible from each heap-directed pointer. This shape information can be used to improve dependence testing and in parallelization, and to guide the choice of more complex heap analyses.The method has been implemented as a context-sensitive interprocedural analysis in the McCAT conlpiler. Experimental results and observations are given for 16 benchmark programs. These results show that the analysis gives accurate and useful results for an important group of applications.", "", "The paper introduces a reachability predicate for linear lists, develops the elementary axiomatic theory of the predicate, and illustrates its application to program verification with a formal proof of correctness for a short program that traverses and splices linear lists.", "Reachability from a program variable v to a program variable w states that from v, it is possible to follow a path of memory locations that leads to the object bound to w. We present a new abstract domain for the static analysis of possible reachability between program variables or, equivalently, definite unreachability between them. This information is important for improving the precision of other static analyses, such as side-effects, field initialization, cyclicity and path-length analysis, as well as more complex analyses built upon them, such as nullness and termination analysis. We define and prove correct our reachability analysis for Java bytecode, defined as a constraint-based analysis, where the constraint is a graph whose nodes are the program points and whose arcs propagate reachability information in accordance to the abstract semantics of each bytecode instruction. For each program point p, our reachability analysis produces an overapproximation of the ordered pairs of variables 〈v, w〉 such that v might reach w at p. Seen the other way around, if a pair 〈v, w〉 is not present in the overapproximation at p, then v definitely does not reach w at p. We have implemented the analysis inside the Julia static analyzer. Our experiments of analysis of nontrivial Java and Android programs show the improvement of precision due to the presence of reachability information. Moreover, reachability analysis actually reduces the overall cost of nullness and termination analysis.", "Programming languages such as C, C++ and Java bind variables to dynamically-allocated data-structures held in memory. This lets programs build cyclical data at run-time, which complicates termination analysis and garbage collection. It is hence desirable to spot those variables which are only bound to non-cyclical data at run-time. We solve this problem by using abstract interpretation to define the abstract domain NC representing those variables. We relate NC through a Galois insertion to the concrete domain of program states. Hence NC is not redundant. We define a correct abstract denotational semantics over NC, which uses preliminary sharing information between variables to get more precise results. We apply it to a simple example of analysis. 
We use a Boolean representation for the abstract denotations over NC, which leads to an efficient implementation in terms of binary decision diagrams and to the elegant and efficient use of abstract compilation.", "In programming languages with dynamic use of memory, such as Java, knowing that a reference variable x points to an acyclic data structure is valuable for the analysis of termination and resource usage (e.g., execution time or memory consumption). For instance, this information guarantees that the depth of the data structure to which x points is greater than the depth of the data structure pointed to by x.f for any field f of x. This, in turn, allows bounding the number of iterations of a loop which traverses the structure by its depth, which is essential in order to prove the termination or infer the resource usage of the loop. The present paper provides an Abstract-Interpretation-based formalization of a static analysis for inferring acyclicity, which works on the reduced product of two abstract domains: reachability, which models the property that the location pointed to by a variable w can be reached by dereferencing another variable v (in this case, v is said to reach w); and cyclicity, modeling the property that v can point to a cyclic data structure. The analysis is proven to be sound and optimal with respect to the chosen abstraction.", "" ] }
1306.6370
2952593097
The proliferation of social media has the potential to change the structure and organization of the web. In the past, scientists have looked at the web as a large connected component to understand how the topology of hyperlinks correlates with the quality of information contained in a page, and they have proposed techniques to rank the information contained in web pages. We argue that information from web pages and network data on social relationships can be combined to create a personalized and socially connected web. In this paper, we look at the web as a composition of two networks, one consisting of information in web pages and the other of personal data shared on social media web sites. Together, they allow us to analyze how social media tunnels the flow of information from person to person and how to use the structure of the social network to rank, deliver, and organize information specifically for each individual user. We validate our social ranking concepts through a ranking experiment conducted on web pages that users shared on Google Buzz and Twitter.
Our work lies at the intersection of social network analysis and ranking techniques in information retrieval. The closest works to ours are @cite_7 @cite_10 @cite_14 , in which the authors studied the problem of social search, while we studied the problem of social ranking. In @cite_7 , the authors proposed a scheme called Partitioned Multi-Indexing that approximately ranks results for queries over the content generated in social networks, using a distributed hash table and schemes for updating the content continuously generated by users. One similarity is that both their approach and ours consider information shared by social ties to be an important element in searching and ranking. Still, their work approximates network distances between users, while ours uses the maximum flow of a constructed network. Another difference is that we do not focus on answering queries with social ties, but on designing ranking techniques for URLs which could be used to answer friendship-related queries. In @cite_14 , the authors proposed simple techniques to re-rank search results based on Similarity and Familiarity networks derived from their enterprise social network.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_7" ], "mid": [ "2109974859", "2147709600", "2171570327" ], "abstract": [ "This work investigates personalized social search based on the user's social relations -- search results are re-ranked according to their relations with individuals in the user's social network. We study the effectiveness of several social network types for personalization: (1) Familiarity-based network of people related to the user through explicit familiarity connection; (2) Similarity-based network of people \"similar\" to the user as reflected by their social activity; (3) Overall network that provides both relationship types. For comparison we also experiment with Topic-based personalization that is based on the user's related terms, aggregated from several social applications. We evaluate the contribution of the different personalization strategies by an off-line study and by a user survey within our organization. In the off-line study we apply bookmark-based evaluation, suggested recently, that exploits data gathered from a social bookmarking system to evaluate personalized retrieval. In the on-line study we analyze the feedback of 240 employees exposed to the alternative personalization approaches. Our main results show that both in the off-line study and in the user survey social network based personalization significantly outperforms non-personalized social search. Additionally, as reflected by the user survey, all three SN-based strategies significantly outperform the Topic-based strategy.", "This paper explores the use of social annotations to improve websearch. Nowadays, many services, e.g. del.icio.us, have been developed for web users to organize and share their favorite webpages on line by using social annotations. We observe that the social annotations can benefit web search in two aspects: 1) the annotations are usually good summaries of corresponding webpages; 2) the count of annotations indicates the popularity of webpages. Two novel algorithms are proposed to incorporate the above information into page ranking: 1) SocialSimRank (SSR)calculates the similarity between social annotations and webqueries; 2) SocialPageRank (SPR) captures the popularity of webpages. Preliminary experimental results show that SSR can find the latent semantic association between queries and annotations, while SPR successfully measures the quality (popularity) of a webpage from the web users' perspective. We further evaluate the proposed methods empirically with 50 manually constructed queries and 3000 auto-generated queries on a dataset crawledfrom delicious. Experiments show that both SSR and SPRbenefit web search significantly.", "To answer search queries on a social network rich with user-generated content, it is desirable to give a higher ranking to content that is closer to the individual issuing the query. Queries occur at nodes in the network, documents are also created by nodes in the same network, and the goal is to find the document that matches the query and is closest in network distance to the node issuing the query. In this paper, we present the \"Partitioned Multi-Indexing\" scheme, which provides an approximate solution to this problem. With m links in the network, after an offline O(m) pre-processing time, our scheme allows for social index operations (i.e., social search queries, as well as insertion and deletion of words into and from a document at any node), all in time O(1). 
Further, our scheme can be implemented on open source distributed streaming systems such as Yahoo! S4 or Twitter's Storm so that every social index operation takes O(1) processing time and network queries in the worst case, and just two network queries in the common case where the reverse index corresponding to the query keyword is much smaller than the memory available at any distributed compute node. Building on Das's approximate distance oracle, the worst-case approximation ratio of our scheme is O(1) for undirected networks. Our simulations on the social network Twitter as well as synthetic networks show that in practice, the approximation ratio is actually close to 1 for both directed and undirected networks. We believe that this work is the first demonstration of the feasibility of social search with real-time text updates at large scales." ] }
1306.6370
2952593097
The proliferation of social media has the potential to change the structure and organization of the web. In the past, scientists have looked at the web as a large connected component to understand how the topology of hyperlinks correlates with the quality of information contained in a page, and they have proposed techniques to rank the information contained in web pages. We argue that information from web pages and network data on social relationships can be combined to create a personalized and socially connected web. In this paper, we look at the web as a composition of two networks, one consisting of information in web pages and the other of personal data shared on social media web sites. Together, they allow us to analyze how social media tunnels the flow of information from person to person and how to use the structure of the social network to rank, deliver, and organize information specifically for each individual user. We validate our social ranking concepts through a ranking experiment conducted on web pages that users shared on Google Buzz and Twitter.
While social search has been introduced in multiple settings, from the Social Query Model (SQM) @cite_0 to social search applications for mobile devices @cite_11 , a good amount of work has focused on finding the right answer to a search query by routing the query to the right person in a social network graph @cite_5 @cite_11 . We studied the structure of the network to rank URLs socially and automatically, without user intervention. In the Social Query Model @cite_0 , routing paths of search queries are studied in decentralized systems, where the nondeterministic behavior of each agent, who is willing to provide a correct answer with some level of accuracy and expertise, is taken into consideration when forming an optimal routing policy. In Aardvark @cite_11 , the focus was to route a query from the searcher to a designated user in the social network who was assumed to be able to provide an answer. We instead took the approach of using network flow, where the goal is to automatically rank a set of pages through the eyes of the searcher's social ties.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_11" ], "mid": [ "2142906335", "2039646365", "2163881971" ], "abstract": [ "Decentralized search by routing queries over a network is fast emerging as an important research problem, with potential applications in social search as well as peer-to-peer networks [17, 18]. In this paper, we introduce a novel Social Query Model (SQM) for decentralized search, which factors in realistic elements such as expertise levels and response rates of nodes, and has the Pagerank model and certain Markov Decision Processes as special cases. In the context of the model, we establish the existence of a query routing policy that is simultaneously optimal for all nodes, in that no subset of nodes will jointly have any incentive to use a different local routing policy. For computing the optimal policy, we present an efficient distributed approximation algorithm that is almost linear in the number of edges in the network. Extensive experiments on both simulated random graphs and real small-world networks demonstrate the potential of our model and the effectiveness of the proposed routing algorithm.", "The growth of Web 2.0 and fundamental theoretical breakthroughs have led to an avalanche of interest in social networks. This paper focuses on the problem of modeling how social networks accomplish tasks through peer production style collaboration. We propose a general interaction model for the underlying social networks and then a specific model ( i L ink for social search and message routing. A key contribution here is the development of a general learning framework for making such online peer production systems work at scale. The i L ink model has been used to develop a system for FAQ generation in a social network (FAQ tory ), and experience with its application in the context of a full-scale learning-driven workflow application (CALO) is reported. We also discuss methods of adapting i L ink technology for use in military knowledge sharing portals and other message routing systems. Finally, the paper shows the connection of i L ink to SQM, a theoretical model for social search that is a generalization of Markov Decision Processes and the popular Pagerank model.", "We present Aardvark, a social search engine. With Aardvark, users ask a question, either by instant message, email, web input, text message, or voice. Aardvark then routes the question to the person in the user's extended social network most likely to be able to answer that question. As compared to a traditional web search engine, where the challenge lies in finding the right document to satisfy a user's information need, the challenge in a social search engine like Aardvark lies in finding the right person to satisfy a user's information need. Further, while trust in a traditional search engine is based on authority, in a social search engine like Aardvark, trust is based on intimacy. We describe how these considerations inform the architecture, algorithms, and user interface of Aardvark, and how they are reflected in the behavior of Aardvark users." ] }
1306.6370
2952593097
The proliferation of social media has the potential to change the structure and organization of the web. In the past, scientists have looked at the web as a large connected component to understand how the topology of hyperlinks correlates with the quality of information contained in a page, and they have proposed techniques to rank the information contained in web pages. We argue that information from web pages and network data on social relationships can be combined to create a personalized and socially connected web. In this paper, we look at the web as a composition of two networks, one consisting of information in web pages and the other of personal data shared on social media web sites. Together, they allow us to analyze how social media tunnels the flow of information from person to person and how to use the structure of the social network to rank, deliver, and organize information specifically for each individual user. We validate our social ranking concepts through a ranking experiment conducted on web pages that users shared on Google Buzz and Twitter.
Indegree-based algorithms such as PageRank @cite_13 , SALSA @cite_1 , and HITS @cite_4 are used for ranking pages on a web graph, where an edge between two pages represents an endorsement of one page by another. The intuition behind network flow is that it automatically incorporates indegree analysis: a node that does not share a web page distributes its flow to the sources that it follows, and sources of high indegree eventually receive the largest share of flow if the information is not found locally. In @cite_10 , the authors looked at direct annotations from users in Delicious to enhance search, while we look at shared messages embedded with URLs to rank pages. To the best of our knowledge, we are the first to propose using maximum flow to personalize the ranking of pages based on the messages containing URLs that users share in online social networks.
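To illustrate this intuition, here is a hedged Python sketch that ranks URLs by the maximum flow a searcher can push, along follow edges, to the users who shared each URL. The unit capacities, the super-sink wiring, and the function name are illustrative assumptions, since the text does not spell out the exact network construction; networkx supplies the flow computation.

```python
import networkx as nx

def social_rank(follows, shares, searcher):
    """follows: iterable of (u, v) pairs meaning u follows v;
    shares: dict mapping URL -> set of users who shared it (assumed inputs).
    Returns the URLs ranked by the max flow the searcher can push to sharers."""
    scores = {}
    for url, sharers in shares.items():
        g = nx.DiGraph()
        g.add_edges_from(follows, capacity=1.0)  # unit capacity per follow edge (assumption)
        for s in sharers:
            g.add_edge(s, "_SINK_", capacity=10**9)  # sharers feed a common super-sink
        if searcher in g and "_SINK_" in g:
            scores[url], _ = nx.maximum_flow(g, searcher, "_SINK_")
        else:
            scores[url] = 0.0
    return sorted(shares, key=scores.get, reverse=True)

follows = [("alice", "bob"), ("alice", "carol"),
           ("bob", "dave"), ("carol", "dave")]
shares = {"http://example.org/a": {"dave"},  # reachable via two disjoint paths: flow 2
          "http://example.org/b": {"bob"}}   # reachable via one path: flow 1
print(social_rank(follows, shares, "alice"))  # ['http://example.org/a', 'http://example.org/b']
```

High-indegree sources accumulate flow naturally in this construction: when the searcher's direct ties did not share a page, their unit capacities are forwarded along follow edges toward whoever did, which matches the indegree intuition above.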
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_13", "@cite_4" ], "mid": [ "2089199911", "2147709600", "2066636486", "2138621811" ], "abstract": [ "Abstract Today, when searching for information on the World Wide Web, one usually performs a query through a term-based search engine. These engines return, as the query's result, a list of Web sites whose contents match the query. For broad topic queries, such searches often result in a huge set of retrieved documents, many of which are irrelevant to the user. However, much information is contained in the link-structure of the World Wide Web. Information such as which pages are linked to others can be used to augment search algorithms. In this context, Jon Kleinberg introduced the notion of two distinct types of Web sites: hubs and authorities . Kleinberg argued that hubs and authorities exhibit a mutually reinforcing relationship : a good hub will point to many authorities, and a good authority will be pointed at by many hubs. In light of this, he devised an algorithm aimed at finding authoritative sites. We present SALSA, a new stochastic approach for link structure analysis, which examines random walks on graphs derived from the link structure. We show that both SALSA and Kleinberg's mutual reinforcement approach employ the same meta-algorithm. We then prove that SALSA is equivalent to a weighted in-degree analysis of the link-structure of World Wide Web subgraphs, making it computationally more efficient than the mutual reinforcement approach. We compare the results of applying SALSA to the results derived through Kleinberg's approach. These comparisons reveal a topological phenomenon called the TKC effect (Tightly Knit Community) which, in certain cases, prevents the mutual reinforcement approach from identifying meaningful authorities.", "This paper explores the use of social annotations to improve websearch. Nowadays, many services, e.g. del.icio.us, have been developed for web users to organize and share their favorite webpages on line by using social annotations. We observe that the social annotations can benefit web search in two aspects: 1) the annotations are usually good summaries of corresponding webpages; 2) the count of annotations indicates the popularity of webpages. Two novel algorithms are proposed to incorporate the above information into page ranking: 1) SocialSimRank (SSR)calculates the similarity between social annotations and webqueries; 2) SocialPageRank (SPR) captures the popularity of webpages. Preliminary experimental results show that SSR can find the latent semantic association between queries and annotations, while SPR successfully measures the quality (popularity) of a webpage from the web users' perspective. We further evaluate the proposed methods empirically with 50 manually constructed queries and 3000 auto-generated queries on a dataset crawledfrom delicious. Experiments show that both SSR and SPRbenefit web search significantly.", "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http: google.stanford.edu . To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. 
They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.", "The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of "authoritative" information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of "hub pages" that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis." ] }