1304.2144
2952453532
The problem of mobile sequential recommendation is to suggest a route connecting a set of pick-up points for a taxi driver so that he or she is more likely to get passengers with less travel cost. Essentially, a key challenge of this problem is its high computational complexity. In this paper, we propose a dynamic-programming-based method to solve this problem. Our method consists of two separate stages: an offline pre-processing stage and an online search stage. The offline stage pre-computes optimal potential sequence candidates from a set of pick-up points, and the online stage selects the optimal driving route from the pre-computed sequences given the current position of an empty taxi. Specifically, for the offline pre-computation, a backward incremental sequence generation algorithm is proposed based on the iterative property of the cost function. Simultaneously, an incremental pruning policy is adopted during sequence generation to reduce the search space of potential sequences effectively. In addition, a batch pruning algorithm can be applied to the generated potential sequences to remove the non-optimal ones of a given length. Since the pruning effect continuously increases with sequence length, our method can search for the optimal driving route efficiently among the remaining potential sequence candidates. Experimental results on real and synthetic data sets show that the pruning percentage of our method is significantly improved compared to state-of-the-art methods, which enables our method to handle the mobile sequential recommendation problem with more pick-up points and to search for optimal driving routes over arbitrary length ranges.
Yuan proposed a probabilistic model for detecting pick-up points @cite_23 . It finds a route to a parking position with the highest pick-up probability, constrained by a distance threshold rather than by the minimal route cost, and provides a location recommendation service both for cab drivers and for people needing taxi services. In contrast, the problem solved in @cite_25 @cite_1 differs from the MSR problem: it recommends the fastest route to a destination under starting-position and time constraints.
{ "cite_N": [ "@cite_1", "@cite_25", "@cite_23" ], "mid": [ "", "2031674781", "2132968912" ], "abstract": [ "", "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.", "We present a recommender for taxi drivers and people expecting to take a taxi, using the knowledge of 1) passengers' mobility patterns and 2) taxi drivers' pick-up behaviors learned from the GPS trajectories of taxicabs. First, this recommender provides taxi drivers with some locations and the routes to these locations, towards which they are more likely to pick up passengers quickly (during the routes or at these locations) and maximize the profit. 
Second, it recommends people with some locations (within a walking distance) where they can easily find vacant taxis. In our method, we learn the above knowledge (represented by probabilities) from GPS trajectories of taxis. Then, we feed the knowledge into a probabilistic model which estimates the profit of the candidate locations for a particular driver based on where and when the driver requests for the recommendation. We validate our recommender using historical trajectories generated by over 12,000 taxis during 110 days." ] }
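The offline/online split described in the MSR abstract above can be illustrated with a minimal sketch. Everything here is an assumption for illustration (toy coordinates, Euclidean costs, brute-force enumeration in place of the paper's backward incremental generation); only the dominance idea — keeping, per point set and entry point, the cheapest ordering — follows the text:

```python
from itertools import permutations

# Toy pick-up points; coordinates and Euclidean costs are assumptions.
POINTS = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (2, 2)}

def d(p, q):
    """Euclidean distance between two coordinate pairs."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def internal_cost(seq):
    """Driving cost through the sequence, excluding the approach leg."""
    return sum(d(POINTS[a], POINTS[b]) for a, b in zip(seq, seq[1:]))

def offline_candidates(length):
    """Offline stage: keep, per (point set, entry point), only the cheapest
    ordering.  Any other ordering is dominated for *every* taxi position,
    because the approach leg depends only on the entry point."""
    best = {}
    for seq in permutations(POINTS, length):
        key = (frozenset(seq), seq[0])
        if key not in best or internal_cost(seq) < internal_cost(best[key]):
            best[key] = seq
    return list(best.values())

def online_search(taxi_xy, candidates):
    """Online stage: minimise approach cost + pre-computed internal cost."""
    return min(candidates,
               key=lambda s: d(taxi_xy, POINTS[s[0]]) + internal_cost(s))

cands = offline_candidates(3)
route = online_search((0.1, 0.1), cands)
```

The online stage then only has to add the approach leg from the taxi's current position to each surviving candidate's entry point, which is what makes the offline pruning pay off.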
1304.2144
2952453532
Powell @cite_16 proposed a grid-based approach that suggests profitable locations to taxi drivers by constructing a spatio-temporal profitability map, on which regions near the driver are scored according to the potential profit computed from historical data. However, this method only finds the single most profitable parking place within a local scope, rather than a set of pick-up points chosen with overall consideration.
{ "cite_N": [ "@cite_16" ], "mid": [ "26726511" ], "abstract": [ "Taxicab service plays a vital role in public transportation by offering passengers quick personalized destination service in a semiprivate and secure manner. Taxicabs cruise the road network looking for a fare at designated taxi stands or alongside the streets. However, this service is often inefficient due to a low ratio of live miles (miles with a fare) to cruising miles (miles without a fare). The unpredictable nature of passengers and destinations make efficient systematic routing a challenge. With higher fuel costs and decreasing budgets, pressure mounts on taxicab drivers who directly derive their income from fares and spend anywhere from 35-60 percent of their time cruising the road network for these fares. Therefore, the goal of this paper is to reduce the number of cruising miles while increasing the number of live miles, thus increasing profitability, without systematic routing. This paper presents a simple yet practical method for reducing cruising miles by suggesting profitable locations to taxicab drivers. The concept uses the same principle that a taxicab driver uses: follow your experience. In our approach, historical data serves as experience and a derived Spatio-Temporal Profitability (STP) map guides cruising taxicabs. We claim that the STP map is useful in guiding for better profitability and validate this by showing a positive correlation between the cruising profitability score based on the STP map and the actual profitability of the taxicab drivers. Experiments using a large Shanghai taxi GPS data set demonstrate the effectiveness of the proposed method." ] }
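The spatio-temporal profitability idea above — score nearby grid regions from historical data — can be sketched as follows. The record format, the fare-per-cruising-mile score, and all values are assumptions for illustration, not Powell's actual STP construction:

```python
from collections import defaultdict

# Hypothetical historical records: (grid_cell, hour, fare, cruising_miles).
HISTORY = [
    ((3, 4), 8, 12.0, 1.5),
    ((3, 4), 8, 9.0, 0.5),
    ((7, 2), 8, 15.0, 6.0),
    ((3, 4), 20, 6.0, 2.0),
]

def stp_scores(records, hour):
    """Score each grid cell at a given hour by fare earned per cruising mile."""
    fares, miles = defaultdict(float), defaultdict(float)
    for cell, h, fare, cruise in records:
        if h == hour:
            fares[cell] += fare
            miles[cell] += cruise
    return {cell: fares[cell] / miles[cell] for cell in fares}

def recommend(records, hour):
    """Suggest the single cell with the highest profitability score."""
    scores = stp_scores(records, hour)
    return max(scores, key=scores.get)

best_cell = recommend(HISTORY, 8)
```

Note that, as the related-work paragraph points out, this scores one location at a time; it does not plan a sequence of pick-up points jointly.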
1304.2144
2952453532
Lu et al. @cite_8 introduced the problem of finding an optimal trip route under a time constraint. They also proposed an efficient trip-planning method that takes the current position of the user into account. However, their method uses the scores of attractions to measure the preference of a route.
{ "cite_N": [ "@cite_8" ], "mid": [ "145447398" ], "abstract": [ "Short distance trips are crucial for urban mobility and accessibility. They can contribute to integrated transportation (the “last mile” problem), and more generally to urban ad-hoc ride sharing scenarios. Since no transport provider covers short distance trips where demand arises, private car use is flourishing in recent decades, with all the known disadvantages of traffic congestion, resource wastes, air pollution, and insufficient parking space especially in city centers. Taxis are focusing on providing a door-to-door service, but they do not perform well in short distance trip pickup and delivery services. This paper identifies the obstacles, and suggests the empty cruise time of taxis as (a) a feasible solution for the short distance trip problem, and (b) a contribution to develop a short distance trip market for the taxi industry. This empty cruise contribution hypothesis is investigated by testing different models that define ad-hoc matches of passengers and empty cruising taxis. An agent-based simulation is designed to study the match probability by these models. Based on the experimental results it is shown that taxi empty cruise match models have the potential to solve the short distance problem and to develop the taxi short distance trip markets." ] }
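The time-constrained trip-planning problem described above (@cite_8) can be illustrated with a small brute-force search that maximizes total attraction score under a time budget. All names, scores, and times are invented, and the cited paper's actual method is more efficient than this exhaustive sketch:

```python
# Hypothetical attractions (score, visit time in hours) and travel times.
SCORES = {"museum": 5, "park": 3, "tower": 4}
VISIT = {"museum": 2.0, "park": 1.0, "tower": 1.5}
TRAVEL = {frozenset(p): t for p, t in [
    (("museum", "park"), 0.5),
    (("museum", "tower"), 1.0),
    (("park", "tower"), 0.8),
]}

def best_trip(budget, route=(), used=0.0):
    """Exhaustive search: maximise total attraction score within the time
    budget, accounting for visit and travel times.  Returns (score, route)."""
    best = (sum(SCORES[a] for a in route), route)
    for a in SCORES:
        if a in route:
            continue
        extra = VISIT[a] + (TRAVEL[frozenset((route[-1], a))] if route else 0.0)
        if used + extra <= budget:
            best = max(best, best_trip(budget, route + (a,), used + extra))
    return best
```

With a 3-hour budget only the museum fits; a 5-hour budget admits a two-stop trip, showing how the constraint shapes the recommended route.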
1304.2671
82290933
The matching of the soundtrack in a movie or a video can have an enormous influence on the message being conveyed and its impact, in the sense of involvement and engagement, and ultimately on their aesthetic and entertainment qualities. Art is often associated with creativity, implying the presence of inspiration, originality and appropriateness. Evolutionary systems provide us with novelty, showing us new and subtly different solutions in every generation, possibly stimulating the creativity of the human using the system. In this paper, we present Genetic Soundtracks, an evolutionary approach to the creative matching of audio to a video. It analyzes both media to extract features based on their content, and adopts genetic algorithms, with the purpose of truncating, combining and adjusting audio clips, to align and match them with the video scenes. Index Terms — Genetic algorithms, multimedia, entertainment, feature extraction, audio & video signal processing, video editing
In Synesthetic Video @cite_7 , the authors explored the relation of visual and auditory properties to experience video in cross-sensorial modes, resulting in ways to hear its colours (synthesised, rather than matched from existing audio) and to influence its visual properties with sound and music, through user interaction or ambient influence. The motivations behind this work were accessibility, enriching users' experiences, and stimulating and supporting creativity. In @cite_3 , automatic and semi-automatic selection and alignment of video segments to music is performed. The objective of the proposed method is to create a suitable video track for a given soundtrack, which is the opposite of our work. The process is based on the detection of audio and video changes, plus camera motion and exposure, to help determine the suitability between the video and audio tracks. Deterministic methods, such as best-first search, are proposed for the alignment of audio and video.
{ "cite_N": [ "@cite_3", "@cite_7" ], "mid": [ "2053836391", "2001555052" ], "abstract": [ "We present methods for automatic and semi-automatic creation of music videos, given an arbitrary audio soundtrack and source video. Significant audio changes are automatically detected; similarly, the source video is automatically segmented and analyzed for suitability based on camera motion and exposure. Video with excessive camera motion or poor contrast is penalized with a high unsuitability score, and is more likely to be discarded in the final edit. High quality video clips are then automatically selected and aligned in time with significant audio changes. Video clips are adjusted to match the audio segments by selecting the most suitable region of the desired length. Besides a fully automated solution, our system can also start with clips manually selected and ordered using a graphical interface. The video is then created by truncating the selected clips (preserving the high quality portions) to produce a video digest that is synchronized with the soundtrack music, thus enhancing the impact of both.", "In this paper we present Synesthetic Video, an interactive video that allows to experience video in cross-sensorial ways, to hear its colors and to influence its visual properties with sound and music, through user interaction or ambient influence. Our main motivations include accessibility, enriching users experiences, stimulating and supporting users creativity, and to learn more about synesthesia and how videos can influence and be influenced by users and the ambient, at the crossroads of art, science and technology." ] }
1304.2671
82290933
Evolutionary computation has been widely used in art domains, such as music generation @cite_9 and video generation @cite_1 . In @cite_0 , music videos are automatically generated from personal home videos, based on the extraction and matching of temporal structures of video and music, using genetic algorithms to find globally optimal solutions. These solutions may involve repetitive patterns in the video based on those found in the music. In MovieGene @cite_4 , the authors used genetic algorithms to explore creative editing and production of videos, with the main focus on visual and semantic properties, by defining criteria or interactively performing selections in the evolving population of video clips, which can be explored and discovered through emergent narratives and aesthetics.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_1", "@cite_4" ], "mid": [ "2095005467", "", "2152743478", "1905344765" ], "abstract": [ "Music video (MV) is a short film meant to present a visual representation of a popular music song. In this paper, we present a system that automatically generates MV-like videos from personal home videos based on observations that generally there are obvious repetitive visual and aural patterns in MVs. Based on a set of video and music analysis algorithms, the automatic music video (AMV) generation system automatically extracts temporal structures of the video and music, as well as repetitive patterns in the music. And then, according to the structure and patterns, a set of highlight segments from the raw home video footage are selected, aiming at matching the visual content with the aural structure and pattern. And last, the output music video is rendered by connecting the selected highlight video segments with appropriate transition effects, accompanied with the music. Experiments show that the results are compelling and promising.", "", "The boundaries of art are subjective, but the impetus for art is often associated with creativity, regarded with wonder and admiration along human history. Most interesting activities and their products are a result of creativity. The main goal of our approach is to explore new creative ways of editing and producing videos, using evolutionary algorithms. A creative evolutionary system makes use of evolutionary computation operators and properties and is designed to aid our own creative processes, and to generate results to problems that traditionally required creative people to solve. Our system is able to generate new videos or to help a user in doing so. 
New video sequences are combined and selected, based on their characteristics represented as video annotations, either by defining criteria or by interactively performing selections in the evolving population of video clips, in forms that can reflect editing styles. With evolving video, the clips can be explored through emergent narratives and aesthetics in ways that may reveal or inspire creativity in digital art.", "We propose a new multimedia authoring paradigm based on evolutionary computation, video annotation, and cinematic rules. New clips are produced in an evolving population through genetic transformations influenced by user choices, and regulated by cinematic techniques like montage and video editing. The evolutionary mechanisms, through the fitness function will condition how video sequences are retrieved and assembled, based on the video annotations. The system uses several descriptors, as genetic information, coded in an XML document following the MPEG-7 standard. With evolving video, the clips can be explored and discovered through emergent narratives and aesthetics in ways that inspire creativity and learning about the topics that are presented." ] }
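The genetic-algorithm approach these works share (Genetic Soundtracks, @cite_0 , MovieGene) can be sketched with a toy example: evolve an assignment of audio clips to video scenes so that a content feature matches. The single invented "energy" feature, the fitness function, and the GA parameters are all assumptions, not any of the cited systems' implementations:

```python
import random

random.seed(0)

# Assumed stand-ins for extracted content features: one energy per scene/clip.
SCENES = [0.9, 0.2, 0.6, 0.4]           # video scenes
CLIPS = [0.1, 0.5, 0.95, 0.35, 0.65]    # candidate audio clips

def fitness(assignment):
    """Higher when each scene gets an audio clip of similar energy."""
    return -sum(abs(SCENES[i] - CLIPS[c]) for i, c in enumerate(assignment))

def evolve(pop_size=30, generations=60):
    """Truncation selection + one-point crossover + point mutation."""
    pop = [[random.randrange(len(CLIPS)) for _ in SCENES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist survivors
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(SCENES))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # point mutation
                child[random.randrange(len(SCENES))] = \
                    random.randrange(len(CLIPS))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fitness decomposes additively over scenes, crossover mixes good genes quickly; in the real systems the genome additionally encodes clip truncation points and transitions.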
1304.1449
2952513421
Our main result is that the Steiner Point Removal (SPR) problem can always be solved with polylogarithmic distortion, which answers in the affirmative a question posed by Chan, Xia, Konjevod, and Richa (2006). Specifically, we prove that for every edge-weighted graph @math and a subset of terminals @math , there is a graph @math that is isomorphic to a minor of @math , such that for every two terminals @math , the shortest-path distances between them in @math and in @math satisfy @math . Our existence proof actually gives a randomized polynomial-time algorithm. Our proof features a new variant of metric decomposition. It is well-known that every @math -point metric space @math admits a @math -separating decomposition for @math , which roughly means for every desired diameter bound @math there is a randomized partitioning of @math , which satisfies the following separation requirement: for every @math , the probability they lie in different clusters of the partition is at most @math . We introduce an additional requirement, which is the following tail bound: for every shortest-path @math of length @math , the number of clusters of the partition that meet the path @math , denoted @math , satisfies @math for all @math .
This problem differs from SPR in that @math may contain a few non-terminals, but all terminal distances must be preserved exactly. Formally, the objective is to find a small graph @math such that (i) @math is isomorphic to a minor of @math ; (ii) @math ; and (iii) for every @math , @math . This problem was originally defined by Krauthgamer and Zondiner @cite_23 , who showed an upper bound of @math for general graphs, and a lower bound of @math that holds even for planar graphs.
{ "cite_N": [ "@cite_23" ], "mid": [ "1968729061" ], "abstract": [ "We introduce the following notion of compressing an undirected graph @math with (nonnegative) edge-lengths and terminal vertices @math . A distance-preserving minor is a minor @math (of @math ) with possibly different edge-lengths, such that @math and the shortest-path distance between every pair of terminals is exactly the same in @math and in @math . We ask: what is the smallest @math such that every graph @math with @math terminals admits a distance-preserving minor @math with at most @math vertices? Simple analysis shows that @math . Our main result proves that @math , significantly improving on the trivial @math . Our lower bound holds even for planar graphs @math , in contrast to graphs @math of constant treewidth, for which we prove that @math vertices suffice." ] }
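Requirement (iii) above — exact preservation of terminal shortest-path distances in a minor with re-weighted edges — can be checked concretely with Floyd–Warshall. The 4-vertex example graph and the particular contraction are invented for illustration:

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths; edges maps (u, v) -> length, undirected."""
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# G: path 0-1-2-3 plus a chord (0, 2); terminals are 0 and 3.
G_edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 2): 1.5}
dG = floyd_warshall(4, G_edges)

# G': contract edge (1, 2) of G; vertex 3 becomes 2, and the (0, 1) edge is
# re-weighted to 1.5 so the terminal distance survives the contraction.
Gp_edges = {(0, 1): 1.5, (1, 2): 1.0}
dGp = floyd_warshall(3, Gp_edges)
```

Here G' is a minor of G with one fewer vertex, and the re-weighting (which the definition of a distance-preserving minor explicitly allows) keeps the single terminal pair at its exact distance 2.5.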
1304.1845
2952147226
In this paper we study how the network of agents adopting a particular technology relates to the structure of the underlying network over which the technology adoption spreads. We develop a model and show that the network of agents adopting a particular technology may have characteristics that differ significantly from the social network of agents over which the technology spreads. For example, the network induced by a cascade may have a heavy-tailed degree distribution even if the original network does not. This provides evidence that online social networks created by technology adoption over an underlying social network may look fundamentally different from social networks and indicates that using data from many online social networks may mislead us if we try to use it to directly infer the structure of social networks. Our results provide an alternate explanation for certain properties repeatedly observed in data sets, for example: heavy-tailed degree distribution, network densification, shrinking diameter, and network community profile. These properties could be caused by a sort of 'sampling bias' rather than by attributes of the underlying social structure. By generating networks using cascades over traditional network models that do not themselves contain these properties, we can nevertheless reliably produce networks that contain all these properties. An opportunity for interesting future research is developing new methods that correctly infer underlying network structure from data about a network that is generated via a cascade spread over the underlying network.
Technology adoption as a process on a social network has been studied and documented before; however, usually only the size of the cascade is considered. For example, in an experimental study, Centola @cite_31 creates online communities populated with volunteers and studies how joining a health-forum network spreads over a strictly enforced underlying network. Centola was mostly concerned with which types of underlying network structure would foster the largest cascade. For more examples, see Chapters 6 and 9 of @cite_16 .
{ "cite_N": [ "@cite_31", "@cite_16" ], "mid": [ "2150208547", "1533368239" ], "abstract": [ "How do social networks affect the spread of behavior? A popular hypothesis states that networks with many clustered ties and a high degree of separation will be less effective for behavioral diffusion than networks in which locally redundant ties are rewired to provide shortcuts across the social space. A competing hypothesis argues that when behaviors require social reinforcement, a network with more clustering may be more advantageous, even if the network as a whole has a larger diameter. I investigated the effects of network structure on diffusion by studying the spread of health behavior through artificially structured online communities. Individual adoption was much more likely when participants received social reinforcement from multiple neighbors in the social network. The behavior spread farther and faster across clustered-lattice networks than across corresponding random networks.", "Networks of relationships help determine the careers that people choose, the jobs they obtain, the products they buy, and how they vote. The many aspects of our lives that are governed by social networks make it critical to understand how they impact behavior, which network structures are likely to emerge in a society, and why we organize ourselves as we do. In Social and Economic Networks, Matthew Jackson offers a comprehensive introduction to social and economic networks, drawing on the latest findings in economics, sociology, computer science, physics, and mathematics. He provides empirical background on networks and the regularities that they exhibit, and discusses random graph-based models and strategic models of network formation. 
He helps readers to understand behavior in networked societies, with a detailed analysis of learning and diffusion in networks, decision making by individuals who are influenced by their social neighbors, game theory and markets on networks, and a host of related subjects. Jackson also describes the varied statistical and modeling techniques used to analyze social networks. Each chapter includes exercises to aid students in their analysis of how networks function. This book is an indispensable resource for students and researchers in economics, mathematics, physics, sociology, and business." ] }
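Centola's finding above — that cascades requiring social reinforcement spread farther on clustered lattices — can be reproduced in miniature with a threshold ("complex contagion") model. The graph size, degree, and threshold are assumptions chosen for illustration:

```python
def ring_lattice(n, k=4):
    """Ring lattice: each node is linked to its k nearest neighbours."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            nbrs[i].add((i + j) % n)
            nbrs[(i + j) % n].add(i)
    return nbrs

def complex_contagion(nbrs, seeds, threshold=2):
    """A node adopts once at least `threshold` of its neighbours have."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in nbrs:
            if v not in adopted and len(nbrs[v] & adopted) >= threshold:
                adopted.add(v)
                changed = True
    return adopted

lattice = ring_lattice(20)
spread = complex_contagion(lattice, seeds={0, 1})
```

With two adjacent seeds the cascade wraps around the whole ring, because the lattice's clustering lets each frontier node see two adopters; a single seed recruits no one at threshold 2, which is the reinforcement effect the experiment measures.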
1304.1845
2952147226
A series of works (e.g., @cite_4 @cite_11 ) points out a similar sampling bias in the context of traceroute sampling. For example, Achlioptas, Clauset, Kempe, and Moore @cite_11 show that traceroute sampling finds power-law degree distributions even in regular random graphs (which are very far from having a power-law degree distribution). In other words, the bias introduced by traceroute sampling can make a power-law degree distribution appear even when the underlying degree distribution is constant.
{ "cite_N": [ "@cite_4", "@cite_11" ], "mid": [ "2107648668", "2120511087" ], "abstract": [ "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient. We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a recent paper found empirically that the resulting sample is intrinsically biased. 
For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson.In this paper, we study the bias of traceroute sampling systematically, and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of on a rigorous footing, and extends them to nearly arbitrary degree distributions." ] }
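The sampling bias the two cited papers analyze can be demonstrated in a few lines: take a single BFS (shortest-path) tree of a dense Erdős–Rényi graph — a crude stand-in for single-source traceroute sampling — and observe that most sampled nodes look like degree-1 leaves even though the underlying graph almost surely has no low-degree vertices. The parameters are arbitrary:

```python
import random
from collections import Counter, deque

random.seed(1)

def erdos_renyi(n, p):
    """G(n, p) as an adjacency dict."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def bfs_tree_degrees(nbrs, source):
    """Degrees observed when only one shortest-path (BFS) tree from `source`
    is visible -- a crude stand-in for single-source traceroute sampling."""
    tree_deg = Counter()
    seen, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v in nbrs[u]:
            if v not in seen:
                seen.add(v)
                tree_deg[u] += 1
                tree_deg[v] += 1
                q.append(v)
    return tree_deg

g = erdos_renyi(300, 0.05)          # mean degree ~15, nearly regular
tree_deg = bfs_tree_degrees(g, 0)
leaves = sum(1 for deg in tree_deg.values() if deg == 1)
```

The tree's degree sequence is far more skewed than the graph's near-binomial one: a few nodes near the source collect many tree edges while most nodes appear as leaves, which is the effect Achlioptas et al. make rigorous.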
1304.1567
1807420371
We propose an automated and unsupervised methodology for a novel summarization of group behavior based on content preference. We show that graph theoretical community evolution (based on similarity of user preference for content) is effective in indexing these dynamics. Combined with text analysis that targets automatically-identified representative content for each community, our method produces a novel multi-layered representation of evolving group behavior. We demonstrate this methodology in the context of political discourse on a social news site with data that spans more than four years and find coexisting political leanings over extended periods and a disruptive external event that led to a significant reorganization of existing patterns. Finally, where there exists no ground truth, we propose a new evaluation approach by using entropy measures as evidence of coherence along the evolution path of these groups. This methodology is valuable to designers and managers of online forums in need of granular analytics of user activity, as well as to researchers in social and political sciences who wish to extend their inquiries to large-scale data available on the web.
Clustering and community detection methods are in essence network summarization tools @cite_37 @cite_38 @cite_0 @cite_31 @cite_1 . A survey paper by @cite_12 provides a comprehensive summary of this field. Building on this literature, a growing body of work has been produced on community evolution, varying from works on evolutionary clustering @cite_6 @cite_14 @cite_18 and community detection in dynamic social networks @cite_10 to processes that also optimize for smoothness in temporal evolution @cite_32 or use community cores in evolving networks @cite_11 . A categorization and review of community evolution methods is presented by @cite_25 . These works focus solely on the network structure.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_14", "@cite_18", "@cite_1", "@cite_32", "@cite_6", "@cite_0", "@cite_31", "@cite_10", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2047940964", "1971421925", "", "1989745999", "2032273054", "2166563561", "2092124750", "2131681506", "", "", "2025665774", "2127048411", "2014714110" ], "abstract": [ "The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.", "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. 
In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.", "", "It has long been realized that the social network of scientific collaborations provides a window on patterns of collaboration within the academic community. Investigations and studies about static and dynamic properties of the co-authorship network have also been done in recent years. However, the emphasis of most of the research is on the analysis of the macroscopic structure of the whole network or community over time, such as distance, diameter shrinking and the densification phenomenon, and microscopic formation analysis of links, groups or communities over time. But in fact, how an individual or a community grows over time may not only provide a new viewpoint to mine copious and valuable information about scientific networks but also reveal important factors that influence the growth process. In this paper, from a temporal and microscopic analytical perspective, we propose a method to trace scientific individual's and community's growth process based on community's evolution path combination with quantifiable measurements. 
During the process of tracing, we find that the lifespan of a community is related to its ability to alter its membership; what's more, and complementarily, we find that the lifespan of a community is also related to its ability to maintain its core members, meaning that a community may last for a longer lifespan if its core members are more stable. Meanwhile, we also trace the growth process of research individuals based on the evolution of communities.", "The modularity of a network quantifies the extent, relative to a null model network, to which vertices cluster into community groups. We define a null model appropriate for bipartite networks, and use it to define a bipartite modularity. The bipartite modularity is presented in terms of a modularity matrix @math ; some key properties of the eigenspectrum of @math are identified and used to describe an algorithm for identifying modules in bipartite networks. The algorithm is based on the idea that the modules in the two parts of the network are dependent, with each part mutually being used to induce the vertices for the other part into the modules. We apply the algorithm to real-world network data, showing that the algorithm successfully identifies the modular structure of bipartite networks.", "We discover communities from social network data, and analyze the community evolution. These communities are inherent characteristics of human interaction in online social networks, as well as paper citation networks. Also, communities may evolve over time, due to changes to individuals' roles and social status in the network as well as changes to individuals' research interests. We present an innovative algorithm that deviates from the traditional two-step approach to analyze community evolutions. In the traditional approach, communities are first detected for each time slice, and then compared to determine correspondences. 
We argue that this approach is inappropriate in applications with noisy data. In this paper, we propose FacetNet for analyzing communities and their evolutions through a robust unified process. In this novel framework, communities not only generate evolutions, they also are regularized by the temporal smoothness of evolutions. As a result, this framework will discover communities that jointly maximize the fit to the observed data and the temporal evolution. Our approach relies on formulating the problem in terms of non-negative matrix factorization, where communities and their evolutions are factorized in a unified way. Then we develop an iterative algorithm, with proven low time complexity, which is guaranteed to converge to an optimal solution. We perform extensive experimental studies, on both synthetic datasets and real datasets, to demonstrate that our method discovers meaningful communities and provides additional insights not directly obtainable from traditional methods.", "The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. 
We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency: the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions.", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.", "", "", "The fast and unpredictable evolution of social data poses challenges for capturing user activities and complex associations. Evolving social graph clustering promises to uncover the dynamics of latent user and content patterns. This Web extra overviews evolving data clustering approaches.", "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. 
Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.", "Community structure is a key property of complex networks. Many algorithms have been proposed to automatically detect communities in static networks but few studies have considered the detection and tracking of communities in an evolving network. Tracking the evolution of a given community over time requires a clustering algorithm that produces stable clusters. However, most community detection algorithms are very unstable and therefore unusable for evolving networks. In this paper, we apply the methodology proposed in [seifi2012] to detect what we call community cores in evolving networks. We show that cores are much more stable than \"classical\" communities and that we can overcome the disadvantages of the stabilized methods." ] }
1304.1567
1807420371
We propose an automated and unsupervised methodology for a novel summarization of group behavior based on content preference. We show that graph theoretical community evolution (based on similarity of user preference for content) is effective in indexing these dynamics. Combined with text analysis that targets automatically-identified representative content for each community, our method produces a novel multi-layered representation of evolving group behavior. We demonstrate this methodology in the context of political discourse on a social news site with data that spans more than four years and find coexisting political leanings over extended periods and a disruptive external event that led to a significant reorganization of existing patterns. Finally, where there exists no ground truth, we propose a new evaluation approach by using entropy measures as evidence of coherence along the evolution path of these groups. This methodology is valuable to designers and managers of online forums in need of granular analytics of user activity, as well as to researchers in social and political sciences who wish to extend their inquiries to large-scale data available on the web.
A number of recent papers focus on using content alone to create summaries of text, such as opinions @cite_8 , product reviews @cite_15 , political leanings @cite_20 @cite_35 @cite_2 , and news streams - more specifically, @cite_28 create structured summaries of content in the form of narrative maps and @cite_4 produce story-lines of streaming news. The bulk of literature in this field uses text-based techniques such as language models used in sentiment and subjectivity analyses and topic modeling and are not concerned with user networks. On the other hand, incorporating both the network graph and content, @cite_5 use the citation network between documents to get a better summarization of document content over time (topic evolution), @cite_22 track popular events in the social web, and @cite_13 summarize activity over time. Yet none of the mentioned papers produce a comprehensive multi-scale map of group behavior among users.
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_22", "@cite_8", "@cite_28", "@cite_2", "@cite_5", "@cite_15", "@cite_13", "@cite_20" ], "mid": [ "2055232564", "", "2056797132", "2164936599", "2069207503", "1974681798", "2106211940", "2141631351", "2161696730", "" ], "abstract": [ "Affordable and ubiquitous online communications (social media) provide the means for flows of ideas and opinions and play an increasing role for the transformation and cohesion of society - yet little is understood about how online opinions emerge, diffuse, and gain momentum. To address this problem, an opinion formation framework based on content analysis of social media and sociophysical system modeling is proposed. Based on prior research and own projects, three building blocks of online opinion tracking and simulation are described: (1) automated topic and opinion detection in real-time, (2) topic and opinion modeling and agent-based simulation, and (3) visualizations of topic and opinion networks. Finally, two application scenarios are presented to illustrate the framework and motivate further research.", "", "User generated information in online communities has been characterized with the mixture of a text stream and a network structure both changing over time. A good example is a web-blogging community with the daily blog posts and a social network of bloggers. An important task of analyzing an online community is to observe and track the popular events, or topics that evolve over time in the community. Existing approaches usually focus on either the burstiness of topics or the evolution of networks, but ignoring the interplay between textual topics and network structures. In this paper, we formally define the problem of popular event tracking in online communities (PET), focusing on the interplay between texts and networks. 
We propose a novel statistical method that models the popularity of events over time, taking into consideration the burstiness of user interest, information diffusion on the network structure, and the evolution of textual topics. Specifically, a Gibbs Random Field is defined to model the influence of historic status and the dependency relationships in the graph; thereafter a topic model generates the words in text content of the event, regularized by the Gibbs Random Field. We prove that two classic models in information diffusion and text burstiness are special cases of our model under certain situations. Empirical experiments with two different communities and datasets (i.e., Twitter and DBLP) show that our approach is effective and outperforms existing approaches.", "This paper presents a new unsupervised approach to generating ultra-concise summaries of opinions. We formulate the problem of generating such a micropinion summary as an optimization problem, where we seek a set of concise and non-redundant phrases that are readable and represent key opinions in text. We measure representativeness based on a modified mutual information function and model readability with an n-gram language model. We propose some heuristic algorithms to efficiently solve this optimization problem. Evaluation results show that our unsupervised approach outperforms other state of the art summarization methods and the generated summaries are informative and readable.", "When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives. In order to explore these stories, one needs a map to navigate unfamiliar territory. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents maximizing coverage of salient pieces of information. 
Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. We first formalize characteristics of good maps and formulate their construction as an optimization problem. Then we provide efficient methods with theoretical guarantees for generating maps. Finally, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with a real-world dataset demonstrate that the method is able to produce maps which help users acquire knowledge efficiently.", "In this paper, we address a relatively new and interesting text categorization problem: classify a political blog as either liberal or conservative, based on its political leaning. Our subjectivity analysis based method is twofold: 1) we identify subjective sentences that contain at least two strong subjective clues based on the General Inquirer dictionary; 2) from subjective sentences identified, we extract opinion expressions and other features to build political leaning classifiers. Experimental results with a political blog corpus we built show that by using features from subjective sentences can significantly improve the classification performance. In addition, by extracting opinion expressions from subjective sentences, we are able to reveal opinions that are characteristic of a specific political leaning to some extent.", "In this paper we study how to discover the evolution of topics over time in a time-stamped document collection. Our approach is uniquely designed to capture the rich topology of topic evolution inherent in the corpus. Instead of characterizing the evolving topics at fixed time points, we conceptually define a topic as a quantized unit of evolutionary change in content and discover topics with the time of their appearance in the corpus. 
Discovered topics are then connected to form a topic evolution graph using a measure derived from the underlying document network. Our approach allows inhomogeneous distribution of topics over time and does not impose any topological restriction in topic evolution graphs. We evaluate our algorithm on the ACM corpus. The topic evolution graphs obtained from the ACM corpus provide an effective and concrete summary of the corpus with remarkably rich topology that is congruent to our background knowledge. In a finer resolution, the graphs reveal concrete information about the corpus that was previously unknown to us, suggesting the utility of our approach as a navigational tool for the corpus.", "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. A prototype system called Opinion Observer is also implemented. The system is such that with a single glance of its visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. For a potential customer, he or she can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him or her to decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. 
Experimental results show that the technique is highly effective and outperforms existing methods significantly.", "This paper presents JAM (Joint Action Matrix Factorization), a novel framework to summarize social activity from rich media social networks. Summarizing social network activities requires an understanding of the relationships among concepts, users, and the context in which the concepts are used. Our work has three contributions: First, we propose a novel summarization method which extracts the co-evolution on multiple facets of social activity – who (users), what (concepts), how (actions) and when (time), and constructs a context rich summary called "activity theme". Second, we provide an efficient algorithm for mining activity themes over time. The algorithm extracts representative elements in each facet based on their co-occurrences with other facets through specific actions. Third, we propose new metrics for evaluating the summarization results based on the temporal and topological relationship among activity themes. Extensive experiments on real-world Flickr datasets demonstrate that our technique significantly outperforms several baseline algorithms. The results explore nontrivial evolution in Flickr photo-sharing communities.", "" ] }
1304.1567
1807420371
We propose an automated and unsupervised methodology for a novel summarization of group behavior based on content preference. We show that graph theoretical community evolution (based on similarity of user preference for content) is effective in indexing these dynamics. Combined with text analysis that targets automatically-identified representative content for each community, our method produces a novel multi-layered representation of evolving group behavior. We demonstrate this methodology in the context of political discourse on a social news site with data that spans more than four years and find coexisting political leanings over extended periods and a disruptive external event that led to a significant reorganization of existing patterns. Finally, where there exists no ground truth, we propose a new evaluation approach by using entropy measures as evidence of coherence along the evolution path of these groups. This methodology is valuable to designers and managers of online forums in need of granular analytics of user activity, as well as to researchers in social and political sciences who wish to extend their inquiries to large-scale data available on the web.
There is considerable debate over whether new online spaces promote diversity or, through winner-take-all dynamics, exacerbate polarization and conflict in society. While some have hailed the promise of democratic effects of the Internet, others have argued against this notion, asserting that such web-based platforms increase interaction among like-minded people and reduce contact among people of different opinions, leading to fragmentation in society (see for example, @cite_16 @cite_27 and @cite_26 ). @cite_30 demonstrate political polarization in linking patterns between blogs labeled as liberal or conservative. @cite_9 @cite_29 and @cite_24 propose and simulate models of polarization dynamics in populations; @cite_21 jointly classify Digg users and news articles in one of two classes (liberal or conservative) using label propagation starting from a small number of labeled users and articles. Finally, a seminal work in political science @cite_33 models polarization in American politics through analyzing roll-call votes by members of Congress (also see @cite_19 on dimensionality of these votes).
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_33", "@cite_9", "@cite_29", "@cite_21", "@cite_24", "@cite_19", "@cite_27", "@cite_16" ], "mid": [ "2152284345", "618228702", "1985949226", "2101134179", "", "2217796615", "2046643693", "2320876964", "2108243506", "" ], "abstract": [ "In this paper, we study the linking patterns and discussion topics of political bloggers. Our aim is to measure the degree of interaction between liberal and conservative blogs, and to uncover any differences in the structure of the two communities. Specifically, we analyze the posts of 40 \"A-list\" blogs over the period of two months preceding the U.S. Presidential Election of 2004, to study how often they referred to one another and to quantify the overlap in the topics they discussed, both within the liberal and conservative communities, and also across communities. We also study a single day snapshot of over 1,000 political blogs. This snapshot captures blogrolls (the list of links to other blogs frequently found in sidebars), and presents a more static picture of a broader blogosphere. Most significantly, we find differences in the behavior of liberal and conservative blogs, with conservative blogs linking to each other more frequently and in a denser pattern.", "Is the Internet democratizing American politics? Do political Web sites and blogs mobilize inactive citizens and make the public sphere more inclusive? The Myth of Digital Democracy reveals that, contrary to popular belief, the Internet has done little to broaden political discourse but in fact empowers a small set of elites--some new, but most familiar. Matthew Hindman argues that, though hundreds of thousands of Americans blog about politics, blogs receive only a miniscule portion of Web traffic, and most blog readership goes to a handful of mainstream, highly educated professionals. 
He shows how, despite the wealth of independent Web sites, online news audiences are concentrated on the top twenty outlets, and online organizing and fund-raising are dominated by a few powerful interest groups. Hindman tracks nearly three million Web pages, analyzing how their links are structured, how citizens search for political content, and how leading search engines like Google and Yahoo! funnel traffic to popular outlets. He finds that while the Internet has increased some forms of political participation and transformed the way interest groups and candidates organize, mobilize, and raise funds, elites still strongly shape how political material on the Web is presented and accessed. The Myth of Digital Democracy debunks popular notions about political discourse in the digital age, revealing how the Internet has neither diminished the audience share of corporate media nor given greater voice to ordinary citizens.", "A general nonlinear logit model is used to analyze political choice data. The model assumes probabilistic voting based on a spatial utility function. The parameters of the utility function and the spatial coordinates of the choices and the choosers can all be estimated on the basis of observed choices. Ordinary Guttman scaling is a degenerate case of this model. Estimation of the model is implemented in the NOMINATE program for one dimensional analysis of two alternative choices with no nonvoting. The robustness and face validity of the program outputs are evaluated on the basis of roll call voting data for the U.S. House and Senate.", "Information technology can link geographically separated people and help them locate interesting or useful resources. These attributes have the potential to bridge gaps and unite communities. Paradoxically, they also have the potential to fragment interaction and divide groups. Advances in technology can make it easier for people to spend more time on special interests and to screen out unwanted contact. 
Geographic boundaries can thus be supplanted by boundaries on other dimensions. This paper formally defines a precise set of measures of information integration and develops a model of individual knowledge profiles and community affiliation. These factors suggest specific conditions under which improved access, search, and screening can either integrate or fragment interaction on various dimensions. As IT capabilities continue to improve, preferences--not geography or technology--become the key determinants of community boundaries.", "", "Social news aggregator services generate readers’ subjective reactions to news opinion articles. Can we use those as a resource to classify articles as liberal or conservative, even without knowing the self-identified political leaning of most users? We applied three semi-supervised learning methods that propagate classifications of political news articles and users as conservative or liberal, based on the assumption that liberal users will vote for liberal articles more often, and similarly for conservative users and articles. Starting from a few labeled articles and users, the algorithms propagate political leaning labels to the entire graph. In cross-validation, the best algorithm achieved 99.6 accuracy on held-out users and 96.3 accuracy on held-out articles. Adding social data such as users’ friendship or text features such as cosine similarity did not improve accuracy. The propagation algorithms, using the subjective liking data from users, also performed better than an SVM based text classifier, which achieved 92.0 accuracy on articles.", "It is not uncommon for certain social networks to divide into two opposing camps in response to stress. This happens, for example, in networks of political parties during winner-takes-all elections, in networks of companies competing to establish technical standards, and in networks of nations faced with mounting threats of war. 
A simple model for these two-sided separations is the dynamical system dX/dt = X^2, where X is a matrix of the friendliness or unfriendliness between pairs of nodes in the network. Previous simulations suggested that only two types of behavior were possible for this system: Either all relationships become friendly or two hostile factions emerge. Here we prove that for generic initial conditions, these are indeed the only possible outcomes. Our analysis yields a closed-form expression for faction membership as a function of the initial conditions and implies that the initial amount of friendliness in large social networks (started from random initial conditions) determines whether they will end up in intractable conflict or global harmony.", "While dimensional studies of congressional voting find a single, ideological dimension, regression estimates find several constituency and party dimensions in addition to ideology. I rescale several unidimensional studies to show their increased classification success over the null hypothesis that votes are not unidimensional. Several null hypotheses are explored. With these null hypotheses, 66%-75% of nonunidimensional roll call votes are nevertheless correctly classified by one dimension. After the rescaling, one dimension succeeds in correctly classifying 25%-50% of the votes, and second and third dimensions are important.", "New communications technologies offer important opportunities for revitalizing American democracy. They encourage broader issue discussions, greater specificity in candidate positions, and positive messages rather than negative ones. New communications systems can begin to uncouple wealth from voter impressions, make candidate messages available in multiple formats and languages, and encourage two-way communications: from candidate to candidate, from voter to candidate, and from voter to voter.", "" ] }
1304.1863
2169292308
Solid-state drives (SSDs) have been widely deployed in desktops and data centers. However, SSDs suffer from bit errors, and the bit error rate is time dependent since it increases as an SSD wears down. Traditional storage systems mainly use parity-based RAID to provide reliability guarantees by striping redundancy across multiple devices, but the effectiveness of RAID in SSDs remains debatable as parity updates aggravate the wearing and bit error rates of SSDs. In particular, an open problem is how different parity distributions over multiple devices, such as the even distribution suggested by conventional wisdom, or uneven distributions proposed in recent RAID schemes for SSDs, may influence the reliability of an SSD RAID array. To address this fundamental problem, we propose the first analytical model to quantify the reliability dynamics of an SSD RAID array. Specifically, we develop a "non-homogeneous" continuous-time Markov chain model, and derive the transient reliability solution. We validate our model via trace-driven simulations and conduct numerical analysis to provide insights into the reliability dynamics of SSD RAID arrays under different parity distributions and subject to different bit error rates and array configurations. Designers can use our model to decide the appropriate parity distribution based on their reliability requirements.
There have been extensive studies on NAND flash-based SSDs. A detailed survey of the algorithms and data structures for flash memories is found in @cite_10 . Recent papers empirically study the intrinsic characteristics of SSDs (e.g., @cite_16 @cite_7 ), or develop analytical models for the write performance (e.g., @cite_12 @cite_26 ) and garbage collection algorithms (e.g., @cite_35 ) of SSDs.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_7", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2950648877", "", "2106436364", "", "2099753358", "2097490816" ], "abstract": [ "Solid state drives (SSDs) have seen wide deployment in mobiles, desktops, and data centers due to their high I/O performance and low energy consumption. As SSDs write data out-of-place, garbage collection (GC) is required to erase and reclaim space with invalid data. However, GC poses additional writes that hinder the I/O performance, while SSD blocks can only endure a finite number of erasures. Thus, there is a performance-durability tradeoff on the design space of GC. To characterize the optimal tradeoff, this paper formulates an analytical model that explores the full optimal design space of any GC algorithm. We first present a stochastic Markov chain model that captures the I/O dynamics of large-scale SSDs, and adapt the mean-field approach to derive the asymptotic steady-state performance. We further prove the model convergence and generalize the model for all types of workload. Inspired by this model, we propose a randomized greedy algorithm (RGA) that can operate along the optimal tradeoff curve with a tunable parameter. Using trace-driven simulation on DiskSim with SSD add-ons, we demonstrate how RGA can be parameterized to realize the performance-durability tradeoff.", "", "Flash Memory based Solid State Drive (SSD) has been called a \"pivotal technology\" that could revolutionize data storage systems. Since SSD shares a common interface with the traditional hard disk drive (HDD), both physically and logically, an effective integration of SSD into the storage hierarchy is very important. However, details of SSD hardware implementations tend to be hidden behind such narrow interfaces. In fact, since sophisticated algorithms are usually, of necessity, adopted in SSD controller firmware, more complex performance dynamics are to be expected in SSD than in HDD systems. 
Most existing literature or product specifications on SSD just provide high-level descriptions and standard performance data, such as bandwidth and latency. In order to gain insight into the unique performance characteristics of SSD, we have conducted intensive experiments and measurements on different types of state-of-the-art SSDs, from low-end to high-end products. We have observed several unexpected performance issues and uncertain behavior of SSDs, which have not been reported in the literature. For example, we found that fragmentation could seriously impact performance -- by a factor of over 14 on a recently announced SSD. Moreover, contrary to the common belief that accesses to SSD are uncorrelated with access patterns, we found a strong correlation between performance and the randomness of data accesses, for both reads and writes. In the worst case, average latency could increase by a factor of 89 and bandwidth could drop to only 0.025 MB/sec. Our study reveals several unanticipated aspects in the performance dynamics of SSD technology that must be addressed by system designers and data-intensive application users in order to effectively place it in the storage hierarchy.", "", "Flash memory is a type of electrically-erasable programmable read-only memory (EEPROM). Because flash memories are nonvolatile and relatively dense, they are now used to store files and other persistent objects in handheld computers, mobile phones, digital cameras, portable music players, and many other computer systems in which magnetic disks are inappropriate. Flash, like earlier EEPROM devices, suffers from two limitations. First, bits can only be cleared by erasing a large block of memory. Second, each block can only sustain a limited number of erasures, after which it can no longer reliably store data. Due to these limitations, sophisticated data structures and algorithms are required to effectively use flash memories. 
These algorithms and data structures support efficient not-in-place updates of data, reduce the number of erasures, and level the wear of the blocks in the device. This survey presents these algorithms and data structures, many of which have only been described in patents until now.", "Solid state drives (SSDs) update data by writing a new copy, rather than overwriting old data, causing prior copies of the same data to be invalidated. These writes are performed in units of pages, while space is reclaimed in units of multi-page erase blocks, necessitating copying of any remaining valid pages in the block before reclamation. The efficiency of this cleaning process greatly affects performance under random workloads; in particular, in SSDs the write bottleneck is typically internal media throughput, and write amplification due to additional internal copying directly reduces application throughput. We present the first precise closed-form solution for write amplification under greedy cleaning for uniformly distributed random traffic, and validate its accuracy via simulation. In addition we also present the first models which predict performance degradation for both LRU cleaning and greedy cleaning under simple non-uniform traffic conditions; simulation results show the first model to be exact and the second to be accurate within 2 . We extend the LRU model to arbitrary combinations of random traffic, and demonstrate its use in predicting cleaning performance for real-world workloads. The utility of these analytic models lies in their amenability to optimization techniques not feasible in simulation. We examine the strategy of separating \"hot\" and \"cold\" data, showing that for our traffic model, such separation eliminates any loss in performance due to non-uniform traffic. 
We show how a system which separates hot and cold data may shift free space from cold to hot data in order to achieve improved performance, and how numeric methods may be used with our model to find the optimum operating point, which approaches a write amplification of 1.0 for increasingly skewed traffic. We examine online methods for achieving this optimal operating point, and show that a control strategy based on our model achieves near-optimal performance for a number of real-world block traces." ] }
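The garbage-collection abstracts above reason about write amplification under greedy victim selection. The toy simulator below is our own sketch (names and parameters are assumptions, not from the cited papers): it models a flash device with uniform random page updates and greedy cleaning, and reports write amplification as total physical page writes divided by user writes.

```python
import random

def simulate_write_amp(n_blocks=64, pages_per_block=32, spare_frac=0.1,
                       user_writes=100_000, seed=1):
    """Toy SSD: uniform random user writes over a logical space covering
    (1 - spare_frac) of raw capacity; greedy GC erases the full block with
    the fewest valid pages. Returns write amplification (>= 1)."""
    rng = random.Random(seed)
    n_logical = int(n_blocks * pages_per_block * (1 - spare_frac))
    loc = {}                                   # logical page -> (block, slot)
    blocks = [[] for _ in range(n_blocks)]     # physical pages in write order
    valid = [0] * n_blocks                     # valid-page count per block
    free = list(range(n_blocks))
    active = free.pop()
    physical = 0                               # every page actually programmed

    def append(lp):                            # precondition: active has room
        nonlocal physical
        physical += 1
        if lp in loc:                          # invalidate the old copy
            valid[loc[lp][0]] -= 1
        blocks[active].append(lp)
        loc[lp] = (active, len(blocks[active]) - 1)
        valid[active] += 1

    def ensure_space():
        nonlocal active
        if len(blocks[active]) < pages_per_block:
            return
        if free:
            active = free.pop()
            return
        # Greedy GC: pick the block with the fewest valid pages, erase it,
        # and relocate its live pages (these are the amplified writes).
        victim = min((b for b in range(n_blocks) if b != active),
                     key=lambda b: valid[b])
        live = [p for s, p in enumerate(blocks[victim])
                if loc.get(p) == (victim, s)]
        for p in live:
            del loc[p]                         # their copies die with the erase
        blocks[victim], valid[victim] = [], 0
        active = victim
        for p in live:
            append(p)

    for _ in range(user_writes):
        ensure_space()
        append(rng.randrange(n_logical))
    return physical / user_writes
```

Qualitatively, more spare capacity should yield lower amplification, matching the tradeoff these models analyze.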
1304.1863
2169292308
Solid-state drives (SSDs) have been widely deployed in desktops and data centers. However, SSDs suffer from bit errors, and the bit error rate is time dependent since it increases as an SSD wears down. Traditional storage systems mainly use parity-based RAID to provide reliability guarantees by striping redundancy across multiple devices, but the effectiveness of RAID in SSDs remains debatable as parity updates aggravate the wearing and bit error rates of SSDs. In particular, an open problem is how different parity distributions over multiple devices, such as the even distribution suggested by conventional wisdom, or uneven distributions proposed in recent RAID schemes for SSDs, may influence the reliability of an SSD RAID array. To address this fundamental problem, we propose the first analytical model to quantify the reliability dynamics of an SSD RAID array. Specifically, we develop a "non-homogeneous" continuous-time Markov chain model, and derive the transient reliability solution. We validate our model via trace-driven simulations and conduct numerical analysis to provide insights into the reliability dynamics of SSD RAID arrays under different parity distributions and subject to different bit error rates and array configurations. Designers can use our model to decide the appropriate parity distribution based on their reliability requirements.
RAID was first introduced in @cite_0 and has been widely used in many storage systems. Performance and reliability analysis of RAID in the context of hard disk drives has been extensively studied (e.g., see @cite_27 @cite_23 @cite_13 @cite_22 @cite_6 ). On the other hand, SSDs have a distinct property that their error rates increase as they wear down, so a new model is necessary to characterize the reliability of SSD RAID.
{ "cite_N": [ "@cite_22", "@cite_6", "@cite_0", "@cite_27", "@cite_23", "@cite_13" ], "mid": [ "1965724639", "2100923816", "2147504831", "1546442581", "2149509970", "1829547464" ], "abstract": [ "A reliability analysis of various disk array architectures (different levels of RAID) is performed. The dependence of reliability and mean time to data loss on various parameters of a disk array is characterized. A study of these characteristics reveals the impact of several design choices for a disk array on its reliability. Issues such as scalability of disk arrays, imperfect coverage of disk failures, cold versus hot spares, effect of predictive disk failures, and dependence of disk array reliability on data reconstruction time are studied.", "We present an analytic model to study the reliability of some important disk array organizations that have been proposed by others in the literature. These organizations are based on the combination of two options for the data layout, regular RAID-5 and block designs, and three alternatives for sparing: hot sparing, distributed sparing and parity sparing. Uncorrectable bit errors have big effects on reliability but are ignored in traditional reliability analysis of disk arrays. We consider both disk failures and uncorrectable bit errors in the model. The reliability of disk arrays is measured in terms of MTTDL (Mean Time To Data Loss). A unified formula of MTTDL has been derived for these disk array organizations. The MTTDLs of these disk array organizations are also compared using the analytic model.", "Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. 
Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.", "Disk arrays (RAID) have been proposed as a possible approach to solving the emerging I/O bottleneck problem. The performance of a RAID system when all disks are operational and the MTTF_sys (mean time to system failure) have been well studied. However, the performance of disk arrays in the presence of failed disks has not received much attention. The same techniques that provide the storage-efficient redundancy of a RAID system can also result in a significant performance hit when a single disk fails. This is of importance since single disk failures are expected to be relatively frequent in a system with a large number of disks. In this paper we propose a new variation of the RAID organization that has significant advantages in both reducing the magnitude of the performance degradation when there is a single failure and can also reduce the MTTF_sys. We also discuss several strategies that can be implemented to speed the rebuild of the failed disk and thus increase the MTTF_sys. The efficacy of these strategies is shown to require the improved properties of the new RAID organization. An analysis is carried out to quantify the tradeoffs.", "In today's computer systems, the disk I/O subsystem is often identified as the major bottleneck to system performance. One proposed solution is the so-called redundant array of inexpensive disks (RAID). We examine the performance of two of the most promising RAID architectures, the mirrored array and the rotated parity array. 
First, we propose several scheduling policies for the mirrored array and a new data layout, group-rotate declustering, and compare their performance with each other and in combination with other data layout schemes. We observe that a policy that routes reads to the disk with the smallest number of requests provides the best performance, especially when the load on the I/O system is high. Second, through a combination of simulation and analysis, we compare the performance of this mirrored array architecture to the rotated parity array architecture. This latter study shows that: 1) given the same storage capacity (approximately double the number of disks), the mirrored array considerably outperforms the rotated parity array; and 2) given the same number of disks, the mirrored array still outperforms the rotated parity array in most cases, even for applications where I/O requests are for large amounts of data. The only exception occurs when the I/O size is very large; most of the requests are writes, and most of these writes perform full stripe write operations.", "Fault tolerance requirements for near term disk array storage systems are analyzed. The excellent reliability provided by RAID Level five data organization is seen to be insufficient for these systems. The authors consider various alternatives - improved MTBF and MTTR times as well as smaller reliability groups and increased numbers of check disks per group - to obtain the necessary improved reliability. Two data organization schemes based on maximum distance separable error correcting codes are introduced. Several figures of merit are calculated using a standard Markov failure and repair model for these organizations. Based on these results, the multiple check disk approach to improved reliability is an excellent option." ] }
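Several of the cited analyses rest on the classic three-state Markov chain for a single-parity array (all good → one failed/rebuilding → data loss). The sketch below, our own illustration under the usual assumptions (independent exponential failures and repairs, no uncorrectable bit errors, which is exactly the gap the SSD-specific models address), gives the closed-form MTTDL and a cross-check via the chain's hitting-time equations.

```python
def mttdl_raid5(n_disks, mttf_hours, mttr_hours):
    """Mean time to data loss of an n_disks single-parity (RAID-5) group
    from the 3-state Markov chain: closed form
    ((2n - 1)*lam + mu) / (n*(n - 1)*lam**2)."""
    lam, mu = 1.0 / mttf_hours, 1.0 / mttr_hours
    n = n_disks
    return ((2 * n - 1) * lam + mu) / (n * (n - 1) * lam ** 2)

def mttdl_raid5_from_chain(n_disks, mttf_hours, mttr_hours):
    """Same quantity via the expected-hitting-time equations
        T0 = 1/(n*lam) + T1
        T1 = (1 + mu*T0) / ((n-1)*lam + mu)
    solved directly for T0, as a cross-check on the closed form."""
    lam, mu = 1.0 / mttf_hours, 1.0 / mttr_hours
    n = n_disks
    a = (n - 1) * lam + mu        # total exit rate of the degraded state
    return (1.0 / (n * lam) + 1.0 / a) / (1.0 - mu / a)
```

For 8 disks with a 10^6-hour MTTF and 24-hour MTTR, both routes give roughly 7.4e8 hours; the SSD papers' point is that time-varying bit error rates invalidate the constant-rate assumption behind this formula.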
1304.1978
2007800438
Geometric discrepancies are standard measures to quantify the irregularity of distributions. They are an important notion in numerical integration. One of the most important discrepancy notions is the so-called star discrepancy. Roughly speaking, a point set of low star discrepancy value allows for a small approximation error in quasi-Monte Carlo integration. It is thus the most studied discrepancy notion. In this work we present a new algorithm to compute point sets of low star discrepancy. The two components of the algorithm (for the optimization and the evaluation, respectively) are based on evolutionary principles. Our algorithm clearly outperforms existing approaches. To the best of our knowledge, it is also the first algorithm which can be adapted easily to optimize inverse star discrepancies.
Our algorithm builds on previous work presented in @cite_26 on the construction of low @math -discrepancy sequences (see @cite_1 for an earlier GECCO version) and in @cite_10 on the evaluation of the star discrepancy of a given point set. It has two components, an optimization component, which computes candidate point sets, and an evaluation component for assessing the quality of the proposed solutions. During the optimization process, the evaluation component is called several times. The two components of the algorithm will be described below. On the structure of the algorithms we note here only that the optimization part is based on a genetic algorithm, while the evaluation component is based on threshold accepting (TA)---a variant of simulated annealing with derandomized selection rules.
{ "cite_N": [ "@cite_10", "@cite_26", "@cite_1" ], "mid": [ "2123331303", "2100184918", "2165382421" ], "abstract": [ "We present a new algorithm for estimating the star discrepancy of arbitrary point sets. Similar to the algorithm for discrepancy approximation of Winker and Fang [SIAM J. Numer. Anal., 34 (1997), pp. 2028-2042] it is based on the optimization algorithm threshold accepting. Our improvements include, amongst others, a nonuniform sampling strategy, which is more suited for higher-dimensional inputs and additionally takes into account the topological characteristics of given point sets, and rounding steps which transform axis-parallel boxes, on which the discrepancy is to be tested, into critical test boxes. These critical test boxes provably yield higher discrepancy values and contain the box that exhibits the maximum value of the local discrepancy. We provide comprehensive experiments to test the new algorithm. Our randomized algorithm computes the exact discrepancy frequently in all cases where this can be checked (i.e., where the exact discrepancy of the point set can be computed in feasible time). Most importantly, in higher dimensions the new method behaves clearly better than all previously known methods.", "Low-discrepancy sequences provide a way to generate quasi-random numbers of high dimensionality with a very high level of uniformity. The nearly orthogonal Latin hypercube and the generalized Halton sequence are two popular methods when it comes to generate low-discrepancy sequences. In this article, we propose to use evolutionary algorithms in order to find optimized solutions to the combinatorial problem of configuring generators of these sequences. Experimental results show that the optimized sequence generators behave at least as well as generators from the literature for the Halton sequence and significantly better for the nearly orthogonal Latin hypercube.", "Many fields rely on some stochastic sampling of a given complex space. 
Low-discrepancy sequences are methods aiming at producing samples with better space-filling properties than uniformly distributed random numbers, hence allowing a more efficient sampling of that space. State-of-the-art methods like nearly orthogonal Latin hypercubes and scrambled Halton sequences are configured by permutations of internal parameters, where permutations are commonly done randomly. This paper proposes the use of evolutionary algorithms to evolve these permutations, in order to optimize a discrepancy measure. Results show that an evolutionary method is able to generate low-discrepancy sequences of significantly better space-filling properties compared to sequences configured with purely random permutations." ] }
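The generators optimized in these works build on classical constructions such as the Halton sequence. For orientation, here is a plain (unscrambled) Halton baseline; it is our own minimal sketch, not the optimized, permutation-configured generators the abstracts describe.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i >= 1 in the given base:
    mirror the base-`base` digits of i around the radix point."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def halton(n, dims, bases=(2, 3, 5, 7, 11, 13)):
    """First n points of the plain Halton sequence in [0,1)^dims, one
    coprime base per coordinate. The cited generators permute digits
    (scrambling) to improve on this baseline in higher dimensions."""
    return [[radical_inverse(i, bases[d]) for d in range(dims)]
            for i in range(1, n + 1)]
```

In base 2 the first coordinates are 0.5, 0.25, 0.75, 0.125, ...: each new point falls in the largest remaining gap, which is the intuition behind the low discrepancy of the sequence.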
1304.1978
2007800438
Geometric discrepancies are standard measures to quantify the irregularity of distributions. They are an important notion in numerical integration. One of the most important discrepancy notions is the so-called star discrepancy. Roughly speaking, a point set of low star discrepancy value allows for a small approximation error in quasi-Monte Carlo integration. It is thus the most studied discrepancy notion. In this work we present a new algorithm to compute point sets of low star discrepancy. The two components of the algorithm (for the optimization and the evaluation, respectively) are based on evolutionary principles. Our algorithm clearly outperforms existing approaches. To the best of our knowledge, it is also the first algorithm which can be adapted easily to optimize inverse star discrepancies.
Evaluating the star discrepancy of a given point set @math is known to be NP-hard @cite_3 . In fact, it is even W[1]-hard in @math @cite_12 , implying that, under standard complexity assumptions, there is no algorithm to evaluate the star discrepancy of @math points in @math dimensions in a running time @math . The best known exact algorithm for evaluating discrepancies, the DEM-algorithm, has a running time of @math @cite_27 . For most relevant settings this is too slow to be applicable. This is true in particular for our setting, where many candidate point sets need to be evaluated. In fact, the complexity of star discrepancy evaluation is the main reason why only a few algorithmic approaches are known for the explicit construction of low star discrepancy point sets, cf. also the comment in [page 3] DeRainville2012 .
{ "cite_N": [ "@cite_27", "@cite_12", "@cite_3" ], "mid": [ "2077220042", "1508410079", "2162820178" ], "abstract": [ "Patterns used for supersampling in graphics have been analyzed from statistical and signal-processing viewpoints. We present an analysis based on a type of isotropic discrepancy—how good patterns are at estimating the area in a region of defined type. We present algorithms for computing discrepancy relative to regions that are defined by rectangles, halfplanes, and higher-dimensional figures. Experimental evidence shows that popular supersampling patterns have discrepancies with better asymptotic behavior than random sampling, which is not inconsistent with theoretical bounds on discrepancy.", "Discrepancy measures how uniformly distributed a point set is with respect to a given set of ranges. Depending on the ranges, several variants arise, including star discrepancy, box discrepancy, and discrepancy of halfspaces. These problems are solvable in time n^{O(d)}, where d is the dimension of the underlying space. As such a dependency on d becomes intractable for high-dimensional data, we ask whether it can be moderated. We answer this question negatively by proving that the canonical decision problems are W[1]-hard with respect to the dimension, implying that no f(d)·n^{O(1)}-time algorithm is possible for any function f(d) unless FPT=W[1]. We also discover the W[1]-hardness of other well known problems, such as determining the largest empty box that contains the origin and is inside the unit cube. This is shown to be hard even to approximate within a factor of 2^n.", "The well-known star discrepancy is a common measure for the uniformity of point distributions. It is used, e.g., in multivariate integration, pseudo random number generation, experimental design, statistics, or computer graphics. We study here the complexity of calculating the star discrepancy of point sets in the d-dimensional unit cube and show that this is an NP-hard problem. 
To establish this complexity result, we first prove NP-hardness of the following related problems in computational geometry: Given n points in the d-dimensional unit cube, find a subinterval of minimum or maximum volume that contains k of the n points. Our results for the complexity of the subinterval problems settle a conjecture of E. Thiemard [E. Thiemard, Optimal volume subintervals with k points and star discrepancy via integer programming, Math. Meth. Oper. Res. 54 (2001) 21-45]." ] }
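The hardness results above concern general dimension d; in one dimension the star discrepancy has a simple closed form over the sorted points, which makes the quantity concrete. The sketch below is our own illustration of that d = 1 special case, not any of the cited algorithms.

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a point set in [0,1] for d = 1 (the
    general problem is NP-hard in the dimension). For sorted points
    x_(1) <= ... <= x_(n):
        D* = max_i max( i/n - x_(i), x_(i) - (i-1)/n ),
    i.e., the worst local discrepancy occurs at a "critical box" whose
    upper edge sits at one of the points."""
    xs = sorted(points)
    n = len(xs)
    return max(max(i / n - x, x - (i - 1) / n)
               for i, x in enumerate(xs, start=1))
```

The midpoint set x_(i) = (2i-1)/(2n) attains the 1-D optimum D* = 1/(2n), which is a handy sanity check for any estimator.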
1304.1978
2007800438
Geometric discrepancies are standard measures to quantify the irregularity of distributions. They are an important notion in numerical integration. One of the most important discrepancy notions is the so-called star discrepancy. Roughly speaking, a point set of low star discrepancy value allows for a small approximation error in quasi-Monte Carlo integration. It is thus the most studied discrepancy notion. In this work we present a new algorithm to compute point sets of low star discrepancy. The two components of the algorithm (for the optimization and the evaluation, respectively) are based on evolutionary principles. Our algorithm clearly outperforms existing approaches. To the best of our knowledge, it is also the first algorithm which can be adapted easily to optimize inverse star discrepancies.
A new robust algorithm to estimate star discrepancy values has been proposed in @cite_10 . This algorithm has been reported to give very accurate discrepancy estimates, and our experiments confirm these statements. We thus use this algorithm for the intermediate discrepancy evaluations; i.e., for the optimization process of creating good candidate point configurations. Where feasible, we do a final evaluation of the candidate sets using the exact DEM-algorithm described above.
{ "cite_N": [ "@cite_10" ], "mid": [ "2123331303" ], "abstract": [ "We present a new algorithm for estimating the star discrepancy of arbitrary point sets. Similar to the algorithm for discrepancy approximation of Winker and Fang [SIAM J. Numer. Anal., 34 (1997), pp. 2028-2042] it is based on the optimization algorithm threshold accepting. Our improvements include, amongst others, a nonuniform sampling strategy, which is more suited for higher-dimensional inputs and additionally takes into account the topological characteristics of given point sets, and rounding steps which transform axis-parallel boxes, on which the discrepancy is to be tested, into critical test boxes. These critical test boxes provably yield higher discrepancy values and contain the box that exhibits the maximum value of the local discrepancy. We provide comprehensive experiments to test the new algorithm. Our randomized algorithm computes the exact discrepancy frequently in all cases where this can be checked (i.e., where the exact discrepancy of the point set can be computed in feasible time). Most importantly, in higher dimensions the new method behaves clearly better than all previously known methods." ] }
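To make the threshold-accepting idea concrete, here is a minimal sketch in the spirit of the Winker/Fang-style search that the cited algorithm improves upon (it is not the improved algorithm itself; all parameter choices are our own assumptions): random-walk over anchored test boxes [0, y), accept moves that do not decrease the local discrepancy by more than a shrinking threshold, and report the best value found, which is a valid lower bound on the star discrepancy.

```python
import random

def ta_star_discrepancy_lb(points, iters=5000, seed=0):
    """Threshold-accepting search for a box [0, y) with large local
    discrepancy |vol(y) - fraction of points inside|. The returned best
    value is a lower bound on the star discrepancy of `points`."""
    rng = random.Random(seed)
    n, d = len(points), len(points[0])

    def local_disc(y):
        vol = 1.0
        for c in y:
            vol *= c
        inside = sum(all(p[k] < y[k] for k in range(d)) for p in points)
        return abs(vol - inside / n)

    y = [rng.random() for _ in range(d)]
    cur = best = local_disc(y)
    threshold = 0.05                  # how much worsening we tolerate
    for _ in range(iters):
        cand = [min(1.0, max(0.0, c + rng.uniform(-0.1, 0.1))) for c in y]
        val = local_disc(cand)
        if val >= cur - threshold:    # accept small deteriorations
            y, cur = cand, val
            best = max(best, val)
        threshold *= 0.999            # anneal the threshold toward 0
    return best
```

Because every test box yields a lower bound, the search can only under-report the true discrepancy; the cited work adds non-uniform sampling and rounding to critical boxes to close that gap.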
1304.1467
1623385256
We compute the singular values of an @math sparse matrix @math in a distributed setting, without communication dependence on @math , which is useful for very large @math . In particular, we give a simple nonadaptive sampling scheme where the singular values of @math are estimated within relative error with constant probability. Our proven bounds focus on the MapReduce framework, which has become the de facto tool for handling such large matrices that cannot be stored or even streamed through a single machine. On the way, we give a general method to compute @math . We preserve singular values of @math with @math relative error with shuffle size @math and reduce-key complexity @math . We further show that if only specific entries of @math are required and @math has nonnegative entries, then we can reduce the shuffle size to @math and reduce-key complexity to @math , where @math is the minimum cosine similarity for the entries being estimated. All of our bounds are independent of @math , the larger dimension. We provide open-source implementations in Spark and Scalding, along with experiments in an industrial setting.
@cite_8 introduced a sampling procedure where rows and columns of @math are picked with probabilities proportional to their squared lengths and used that to compute an approximation to @math . Later @cite_3 and @cite_6 improved the sampling procedure. To implement these approximations to @math on MapReduce one would need a shuffle size dependent on @math or overload a single machine. We improve this to be independent of @math both in shuffle size and reduce-key complexity.
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_8" ], "mid": [ "2885550688", "1970950689", "1979750072" ], "abstract": [ "", "Given a matrix A, it is often desirable to find a good approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and/or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and/or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix N to A, whose entries are independent random variables with zero-mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A + N is computed. We give high probability bounds on the quality of our approximation both in the Frobenius and the 2-norm.", "We consider the problem of approximating a given m × n matrix A by another matrix of specified rank k, which is smaller than m and n. The Singular Value Decomposition (SVD) can be used to find the \"best\" such approximation. However, it takes time polynomial in m, n which is prohibitive for some modern applications. In this article, we develop an algorithm that is qualitatively faster, provided we may sample the entries of the matrix in accordance with a natural probability distribution. In many applications, such sampling can be done efficiently. Our main result is a randomized algorithm to find the description of a matrix D* of rank at most k so that holds with probability at least 1 − Δ (where ‖·‖_F is the Frobenius norm). 
The algorithm takes time polynomial in k, 1/e, log(1/Δ) only and is independent of m and n. In particular, this implies that in constant time, it can be determined if a given matrix of arbitrary size has a good low-rank approximation." ] }
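The length-squared row-sampling idea behind these schemes is short enough to sketch. The code below is our own stdlib-only illustration (function names are ours): sample s rows with probability proportional to their squared norms and rescale each by 1/sqrt(s*p_i), so the sampled matrix S satisfies E[S^T S] = A^T A and its singular values approximate those of A.

```python
import math
import random

def length_squared_sample(A, s, seed=0):
    """Pick s rows of A (list of lists) with probability proportional to
    their squared Euclidean norms, rescaled so that S^T S is an unbiased
    estimator of A^T A. Sketch of the row-sampling idea only."""
    rng = random.Random(seed)
    sq_norms = [sum(x * x for x in row) for row in A]
    total = sum(sq_norms)
    probs = [q / total for q in sq_norms]
    idx = rng.choices(range(len(A)), weights=probs, k=s)
    # Rows with zero norm have probability 0 and are never selected.
    return [[x / math.sqrt(s * probs[i]) for x in A[i]] for i in idx]

def gram(M):
    """M^T M for a list-of-lists matrix (used to check the estimator)."""
    d = len(M[0])
    return [[sum(row[i] * row[j] for row in M) for j in range(d)]
            for i in range(d)]
```

S has only s rows regardless of how tall A is, which is what makes the approach attractive when the larger dimension dominates, as in the MapReduce setting of this paper.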
1304.1467
1623385256
We compute the singular values of an @math sparse matrix @math in a distributed setting, without communication dependence on @math , which is useful for very large @math . In particular, we give a simple nonadaptive sampling scheme where the singular values of @math are estimated within relative error with constant probability. Our proven bounds focus on the MapReduce framework, which has become the de facto tool for handling such large matrices that cannot be stored or even streamed through a single machine. On the way, we give a general method to compute @math . We preserve singular values of @math with @math relative error with shuffle size @math and reduce-key complexity @math . We further show that if only specific entries of @math are required and @math has nonnegative entries, then we can reduce the shuffle size to @math and reduce-key complexity to @math , where @math is the minimum cosine similarity for the entries being estimated. All of our bounds are independent of @math , the larger dimension. We provide open-source implementations in Spark and Scalding, along with experiments in an industrial setting.
Later, @cite_5 gave an adaptive sampling scheme that improves on the scheme of @cite_8 . However, because the scheme is adaptive, it requires substantial communication between the machines holding @math : in particular, a MapReduce implementation would still have a shuffle size dependent on @math and would require multiple iterations.
{ "cite_N": [ "@cite_5", "@cite_8" ], "mid": [ "2044610104", "1979750072" ], "abstract": [ "We prove that any real matrix A contains a subset of at most 4k e+ 2k log(k+1) rows whose span “contains” a matrix of rank at most k with error only (1+e) times the error of the best rank-k approximation of A. We complement it with an almost matching lower bound by constructing matrices where the span of any k 2e rows does not “contain” a relative (1+e)-approximation of rank k. Our existence result leads to an algorithm that finds such rank-k approximation in time @math i.e., essentially O(Mk e), where M is the number of nonzero entries of A. The algorithm maintains sparsity, and in the streaming model [12,14,15], it can be implemented using only 2(k+1)(log(k+1)+1) passes over the input matrix and @math additional space. Previous algorithms for low-rank approximation use only one or two passes but obtain an additive approximation.", "We consider the problem of approximating a given m × n matrix A by another matrix of specified rank k, which is smaller than m and n. The Singular Value Decomposition (SVD) can be used to find the \"best\" such approximation. However, it takes time polynomial in m, n which is prohibitive for some modern applications. In this article, we develop an algorithm that is qualitatively faster, provided we may sample the entries of the matrix in accordance with a natural probability distribution. In many applications, such sampling can be done efficiently. Our main result is a randomized algorithm to find the description of a matrix D* of rank at most k so that holds with probability at least 1 − Δ (where v·vF is the Frobenius norm). The algorithm takes time polynomial in k,1 e, log(1 Δ) only and is independent of m and n. In particular, this implies that in constant time, it can be determined if a given matrix of arbitrary size has a good low-rank approximation." ] }
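The nonadaptive, entry-wise sampling idea discussed above can be illustrated with a minimal sketch. This is not the cited papers' implementation; it only shows the basic mechanism (keep each entry of a matrix independently with probability p, rescale survivors by 1/p, and read off the singular values of the sparsified matrix), with all parameter values chosen for illustration:

```python
import numpy as np

def sample_entries(A, p, rng):
    """Keep each entry of A independently with probability p and rescale
    survivors by 1/p, so the sparsified matrix equals A in expectation."""
    mask = rng.random(A.shape) < p
    return np.where(mask, A / p, 0.0)

rng = np.random.default_rng(0)
n, k = 200, 3
# A rank-3 matrix: its "strong spectral features" dominate the sampling noise.
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

exact = np.linalg.svd(A, compute_uv=False)[:k]
approx = np.linalg.svd(sample_entries(A, 0.5, rng), compute_uv=False)[:k]
rel_err = np.abs(approx - exact) / exact
```

Because the sparsification acts as an additive zero-mean noise matrix with weak spectral features, the top singular values survive with small relative error, while only about half the entries need to be touched.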
1304.1467
1623385256
We compute the singular values of an @math sparse matrix @math in a distributed setting, without communication dependence on @math , which is useful for very large @math . In particular, we give a simple nonadaptive sampling scheme where the singular values of @math are estimated within relative error with constant probability. Our proven bounds focus on the MapReduce framework, which has become the de facto tool for handling such large matrices that cannot be stored or even streamed through a single machine. On the way, we give a general method to compute @math . We preserve singular values of @math with @math relative error with shuffle size @math and reduce-key complexity @math . We further show that if only specific entries of @math are required and @math has nonnegative entries, then we can reduce the shuffle size to @math and reduce-key complexity to @math , where @math is the minimum cosine similarity for the entries being estimated. All of our bounds are independent of @math , the larger dimension. We provide open-source implementations in Spark and Scalding, along with experiments in an industrial setting.
There has been some effort, in the streaming model, to reduce the number of passes required through the matrix @math while using little memory. The question of determining various linear-algebraic quantities in the streaming model was posed by @cite_1 , and was raised again by @cite_0 , who asked about the time and space required by an algorithm that does not use too many passes. The streaming model is appropriate when all the data can be streamed through a single machine; but when @math is very large, streaming @math through a single machine is not possible. Splitting the work of reading @math across many mappers is precisely the job of the MapReduce implementation, and one of its major advantages.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2045390367", "1965972569" ], "abstract": [ "Recently several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear ( 2) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows. --Independent of the recent results of Har-Peled and of Deshpande and Vempala, one of the first -- and to the best of our knowledge the most efficient -- relative error (1 + ) A - A_k _F approximation algorithms for the singular value decomposition of an m ? n matrix A with M non-zero entries that requires 2 passes over the data and runs in time O ( ( M( k + k k) + (n + m)( k + k k)^2 ) 1 ) --The first o(nd^ 2 ) time (1 + ) relative error approximation algorithm for n ? d linear ( ) regression. --A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool.", "1 Introduction 2 Map 3 The Data Stream Phenomenon 4 Data Streaming: Formal Aspects 5 Foundations: Basic Mathematical Ideas 6 Foundations: Basic Algorithmic Techniques 7 Foundations: Summary 8 Streaming Systems 9 New Directions 10 Historic Notes 11 Concluding Remarks Acknowledgements References." ] }
1304.1467
1623385256
We compute the singular values of an @math sparse matrix @math in a distributed setting, without communication dependence on @math , which is useful for very large @math . In particular, we give a simple nonadaptive sampling scheme where the singular values of @math are estimated within relative error with constant probability. Our proven bounds focus on the MapReduce framework, which has become the de facto tool for handling such large matrices that cannot be stored or even streamed through a single machine. On the way, we give a general method to compute @math . We preserve singular values of @math with @math relative error with shuffle size @math and reduce-key complexity @math . We further show that if only specific entries of @math are required and @math has nonnegative entries, then we can reduce the shuffle size to @math and reduce-key complexity to @math , where @math is the minimum cosine similarity for the entries being estimated. All of our bounds are independent of @math , the larger dimension. We provide open-source implementations in Spark and Scalding, along with experiments in an industrial setting.
In addition to computing entries of @math , our sampling scheme can be used to implement many similarity measures. In particular, it efficiently computes the Cosine, Dice, Overlap, and Jaccard similarity measures; details and experiments are given in @cite_2 @cite_4 , whereas this paper focuses on matrix computations and an open-source implementation.
{ "cite_N": [ "@cite_4", "@cite_2" ], "mid": [ "19684845", "2163972006" ], "abstract": [ "WTF ("Who to Follow") is Twitter's user recommendation service, which is responsible for creating millions of connections daily between users based on shared interests, common connections, and other related factors. This paper provides an architectural overview and shares lessons we learned in building and running the service over the past few years. Particularly noteworthy was our design decision to process the entire Twitter graph in memory on a single server, which significantly reduced architectural complexity and allowed us to develop and deploy the service in only a few months. At the core of our architecture is Cassovary, an open-source in-memory graph processing engine we built from scratch for WTF. Besides powering Twitter's user recommendations, Cassovary is also used for search, discovery, promoted products, and other services as well. We describe and evaluate a few graph recommendation algorithms implemented in Cassovary, including a novel approach based on a combination of random walks and SALSA. Looking into the future, we revisit the design of our architecture and comment on its limitations, which are presently being addressed in a second-generation system under development.", "We present a suite of algorithms for Dimension Independent Similarity Computation (DISCO) to compute all pairwise similarities between very high-dimensional sparse vectors. All of our results are provably independent of dimension, meaning that apart from the initial cost of trivially reading in the data, all subsequent operations are independent of the dimension; thus the dimension can be very large. We study Cosine, Dice, Overlap, and the Jaccard similarity measures. For Jaccard similarity we include an improved version of MinHash. Our results are geared toward the MapReduce framework. We empirically validate our theorems with large scale experiments using data from the social networking site Twitter. At time of writing, our algorithms are live in production at twitter.com." ] }
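For concreteness, the four similarity measures named above can be written in their set (binary-vector) form. This is a plain single-machine reference sketch, unrelated to the distributed DISCO implementation described in the cited work:

```python
def cosine(a, b):
    """Cosine similarity of two sets viewed as binary vectors."""
    return len(a & b) / (len(a) * len(b)) ** 0.5

def dice(a, b):
    """Dice coefficient: twice the intersection over the summed sizes."""
    return 2 * len(a & b) / (len(a) + len(b))

def overlap(a, b):
    """Overlap coefficient: intersection over the smaller set."""
    return len(a & b) / min(len(a), len(b))

def jaccard(a, b):
    """Jaccard similarity: intersection over union."""
    return len(a & b) / len(a | b)

a, b = {1, 2, 3, 4}, {3, 4, 5}   # jaccard(a, b) == 0.4
```

All four are monotone functions of the intersection size, which is why a single sampling scheme that estimates co-occurrence counts can serve all of them.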
1304.0863
2083668462
Shadowing is believed to degrade the quality of service (QoS) in wireless cellular networks. Assuming log-normal shadowing, and studying the mobile's path-loss with respect to the serving base station (BS) and the corresponding interference factor (the ratio of the sum of the path-gains from interfering BS's to the path-gain from the serving BS), which are two key ingredients of the analysis and design of cellular networks, we discovered a more subtle reality. We observe, as commonly expected, that a strong variance of the shadowing increases the mean path-loss with respect to the serving BS, which, in consequence, may compromise QoS. However, in some cases, an increase of the variance of the shadowing can significantly reduce the mean interference factor and, in consequence, improve some QoS metrics in interference-limited systems, provided the handover policy selects the BS with the smallest path-loss as the serving one. We exemplify this phenomenon, similar to stochastic resonance and related to the "single big jump principle" of the heavy-tailed log-normal distribution, by studying the blocking probability in regular, hexagonal networks in a semi-analytic manner, using a spatial version of Erlang's loss formula combined with the Kaufman-Roberts algorithm. A more detailed probabilistic analysis explains that increasing the variance of the log-normal shadowing amplifies the ratio between the strongest signal and all other signals, thus reducing the interference. The above observations might shed new light, in particular, on the design of indoor communication scenarios.
The impact of the shadowing on the distribution of the interference factor is studied numerically in @cite_21 and analytically in @cite_7 . However, these two articles do not take into account the modification of the network geometry induced by the shadowing, i.e., they assume that mobiles are served by their geographically closest BS. This is not a realistic assumption and, as we will show in this paper, it leads to the misleading conclusion that shadowing dramatically increases the mean interference factor.
{ "cite_N": [ "@cite_21", "@cite_7" ], "mid": [ "1989232709", "2125190448" ], "abstract": [ "Interference performance is among the most important issues especially in WCDMA cellular networks planning coverage and capacity. F-factor has been introduced in previous works to model the interference in WCDMA downlink dimensioning process. In this paper, we establish its PDF expression assuming 1 interferer in the server both with and without correlated shadowing effect. Uniform and non-uniform traffic situations are distinguished through the study of both uniform and non-uniform traffic load cases. Then, we generalize the expression for multiple interferers.", "This paper proposes an analytical study of the shadowing impact on the outage probability in cellular radio networks. We establish that the downlink other-cell interference factor, f, which is defined here as the ratio of outer cell received power to the inner cell received power, plays a fundamental role in the outage probability. From f, we are able to derive the outage probability of a mobile station (MS) initiating a new call. Taking into account the shadowing, f is expressed as a lognormal random variable. Analytical expressions of the interference factor's mean mf and standard deviation sf are provided in this paper. These expressions depend on the topology of the network characterized by a G factor. We show that shadowing increases the outage probability, and using our analytical method, we are able to quantify this impact. However, we establish that the network topology, or correlated received powers, may limit this increase." ] }
1304.0863
2083668462
Shadowing is believed to degrade the quality of service (QoS) in wireless cellular networks. Assuming log-normal shadowing, and studying the mobile's path-loss with respect to the serving base station (BS) and the corresponding interference factor (the ratio of the sum of the path-gains from interfering BS's to the path-gain from the serving BS), which are two key ingredients of the analysis and design of cellular networks, we discovered a more subtle reality. We observe, as commonly expected, that a strong variance of the shadowing increases the mean path-loss with respect to the serving BS, which, in consequence, may compromise QoS. However, in some cases, an increase of the variance of the shadowing can significantly reduce the mean interference factor and, in consequence, improve some QoS metrics in interference-limited systems, provided the handover policy selects the BS with the smallest path-loss as the serving one. We exemplify this phenomenon, similar to stochastic resonance and related to the "single big jump principle" of the heavy-tailed log-normal distribution, by studying the blocking probability in regular, hexagonal networks in a semi-analytic manner, using a spatial version of Erlang's loss formula combined with the Kaufman-Roberts algorithm. A more detailed probabilistic analysis explains that increasing the variance of the log-normal shadowing amplifies the ratio between the strongest signal and all other signals, thus reducing the interference. The above observations might shed new light, in particular, on the design of indoor communication scenarios.
The paper @cite_10 focuses on the interference factor averaged over a given cell, and in particular on the effect of shadowing on this average. It is shown there that the cell-shape modification induced by the shadowing significantly affects the mean interference factor; more precisely, this mean decreases substantially if mobiles are served by the BS offering the smallest path-loss. We adopt this assumption throughout the present paper, in the context of both regular (hexagonal) and irregular (Poisson) BS geometries, as proposed in @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_10" ], "mid": [ "2079368560", "2168951033" ], "abstract": [ "This paper proposes scalable admission and congestion control schemes that allow each base station to decide independently of the others what set of voice users to serve and or what bit rates to offer to elastic traffic users competing for bandwidth. These algorithms are primarily meant for large CDMA networks with a random but homogeneous user distribution. They take into account in an exact way the influence of geometry on the combination of inter-cell and intra-cell interferences as well as the existence of maximal power constraints of the base stations and users. We also study the load allowed by these schemes when the size of the network tends to infinity and the mean bit rate offered to elastic traffic users. By load, we mean here the number of voice users that each base station can serve.", "An improved series of bounds is presented for the other-cell interference in cellular power-controlled CDMA. The bounds are based on allowing control by one of a limited set of base stations. In particular, it is shown that the choice of cellular base station with least interference among the set of N sub c >1 nearest base stations yields much lower total mean interference from the mobile subscribers than the choice of only the single nearest base station. >" ] }
1304.0863
2083668462
Shadowing is believed to degrade the quality of service (QoS) in wireless cellular networks. Assuming log-normal shadowing, and studying the mobile's path-loss with respect to the serving base station (BS) and the corresponding interference factor (the ratio of the sum of the path-gains from interfering BS's to the path-gain from the serving BS), which are two key ingredients of the analysis and design of cellular networks, we discovered a more subtle reality. We observe, as commonly expected, that a strong variance of the shadowing increases the mean path-loss with respect to the serving BS, which, in consequence, may compromise QoS. However, in some cases, an increase of the variance of the shadowing can significantly reduce the mean interference factor and, in consequence, improve some QoS metrics in interference-limited systems, provided the handover policy selects the BS with the smallest path-loss as the serving one. We exemplify this phenomenon, similar to stochastic resonance and related to the "single big jump principle" of the heavy-tailed log-normal distribution, by studying the blocking probability in regular, hexagonal networks in a semi-analytic manner, using a spatial version of Erlang's loss formula combined with the Kaufman-Roberts algorithm. A more detailed probabilistic analysis explains that increasing the variance of the log-normal shadowing amplifies the ratio between the strongest signal and all other signals, thus reducing the interference. The above observations might shed new light, in particular, on the design of indoor communication scenarios.
Some papers (see e.g. @cite_4 @cite_6 ) propose more explicit approximations of the interference factor and its moments (mean and variance) assuming only deterministic propagation loss models (without random shadowing). @cite_17 studies the distribution of the interference factor in such a case.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_17" ], "mid": [ "2128811167", "2100828585", "2128733450" ], "abstract": [ "We establish a closed form formula of the other-cell interference factor f for omni-directional and sectored cellular networks, as a function of the location of the mobile. That formula is based on a fluid model of cellular networks: The key idea is to consider the discrete base stations (BS) entities as a continuum of transmitters which are spatially distributed in the network. Simulations show that the obtained closed-form formula is a very good approximation, even for the traditional hexagonal network. From f, we are able to derive the global outage probability and the spatial outage probability, which depends on the location of a mobile station (MS) initiating a new call. Although initially focused on CDMA (UMTS, HSDPA) and OFDMA (WiMax) networks, we show this approach is applicable to any kind of wireless system such as TDMA (GSM) or even ad-hoc ones.", "The f-factor, which is roughly the interference to signal power ratio, plays a crucial role in the performance evaluation of wireless cellular networks. The objective of the present paper is to study the properties of the f-factor and establish approximations for it which we compare to previously proposed approximations. We consider the hexagonal network model, where the base stations are placed on a regular hexagonal grid which may be infinite. The propagation loss is assumed to be a power of the distance between the transmitter and the receiver. In this context, we build a reference method to calculate the f-factor to which previously proposed approximations as well as a new one are compared. It is shown that the previous approximations are not always close to the reference. One should choose the approximation carefully since the performance of cellular networks depend strongly on the f-factor. The results in our paper help to make the appropriate choice. This is particularly important for operational needs as for example dimensioning a real network.", "We present an analytical model for computing the other-cell interference distribution in a third generation UMTS network with inhomogeneous user distribution. Our proposed model is based on an iterative calculation of a fixed-point equation which describes the interdependence of the interference levels at neighboring base stations. Furthermore, we develop an efficient algorithm based on lognormal approximations to compute the mean and standard deviation of the other-cell interference. We show that our model is accurate and fast enough to be used efficiently in the planning process of large UMTS networks." ] }
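As a concrete reading of the quantity these papers approximate: under a purely deterministic power-law path loss with nearest-BS service, the interference factor at a location is just a ratio of distance powers. The following toy sketch (base-station positions and path-loss exponent are illustrative, not taken from the cited papers) computes it directly:

```python
import numpy as np

def interference_factor(mobile, bs_xy, gamma=3.5):
    """f = (sum of interfering path-gains) / (serving path-gain), with
    deterministic path loss d**-gamma and the nearest BS (largest gain)
    acting as the server."""
    d = np.linalg.norm(bs_xy - np.asarray(mobile), axis=1)
    gains = d ** -gamma
    serving = gains.argmax()          # nearest BS has the largest gain
    return (gains.sum() - gains[serving]) / gains[serving]

# Two base stations; the mobile at the midpoint sits on the cell edge.
bs = np.array([[0.0, 0.0], [2.0, 0.0]])
```

At the cell edge (1, 0) both gains are equal, so f = 1; moving the mobile toward its serving BS makes f drop rapidly, which is the geometric dependence the cited approximations try to capture in closed form.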
1304.0863
2083668462
Shadowing is believed to degrade the quality of service (QoS) in wireless cellular networks. Assuming log-normal shadowing, and studying the mobile's path-loss with respect to the serving base station (BS) and the corresponding interference factor (the ratio of the sum of the path-gains from interfering BS's to the path-gain from the serving BS), which are two key ingredients of the analysis and design of cellular networks, we discovered a more subtle reality. We observe, as commonly expected, that a strong variance of the shadowing increases the mean path-loss with respect to the serving BS, which, in consequence, may compromise QoS. However, in some cases, an increase of the variance of the shadowing can significantly reduce the mean interference factor and, in consequence, improve some QoS metrics in interference-limited systems, provided the handover policy selects the BS with the smallest path-loss as the serving one. We exemplify this phenomenon, similar to stochastic resonance and related to the "single big jump principle" of the heavy-tailed log-normal distribution, by studying the blocking probability in regular, hexagonal networks in a semi-analytic manner, using a spatial version of Erlang's loss formula combined with the Kaufman-Roberts algorithm. A more detailed probabilistic analysis explains that increasing the variance of the log-normal shadowing amplifies the ratio between the strongest signal and all other signals, thus reducing the interference. The above observations might shed new light, in particular, on the design of indoor communication scenarios.
In @cite_2 the authors partially confirm, by a different approach, our earlier observation from @cite_15 that the average SIR (which is the inverse of the interference factor) may increase with the shadowing variance when the best-server policy is chosen.
{ "cite_N": [ "@cite_15", "@cite_2" ], "mid": [ "2018182850", "1988857404" ], "abstract": [ "The interference factor, defined for a given location in the network as the ratio of the sum of the path-gains from interfering base-stations (BS) to the path-gain from the serving BS is an important ingredient in the analysis of wireless cellular networks. It depends on the geometric placement of the BS in the network and the propagation gains between these stations and the given location. In this paper we study the mean interference factor taking into account the impact of these two elements. Regarding the geometry, we consider both the perfect hexagonal grid of BS and completely random Poisson pattern of BS. Regarding the signal propagation model, we consider not only a deterministic, signal-power-loss function that depends only on the distance between a transmitter and a receiver, and is mainly characterized by the so called path-loss exponent, but also random shadowing that characterizes in a statistical manner the way various obstacles on a given path modify this deterministic function. We present a detailed analysis of the impact of the path loss exponent, variance of the shadowing and the size of the network on the mean interference factor in the case of hexagonal and Poisson network architectures. We observe, as commonly expected, that small and moderate shadowing has a negative impact on regular networks as it increases the mean interference factor. However, as pointed out in the seminal paper [16], this impact can be largely reduced if the serving BS is chosen as the one which offers the smallest path-loss. Revisiting the model studied in this latter paper, we obtain a perhaps more surprising result saying that in large irregular (Poisson) networks the shadowing does not impact at all the interference factor, whose mean can be evaluated explicitly in a simple expression depending only on the path-loss exponent. Moreover, in small and moderate size networks, a very strong variability of the shadowing can be even beneficial in both hexagonal and Poisson networks.", "The evaluation of the Signal to Interference Ratio (SIR) in cellular networks is of primary importance for network dimensioning. For static studies, which evaluate cell capacity and coverage, as well as for dynamic studies, which consider arrivals and departures of mobile stations (MS), the SIR is always an important input. Contrary to most of the analytical works evaluating SIR, we assume in this paper that the MS is attached to the best server, i.e., to the base station (BS) from which it receives the highest power. This is a policy that is more realistic than the classical assumption that considers MSs to be attached to the nearest BS. The exact formulation of the SIR is however in this case uneasy to handle and numerical methods remain heavy. In this paper, we thus propose an approximate analytical study based on truncated lognormal distributions that provides very close results to Monte Carlo simulations." ] }
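The best-server effect discussed above is easy to see in a Monte Carlo sketch (all parameter values and the seven-BS layout are illustrative, not taken from the cited papers): for every shadowing draw, attaching to the BS with the largest path-gain can only lower the interference factor compared with attaching to the geographically nearest BS.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, sigma_db = 3.5, 10.0        # path-loss exponent, shadowing std (dB)
# One central BS plus a hexagonal ring of six neighbours (unit spacing).
bs = np.array([[0, 0], [1, 0], [0.5, 0.866], [-0.5, 0.866],
               [-1, 0], [-0.5, -0.866], [0.5, -0.866]], float)
mobile = np.array([0.3, 0.1])
d = np.linalg.norm(bs - mobile, axis=1)

runs = 10_000
# Log-normal shadowing: 10^(S/10) with S ~ N(0, sigma_db^2), per BS, per run.
shadow = 10 ** (sigma_db * rng.standard_normal((runs, len(bs))) / 10)
gains = shadow * d ** -gamma       # random path-gain to each BS, per run
total = gains.sum(axis=1)

g_near = gains[:, d.argmin()]      # nearest-BS policy
g_best = gains.max(axis=1)         # smallest-path-loss (best-server) policy
f_near = (total - g_near) / g_near
f_best = (total - g_best) / g_best
```

Since f = total/serving − 1 is decreasing in the serving gain, f_best ≤ f_near holds sample by sample; averaging over runs then shows the mean interference factor under the best-server policy sitting below the nearest-BS value, which is the qualitative point made in the text.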
1304.0863
2083668462
Shadowing is believed to degrade the quality of service (QoS) in wireless cellular networks. Assuming log-normal shadowing, and studying the mobile's path-loss with respect to the serving base station (BS) and the corresponding interference factor (the ratio of the sum of the path-gains from interfering BS's to the path-gain from the serving BS), which are two key ingredients of the analysis and design of cellular networks, we discovered a more subtle reality. We observe, as commonly expected, that a strong variance of the shadowing increases the mean path-loss with respect to the serving BS, which, in consequence, may compromise QoS. However, in some cases, an increase of the variance of the shadowing can significantly reduce the mean interference factor and, in consequence, improve some QoS metrics in interference-limited systems, provided the handover policy selects the BS with the smallest path-loss as the serving one. We exemplify this phenomenon, similar to stochastic resonance and related to the "single big jump principle" of the heavy-tailed log-normal distribution, by studying the blocking probability in regular, hexagonal networks in a semi-analytic manner, using a spatial version of Erlang's loss formula combined with the Kaufman-Roberts algorithm. A more detailed probabilistic analysis explains that increasing the variance of the log-normal shadowing amplifies the ratio between the strongest signal and all other signals, thus reducing the interference. The above observations might shed new light, in particular, on the design of indoor communication scenarios.
The interference factor was recognized very early as a key element in the performance evaluation of cellular networks; cf. @cite_12 @cite_1 . Fundamental to our approach to the evaluation of the blocking probability are the papers @cite_25 @cite_19 . They show how the power allocation problem without power limitations can be reduced to an algebraic system of linear inequalities. Moreover, they recognize that the spectral radius of the (non-negative) matrix corresponding to this system being not greater than 1 is a necessary and sufficient condition for the feasibility of power allocation without power limitations. This approach led to the development of a comprehensive framework for evaluating the blocking probability in CDMA, HSDPA and OFDMA networks, via a spatial version of the famous Erlang formula, in @cite_5 @cite_14 @cite_23 @cite_24 . QoS in data networks is studied using this approach in @cite_0 .
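The Kaufman-Roberts recursion underlying the blocking-probability evaluations cited above can be sketched in its generic single-link form (this is the textbook multirate loss recursion, not the spatial extension developed in these papers):

```python
def kaufman_roberts(capacity, loads, sizes):
    """Per-class blocking probabilities on a link of integer `capacity`,
    shared by traffic classes with offered loads `loads` (in Erlangs)
    and integer resource requirements `sizes`."""
    q = [0.0] * (capacity + 1)     # unnormalized occupancy distribution
    q[0] = 1.0
    for n in range(1, capacity + 1):
        q[n] = sum(a * b * q[n - b]
                   for a, b in zip(loads, sizes) if b <= n) / n
    norm = sum(q)
    # Class k is blocked whenever fewer than sizes[k] units are free.
    return [sum(q[capacity - b + 1:]) / norm for b in sizes]
```

With a single class of size 1 the recursion reduces to the classical Erlang B formula; e.g. `kaufman_roberts(2, [1.0], [1])` returns a blocking probability of 0.2, matching Erlang B with 2 servers and 1 Erlang of load.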
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_24", "@cite_19", "@cite_0", "@cite_23", "@cite_5", "@cite_25", "@cite_12" ], "mid": [ "2079368560", "1533766465", "2119200834", "2087332622", "2152816617", "2139386764", "2146288639", "2102157096", "2119996374" ], "abstract": [ "This paper proposes scalable admission and congestion control schemes that allow each base station to decide independently of the others what set of voice users to serve and or what bit rates to offer to elastic traffic users competing for bandwidth. These algorithms are primarily meant for large CDMA networks with a random but homogeneous user distribution. They take into account in an exact way the influence of geometry on the combination of inter-cell and intra-cell interferences as well as the existence of maximal power constraints of the base stations and users. We also study the load allowed by these schemes when the size of the network tends to infinity and the mean bit rate offered to elastic traffic users. By load, we mean here the number of voice users that each base station can serve.", "From the Publisher: Strengthen your knowledge of Code Division Multiple Access (CDMA) technology, and build a solid understanding of the technical details and engineering design principles behind the robust new IS-95 digital cellular system standard with this comprehensive reference tool. Based upon the authors' high-impact international training seminars on CDMA system engineering technology, this book helps practicing cellular engineers better understand the technical elements associated with the CDMA system, and how they are applied to the IS-95 standard. Practicing engineers who work with CDMA cellular, PCS, and WLL systems should have this guide close at hand, but even if you're engineering background isn't CDMA-specific, you'll appreciate the book's easy-to-follow, tutorial approach to the technology. 
Packed with nearly 2,000 equations, and supported by clearly presented, easy-to-understand explanations and examples, it not only shows you how to apply real-world CDMA system design techniques such as cell planning and optimization — it helps you understand the underlying reasons behind particular CDMA system design selections. Specifically, you learn... ? Proven-under-fire techniques that help you assess system modulation and convolutional code performance, and optimize cellular system and Erlang capacity ? How to design PN code generators for spread-spectrum applications, and how to use masks to control PN code phase ? How you can use RAKE diversity combining techniques to combat fading ? How to control CDMA forward link power allocation to help maximize system capacity An essential reference tool for CDMA system design engineers,service providers consultants, and technical managers, the book also equips R&D professionals with the knowledge they need to develop tools to enhance new or existing CDMA cellular, PCS, and WLL systems.", "In this paper we propose the following approach to the dimensioning of the radio part of the downlink in OFDMA networks. First, we use information theory to characterize the bit-rate in the channel from a base station to its mobile. It depends on the power and bandwidth allocated to this mobile. Then, we describe the resource (power and bandwidth) allocation problem and characterise feasible configurations of bit-rates of all users. As the key element, we propose some particular sufficient condition (in a multi-Erlang form) for a given configuration of bit-rates to be feasible. Finally, we consider an Erlang's loss model, in which streaming arrivals whose admission would lead to the violation of this sufficient condition are blocked and lost. In this model, the blocking probabilities can be calculated using Kaufman-Roberts algorithm. 
We propose it to evaluate the minimal density of base stations assuring acceptable blocking probabilities for a streaming traffic of a given load per surface unit. We validate this approach by comparison of the blocking probabilities to these simulated in the similar model in which the admission control is based on the original feasibility property (instead of its sufficient condition). Our sufficient bit-rate feasibility condition can also be used to dimension the network with respect to the elastic traffic.", "Distributed power control algorithms that use only the carrier-to-interference ratios (C I ratios) in those links actually in use are investigated. An algorithm that successfully approximates the behavior of the best known algorithms is proposed. The algorithm involves a novel distributed C I-balancing scheme. Numerical results show that capacity gains on the order of 3-4 times can be reached also with these distributed schemes. Further, the effects of imperfect C I estimates due to noise vehicle mobility, and fast multipath fading are considered. Results show that the balancing procedure is very robust to measurement noise, in particular if C I requirements are low or moderate. However, for required high C I levels or for a rapidly changing path loss matrix, convergence may be too slow to achieve substantial capacity improvements. >", "This paper deals with the performance evaluation of some congestion control schemes for elastic traffic in wireless cellular networks with power allocation control. These schemes allow us to identify the feasible configurations of instantaneous up-and downlink bit-rates of users; i.e., such that can be obtained by allocating respective powers, taking into account in an exact way the interference created in the whole, multicellular network. 
We consider the bit-rate configurations identified by these schemes as feasible sets for some classical, maximal fair resource allocation policies, and study their performance in the long-term evolution of the system. Specifically, we assume Markovian arrivals, departures and mobility of customers, which transmit some given data-volumes, as well as some temporal channel variability (fading), and study the mean number of users, the mean throughput i.e., the mean bit-rates, and the mean delay that these policies offer in different parts of a given cell. Explicit formulas are obtained in the case of proportional fair policies, which may or may-not take advantage of the fading, for null or infinitely rapid customer mobility. This approach applies also to a channel shared by the elastic traffic and a streaming, with predefined customer bit-rates, regulated by the respective admission policy.", "This paper builds upon the scalable admission control schemes for CDMA networks developed in F. (2003, December 2004). These schemes are based on an exact representation of the geometry of both the downlink and the uplink channels and ensure that the associated power allocation problems have solutions under constraints on the maximal power of each station user. These schemes are decentralized in that they can be implemented in such a way that each base station only has to consider the load brought by its own users to decide on admission. By load we mean here some function of the configuration of the users and of their bit rates that is described in the paper. When implemented in each base station, such schemes ensure the global feasibility of the power allocation even in a very large (infinite number of cells) network. The estimation of the capacity of large CDMA networks controlled by such schemes was made in these references. 
In certain cases, for example for a Poisson pattern of mobiles in an hexagonal network of base stations, this approach gives explicit formulas for the infeasibility probability, defined as the fraction of cells where the population of users cannot be entirely admitted by the base station. In the present paper we show that the notion of infeasibility probability is closely related to the notion of blocking probability, defined as the fraction of users that are rejected by the admission control policy in the long run, a notion of central practical importance within this setting. The relation between these two notions is not bound to our particular admission control schemes, but is of more general nature, and in a simplified scenario it can be identified with the well-known Erlang loss formula. We prove this relation using a general spatial birth-and-death process, where customer locations are represented by a spatial point process that evolves over time as users arrive or depart. This allows our model to include the exact representation of the geometry of inter-cell and intra-cell interferences, which play an essential role in the load indicators used in these cellular network admission control schemes.", "This paper is focused on the influence of geometry on the combination of intercell and intracell interferences in the downlink of large CDMA networks. We use an exact representation of the geometry of the downlink channels to define scalable admission and congestion control schemes, namely schemes that allow each base station to decide independently of the others what set of voice users to serve and or what bit rates to offer to elastic traffic users competing for bandwidth. We then study the load of these schemes when the size of the network tends to infinity using stochastic geometry tools. 
By load, we mean here the distribution of the number of voice users that each base station can serve and that of the bit rate offered to each elastic traffic user.", "Most cellular radio systems provide for the use of transmitter power control to reduce cochannel interference for a given channel allocation. Efficient interference management aims at achieving acceptable carrier-to-interference ratios in all active communication links in the system. Such schemes for the control of cochannel interference are investigated. The effect of adjacent channel interference is neglected. As a performance measure, the interference (outage) probability is used, i.e., the probability that a randomly chosen link is subject to excessive interference. In order to derive upper performance bounds for transmitter power control schemes, algorithms that are optimum in the sense that the interference probability is minimized are suggested. Numerical results indicate that these upper bounds exceed the performance of conventional systems by an order of magnitude regarding interference suppression and by a factor of 3 to 4 regarding the system capacity. The structure of the optimum algorithm shows that efficient power control and dynamic channel assignment algorithms are closely related. >", "This work presents an approach to the evaluation of the reverse link capacity of a code-division multiple access (CDMA) cellular voice system which employs power control and a variable rate vocoder based on voice activity. It is shown that the Erlang capacity of CDMA is many times that of conventional analog systems and several times that of other digital multiple access systems. >" ] }
1304.0863
2083668462
Shadowing is believed to degrade the quality of service (QoS) in wireless cellular networks. Assuming log-normal shadowing, and studying mobile's path-loss with respect to the serving base station (BS) and the corresponding interference factor (the ratio of the sum of the path-gains from interfering BS's to the path-gain from the serving BS), which are two key ingredients of the analysis and design of the cellular networks, we discovered a more subtle reality. We observe, as commonly expected, that a strong variance of the shadowing increases the mean path-loss with respect to the serving BS, which in consequence, may compromise QoS. However, in some cases, an increase of the variance of the shadowing can significantly reduce the mean interference factor and, in consequence, improve some QoS metrics in interference limited systems, provided the handover policy selects the BS with the smallest path loss as the serving one. We exemplify this phenomenon, similar to stochastic resonance and related to the "single big jump principle" of the heavy-tailed log-normal distribution, studying the blocking probability in regular, hexagonal networks in a semi-analytic manner, using a spatial version of the Erlang's loss formula combined with Kaufman-Roberts algorithm. More detailed probabilistic analysis explains that increasing variance of the log-normal shadowing amplifies the ratio between the strongest signal and all other signals thus reducing the interference. The above observations might shed new light, in particular on the design of indoor communication scenarios.
Finally, recalling that the mean QoS pre-metrics studied in this paper do not depend on the spatial correlation of the shadowing, we point to @cite_9 @cite_16 as providing models that can be used when studying the spatial distribution of the QoS metrics.
{ "cite_N": [ "@cite_9", "@cite_16" ], "mid": [ "2140452629", "2126152305" ], "abstract": [ "A simple autocorrelation model for shadow fading in mobile radio channels is proposed. The model is fitted to both large cells and microcells. Results show that the model fit is good for large to m ...", "This paper considers the influence of the cross-correlated shadowing between base-stations on network level simulations. A method to generate correlated shadow fading processes is proposed, which makes it possible to study position dependent correlation models between multiple base stations (BS). The method has been used for static network level simulations with different synthetic correlation models, both angle-of-arrival- and distance-dependent, in order to study the C I outage gain for different frequency reuse patterns and BS sectorizations." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
While the idea of programming heterogeneous systems has become prevalent over the last few years as GPUs became popular, earlier work had been done on this topic. The Jade project @cite_15 @cite_1 addressed the problem of heterogeneous parallel programming through C language extensions starting in 1992.
{ "cite_N": [ "@cite_15", "@cite_1" ], "mid": [ "2170108472", "2028267160" ], "abstract": [ "The authors present Jade, a high-level parallel programming language for managing course-grain concurrency. Jade simplifies programming by providing the programmer with the abstractions of sequential execution and a shared address space. Jade programmers augment sequential, imperative programs with constructs that declare how parts of the program access data; the Jade implementation dynamically interprets this information to execute the program in parallel. This parallel execution preserves the serial semantics of the original program. Jade has been implemented as an extension to C on shared-memory multiprocessors, a message-passing machine, networks of heterogeneous workstations, and systems with special-purpose functional units. Programs written in Jade run on all of these platforms without modification. >", "Jade is a portable, implicitly parallel language designed for exploiting task-level concurrency.Jade programmers start with a program written in a standard serial, imperative language, then use Jade constructs to declare how parts of the program access data. The Jade implementation uses this data access information to automatically extract the concurrency and map the application onto the machine at hand. The resulting parallel execution preserves the semantics of the original serial program. We have implemented Jade as an extension to C, and Jade implementations exist for s hared-memory multiprocessors, homogeneous message-passing machines, and heterogeneous networks of workstations. In this atricle we discuss the design goals and decisions that determined the final form of Jade and present an overview of the Jade implementation. We also present our experience using Jade to implement several complete scientific and engineering applications. 
We use this experience to evaluate how the different Jade language features were used in practice and how well Jade as a whole supports the process of developing parallel applications. We find that the basic idea of preserving the serial semantics simplifies the program development process, and that the concept of using data access specifications to guide the parallelization offers significant advantages over more traditional control-based approaches. We also find that the Jade data model can interact poorly with concurrency patterns that write disjoint pieces of a single aggregate data structure, although this problem arises in only one of the applications." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
Unlike our C extensions, Jade's extensions include new keywords and new syntax, which improves expressiveness at the expense of making Jade programs not compilable by standard C compilers. Jade's run-time support then distributes tasks across machines, and takes care of any necessary data transfers @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "2170108472" ], "abstract": [ "The authors present Jade, a high-level parallel programming language for managing course-grain concurrency. Jade simplifies programming by providing the programmer with the abstractions of sequential execution and a shared address space. Jade programmers augment sequential, imperative programs with constructs that declare how parts of the program access data; the Jade implementation dynamically interprets this information to execute the program in parallel. This parallel execution preserves the serial semantics of the original program. Jade has been implemented as an extension to C on shared-memory multiprocessors, a message-passing machine, networks of heterogeneous workstations, and systems with special-purpose functional units. Programs written in Jade run on all of these platforms without modification. >" ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
HMPP provides allocate, release, and other directives for explicit data transfers between main memory and the GPUs. Unfortunately, this removes flexibility from its run-time support, and hinders performance portability, as has been shown by work on StarPU @cite_3 . As with OpenACC, it also assumes that a single GPU is in use.
{ "cite_N": [ "@cite_3" ], "mid": [ "2121893797" ], "abstract": [ "In the field of HPC, the current hardware trend is to design multiprocessor architectures featuring heterogeneous technologies such as specialized coprocessors (e.g. Cell BE) or data-parallel accelerators (e.g. GPUs). Approaching the theoretical performance of these architectures is a complex issue. Indeed, substantial efforts have already been devoted to efficiently offload parts of the computations. However, designing an execution model that unifies all computing units and associated embedded memory remains a main challenge. We therefore designed StarPU, an original runtime system providing a high-level, unified execution model tightly coupled with an expressive data management library. The main goal of StarPU is to provide numerical kernel designers with a convenient way to generate parallel tasks over heterogeneous hardware on the one hand, and easily develop and tune powerful scheduling algorithms on the other hand. We have developed several strategies that can be selected seamlessly at run-time, and we have analyzed their efficiency on several algorithms running simultaneously over multiple cores and a GPU. In addition to substantial improvements regarding execution times, we have obtained consistent superlinear parallelism by actually exploiting the heterogeneous nature of the machine. We eventually show that our dynamic approach competes with the highly optimized MAGMA library and overcomes the limitations of the corresponding static scheduling in a portable way. Copyright © 2010 John Wiley & Sons, Ltd." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
More recently, OmpSs has been developed to address heterogeneous programming on hybrid CPU/GPU machines as well as clusters thereof @cite_10 . As with Jade and StarPU, this work includes both run-time support for dynamic scheduling, and C language extensions. The OmpSs extensions are based on the pragma mechanism, which allows OmpSs-annotated programs to remain valid sequential programs, as with StarPU's C extensions.
{ "cite_N": [ "@cite_10" ], "mid": [ "2104008467" ], "abstract": [ "Clusters of GPUs are emerging as a new computational scenario. Programming them requires the use of hybrid models that increase the complexity of the applications, reducing the productivity of programmers. We present the implementation of OmpSs for clusters of GPUs, which supports asynchrony and heterogeneity for task parallelism. It is based on annotating a serial application with directives that are translated by the compiler. With it, the same program that runs sequentially in a node with a single GPU can run in parallel in multiple GPUs either local (single node) or remote (cluster of GPUs). Besides performing a task-based parallelization, the runtime system moves the data as needed between the different nodes and GPUs minimizing the impact of communication by using affinity scheduling, caching, and by overlapping communication with the computational task. We show several applications programmed with OmpSs and their performance with multiple GPUs in a local node and in remote nodes. The results show good tradeoff between performance and effort from the programmer." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
Additionally, OmpSs provides a target pragma for task-annotated functions, which specifies where the task is to run, and how it is implemented (for instance, cuda). Unlike StarPU, it appears that tasks may have only one target, introduced with the device keyword @cite_10 . It is up to the programmer to specify which of the input and output arguments are to be copied to and from the device, using additional copy_in, copy_out, and copy_inout clauses.
{ "cite_N": [ "@cite_10" ], "mid": [ "2104008467" ], "abstract": [ "Clusters of GPUs are emerging as a new computational scenario. Programming them requires the use of hybrid models that increase the complexity of the applications, reducing the productivity of programmers. We present the implementation of OmpSs for clusters of GPUs, which supports asynchrony and heterogeneity for task parallelism. It is based on annotating a serial application with directives that are translated by the compiler. With it, the same program that runs sequentially in a node with a single GPU can run in parallel in multiple GPUs either local (single node) or remote (cluster of GPUs). Besides performing a task-based parallelization, the runtime system moves the data as needed between the different nodes and GPUs minimizing the impact of communication by using affinity scheduling, caching, and by overlapping communication with the computational task. We show several applications programmed with OmpSs and their performance with multiple GPUs in a local node and in remote nodes. The results show good tradeoff between performance and effort from the programmer." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
OpenACC is a set of C and Fortran extensions, or directives, designed to simplify off-loading of tasks to accelerators. Version 1.0 of the specification was released in November 2011 @cite_12 . It defines a set of functions and compiler directives to specify parts of a program whose computation may be offloaded to GPUs, to transfer data between main memory and the GPUs, and to synchronize with the execution of those offloaded parts.
{ "cite_N": [ "@cite_12" ], "mid": [ "1534307734" ], "abstract": [ "In this report, we present X-Kaapi's programming model. A X-Kaapi parallel program is a C or C++ sequential program with code annotation using #pragma compiler directives that allow to create tasks. A specific source to source compiler translates X-Kaapi directives to runtime calls." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
Like UPC, XcalableMP (or XMP) is a PGAS extension for C and Fortran for programming distributed shared memory systems @cite_14 , recently extended for clusters that include GPUs @cite_22 . XMP provides directives for OpenMP-style work sharing, such as loop and reduction, along with UPC-style affinity clauses to specify which node executes each iteration. Similar to UPC's shared qualifier @cite_19 , XMP's template, distribute, and align pragmas allow programmers to map arrays to cluster nodes. Code blocks can be turned into OpenMP-style tasks using the task pragma.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_22" ], "mid": [ "", "2050930161", "1673512521" ], "abstract": [ "", "Although MPI is a de-facto standard for parallel programming on distributed memory systems, writing MPI programs is often a time-consuming and complicated process. XcalableMP is a language extension of C and Fortran for parallel programming on distributed memory systems that helps users to reduce those programming efforts. XcalableMP provides two programming models. The first one is the global view model, which supports typical parallelization based on the data and task parallel paradigm, and enables parallelizing the original sequential code using minimal modification with simple, OpenMP-like directives. The other one is the local view model, which allows using CAF-like expressions to describe inter-node communication. Users can even use MPI and OpenMP explicitly in our language to optimize performance explicitly. In this paper, we introduce XcalableMP, the implementation of the compiler, and the performance evaluation result. For the performance evaluation, we parallelized HPCC Benchmark in XcalableMP. It shows that users can describe the parallelization for distributed memory system with a small modification to the original sequential code.", "A GPU is a promising device for further increasing computing performance in high performance computing field. Currently, many programming langauges are proposed for the GPU offloaded from the host, as well as CUDA. However, parallel programming with a multi-node GPU cluster, where each node has one or more GPUs, is a hard work. Users have to describe multi-level parallelism, both between nodes and within the GPU using MPI and a GPGPU language like CUDA. In this paper, we will propose a parallel programming language targeting multi-node GPU clusters. 
We extend XcalableMP, a parallel PGAS (Partitioned Global Address Space) programming language for PC clusters, to provide a productive parallel programming model for multi-node GPU clusters. Our performance evaluation with the N-body problem demonstrated that not only does our model achieve scalable performance, but it also increases productivity since it only requires small modifications to the serial code." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU,eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
XMP-ACC, the XcalableMP extension for GPU programming, supports an offloading programming paradigm, similar in spirit to that of OpenACC @cite_22 . For instance, programmers must explicitly state which objects must be allocated on the GPU, using the replicate pragma, and when they are to be transferred, with the replicate_sync pragma, which hampers performance portability, as already noted. The loop construct is extended with an acc clause, which explicitly instructs the compiler and run-time support to execute the loop on a GPU. The compiler automatically generates CUDA code for the loop.
{ "cite_N": [ "@cite_22" ], "mid": [ "1673512521" ], "abstract": [ "A GPU is a promising device for further increasing computing performance in high performance computing field. Currently, many programming languages are proposed for the GPU offloaded from the host, as well as CUDA. However, parallel programming with a multi-node GPU cluster, where each node has one or more GPUs, is a hard work. Users have to describe multi-level parallelism, both between nodes and within the GPU using MPI and a GPGPU language like CUDA. In this paper, we will propose a parallel programming language targeting multi-node GPU clusters. We extend XcalableMP, a parallel PGAS (Partitioned Global Address Space) programming language for PC clusters, to provide a productive parallel programming model for multi-node GPU clusters. Our performance evaluation with the N-body problem demonstrated that not only does our model achieve scalable performance, but it also increases productivity since it only requires small modifications to the serial code." ] }
1304.0878
2244214024
Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and by automatically scheduling them over all the available processing units. In doing so, it also relieves programmers from the need to know the underlying architecture details: it adapts to the available CPUs and GPUs, and automatically transfers data between main memory and GPUs as needed. While StarPU's approach is successful at addressing run-time scheduling issues, being a C library makes for a poor and error-prone programming interface. This paper presents an effort started in 2011 to promote some of the concepts exported by the library as C language constructs, by means of an extension of the GCC compiler suite. Our main contribution is the design and implementation of language extensions that map to StarPU's task programming paradigm. We argue that the proposed extensions make it easier to get started with StarPU, eliminate errors that can occur when using the C library, and help diagnose possible mistakes. We conclude on future work.
XKaapi is another run-time support library for task scheduling over heterogeneous multi-CPU and multi-GPU machines developed at Inria @cite_16 . It has the same goals as StarPU, but addresses them differently: run-time task scheduling is based on work stealing, and tasks are launched using a spawn operator reminiscent of Cilk. XKaapi supports recursive task invocations, unlike StarPU.
{ "cite_N": [ "@cite_16" ], "mid": [ "1996632297" ], "abstract": [ "The race for Exascale computing has naturally led the current technologies to converge to multi-CPU multi-GPU computers, based on thousands of CPUs and GPUs interconnected by PCI-Express buses or interconnection networks. To exploit this high computing power, programmers have to solve the issue of scheduling parallel programs on hybrid architectures. And, since the performance of a GPU increases at a much faster rate than the throughput of a PCI bus, data transfers must be managed efficiently by the scheduler. This paper targets multi-GPU compute nodes, where several GPUs are connected to the same machine. To overcome the data transfer limitations on such platforms, the available softwares compute, usually before the execution, a mapping of the tasks that respects their dependencies and minimizes the global data transfers. Such an approach is too rigid and it cannot adapt the execution to possible variations of the system or to the application's load. We propose a solution that is orthogonal to the above mentioned: extensions of the Xkaapi software stack that enable to exploit full performance of a multi-GPUs system through asynchronous GPU tasks. Xkaapi schedules tasks by using a standard Work Stealing algorithm and the runtime efficiently exploits concurrent GPU operations. The runtime extensions make it possible to overlap the data transfers and the task executions on current generation of GPUs. We demonstrate that the overlapping capability is at least as important as computing a scheduling decision to reduce completion time of a parallel program. Our experiments on two dense linear algebra problems (Matrix Product and Cholesky factorization) show that our solution is highly competitive with other softwares based on static scheduling. Moreover, we are able to sustain the peak performance (approx. 310 GFlop/s) on DGEMM, even for matrices that cannot be stored entirely in one GPU memory. With eight GPUs, we achieve a speed-up of 6.74 with respect to single-GPU. The performance of our Cholesky factorization, with more complex dependencies between tasks, outperforms the state of the art single-GPU MAGMA code." ] }
1304.1250
2952668869
Minimization of the @math norm, which can be viewed as approximately solving the non-convex least median estimation problem, is a powerful method for outlier removal and hence robust regression. However, current techniques for solving the problem at the heart of @math norm minimization are slow, and therefore cannot scale to large problems. A new method for the minimization of the @math norm is presented here, which provides a speedup of multiple orders of magnitude for data with high dimension. This method, termed Fast @math Minimization, allows robust regression to be applied to a class of problems which were previously inaccessible. It is shown how the @math norm minimization problem can be broken up into smaller sub-problems, which can then be solved extremely efficiently. Experimental results demonstrate the radical reduction in computation time, along with robustness against large numbers of outliers in a few model-fitting problems.
Recently, the Lagrange dual problem of the @math minimization problem posed in @cite_38 was derived in @cite_23 . To further boost the efficiency of the method, the authors of @cite_23 proposed an @math -minimization algorithm for outlier removal. While the aforementioned methods add a single slack variable and repeatedly solve a feasibility problem, the @math algorithm adds one slack variable for each residual and then solves a single convex program. While efficient, this method is only successful on data drawn from particular statistical distributions.
{ "cite_N": [ "@cite_38", "@cite_23" ], "mid": [ "2161220183", "2026888851" ], "abstract": [ "We investigate the use of the L∞ cost function in geometric vision problems. This cost function measures the maximum of a set of model-fitting errors, rather than the sum-of-squares, or L2 cost function that is commonly used (in least-squares fitting). We investigate its use in two problems; multiview triangulation and motion recovery from omnidirectional cameras, though the results may also apply to other related problems. It is shown that for these problems the L∞ cost function is significantly simpler than the L2 cost. In particular L∞ minimization involves finding the minimum of a cost function with a single local (and hence global) minimum on a convex parameter domain. The problem may be recast as a constrained minimization problem and solved using commonly available software. The optimal solution was reliably achieved on problems of small dimension.", "In this paper we consider the problem of outlier removal for large scale multiview reconstruction problems. An efficient and very popular method for this task is RANSAC. However, as RANSAC only works on a subset of the images, mismatches in longer point tracks may go undetected. To deal with this problem we would like to have, as a post processing step to RANSAC, a method that works on the entire (or a larger) part of the sequence. In this paper we consider two algorithms for doing this. The first one is related to a method by Sim & Hartley where a quasiconvex problem is solved repeatedly and the error residuals with the largest error is removed. Instead of solving a quasiconvex problem in each step we show that it is enough to solve a single LP or SOCP which yields a significant speedup. Using duality we show that the same theoretical result holds for our method. The second algorithm is a faster version of the first, and it is related to the popular method of L1-optimization. While it is faster and works very well in practice, there is no theoretical guarantee of success. We show that these two methods are related through duality, and evaluate the methods on a number of data sets with promising results." ] }
1304.1250
2952668869
Minimization of the @math norm, which can be viewed as approximately solving the non-convex least median estimation problem, is a powerful method for outlier removal and hence robust regression. However, current techniques for solving the problem at the heart of @math norm minimization are slow, and therefore cannot scale to large problems. A new method for the minimization of the @math norm is presented here, which provides a speedup of multiple orders of magnitude for data with high dimension. This method, termed Fast @math Minimization, allows robust regression to be applied to a class of problems which were previously inaccessible. It is shown how the @math norm minimization problem can be broken up into smaller sub-problems, which can then be solved extremely efficiently. Experimental results demonstrate the radical reduction in computation time, along with robustness against large numbers of outliers in a few model-fitting problems.
Robust statistical techniques, including the aforementioned robust regression and outlier removal methods, can significantly improve the performance of their classic counterparts. However, they have rarely been applied in the field of image analysis, to problems such as visual recognition, due to their computational expense. The M-estimator method is utilized in @cite_24 for face recognition, achieving high accuracy even in the presence of illumination change and pixel corruption. In @cite_40 , the authors propose a theoretical framework combining reconstructive and discriminative subspace methods for robust classification and regression. This framework acts on subsets of pixels in images to detect outliers.
{ "cite_N": [ "@cite_24", "@cite_40" ], "mid": [ "2122211032", "2136860609" ], "abstract": [ "In this paper we address the problem of illumination invariant face recognition. Using a fundamental concept that in general, patterns from a single object class lie on a linear subspace [2], we develop a linear model representing a probe image as a linear combination of class-specific galleries. In the presence of noise, the well-conditioned inverse problem is solved using the robust Huber estimation and the decision is ruled in favor of the class with the minimum reconstruction error. The proposed Robust Linear Regression Classification (RLRC) algorithm is extensively evaluated for two standard databases and has shown good performance index compared to the state-of-art robust approaches.", "Linear subspace methods that provide sufficient reconstruction of the data, such as PCA, offer an efficient way of dealing with missing pixels, outliers, and occlusions that often appear in the visual data. Discriminative methods, such as LDA, which, on the other hand, are better suited for classification tasks, are highly sensitive to corrupted data. We present a theoretical framework for achieving the best of both types of methods: an approach that combines the discrimination power of discriminative methods with the reconstruction property of reconstructive methods which enables one to work on subsets of pixels in images to efficiently detect and reject the outliers. The proposed approach is therefore capable of robust classification with a high-breakdown point. We also show that subspace methods, such as CCA, which are used for solving regression tasks, can be treated in a similar manner. The theoretical results are demonstrated on several computer vision tasks showing that the proposed approach significantly outperforms the standard discriminative methods in the case of missing pixels and images containing occlusions and outliers." ] }
1304.1220
2951979836
We consider the models of distributed computation defined as subsets of the runs of the iterated immediate snapshot model. Given a task @math and a model @math , we provide topological conditions for @math to be solvable in @math . When applied to the wait-free model, our conditions result in the celebrated Asynchronous Computability Theorem (ACT) of Herlihy and Shavit. To demonstrate the utility of our characterization, we consider a task that has been shown earlier to admit only a very complex @math -resilient solution. In contrast, our generalized computability theorem confirms its @math -resilient solvability in a straightforward manner.
The topological conditions of wait-free task solvability were expressed by Herlihy and Shavit @cite_30 @cite_31 in the form of the ACT. In the restricted case of tasks that, roughly, can be defined without taking process identifiers into account, Herlihy and Rajsbaum @cite_4 @cite_17 derived task solvability conditions in adversarial shared-memory models @cite_0 . This paper proposes a characterization of generic (not necessarily colorless) tasks in any (not necessarily adversarial) sub-IIS model.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_0", "@cite_31", "@cite_17" ], "mid": [ "1965990175", "1870837219", "2618013937", "2149907923", "1976163479" ], "abstract": [ "We give necessary and sufficient combinatorial conditions characterizing the computational tasks that can be solved by N asynchronous processes, up to t of which can fail by halting. The range of possible input and output values for an asynchronous task can be associated with a high-dimensional geometric structure called a simplicial complex. Our main theorem characterizes computability in terms of the topological properties of this complex. Most notably, a given task is computable only if it can be associated with a complex that is simply connected with trivial homology groups. In other words, the complex has “no holes!” Applications of this characterization include the first impossibility results for several long-standing open problems in distributed computing, such as the “renaming” problem of Attiya et al., the “k-set agreement” problem of Chaudhuri, and a generalization of the approximate agreement problem.", "Roughly speaking, a simplicial complex is shellable if it can be constructed by gluing a sequence of n-simplexes to one another along (n - 1)-faces only. Shellable complexes have been studied in the combinatorial topology literature because they have many nice properties. It turns out that many standard models of concurrent computation can be captured either as shellable complexes, or as the simple union of shellable complexes. We consider general adversaries in the synchronous, asynchronous, and semi-synchronous message-passing models, as well as asynchronous shared memory augmented by consensus and set agreement objects. We show how to exploit their common shellability structure to derive new and remarkably succinct tight (or nearly so) lower bounds on connectivity of protocol complexes and hence on solutions to the k-set agreement task in these models.", "At the heart of distributed computing lies the fundamental result that the level of agreement that can be obtained in an asynchronous shared memory model where t processes can crash is exactly t + 1. In other words, an adversary that can crash any subset of size at most t can prevent the processes from agreeing on t values. But what about all the other 2^(2^n−1)−(n+1) adversaries that are not uniform in this sense and might crash certain combination of processes and not others? This paper presents a precise way to classify all adversaries. We introduce the notion of disagreement power: the biggest integer k for which the adversary can prevent processes from agreeing on k values. We show how to compute the disagreement power of an adversary and derive n equivalence classes of adversaries.", "We give necessary and sufficient combinatorial conditions characterizing the tasks that can be solved by asynchronous processes, of which all but one can fail, that communicate by reading and writing a shared memory. We introduce a new formalism for tasks, based on notions from classical algebraic and combinatorial topology, in which a task's possible input and output values are each associated with high-dimensional geometric structures called simplicial complexes. We characterize computability in terms of the topological properties of these complexes. Our formalism thus replaces the \"operational\" notion of a wait-free decision task, expressed in terms of interleaved computations unfolding in time, by a \"static combinatorial\" description expressed in terms of relations among topological spaces, allowing us to exploit powerful theorems from the classic literature on algebraic and combinatorial topology. This approach yields the first impossibility results for several long-standing open problems in distributed computing, such as the \"renaming\" problem of , and the \"@math -set agreement\" problem of Chaudhuri.", "Roughly speaking, a simplicial complex is shellable if it can be constructed by gluing a sequence of n-simplexes to one another along (n-1)-faces only. Shellable complexes have been widely studied because they have nice combinatorial properties. It turns out that several standard models of concurrent computation can be constructed from shellable complexes. We consider adversarial schedulers in the synchronous, asynchronous, and semi-synchronous message-passing models, as well as asynchronous shared memory. We show how to exploit their common shellability structure to derive new and remarkably succinct tight (or nearly so) lower bounds on connectivity of protocol complexes and hence on solutions to the k-set agreement task in these models. Earlier versions of material in this article appeared in the 2010 ACM Symposium on Principles of Distributed Computing (Herlihy and Rajsbaum 2010), and the International Conference on Distributed Computing (Herlihy and Rajsbaum 2010, doi: 10.1145/1835698.1835724)." ] }
1304.1220
2951979836
We consider the models of distributed computation defined as subsets of the runs of the iterated immediate snapshot model. Given a task @math and a model @math , we provide topological conditions for @math to be solvable in @math . When applied to the wait-free model, our conditions result in the celebrated Asynchronous Computability Theorem (ACT) of Herlihy and Shavit. To demonstrate the utility of our characterization, we consider a task that has been shown earlier to admit only a very complex @math -resilient solution. In contrast, our generalized computability theorem confirms its @math -resilient solvability in a straightforward manner.
The IIS model was introduced by Borowsky and Gafni @cite_18 and shown to precisely capture the standard chromatic subdivision of the input complex @cite_14 @cite_5 . Due to the elegance of its topological representation, IIS has been widely used for topological reasoning about distributed computing @cite_30 @cite_9 @cite_18 @cite_31 @cite_2 . In @cite_18 @cite_24 , IIS has been shown equivalent to SM in terms of task solvability. The authors of @cite_6 and, more recently, Raynal and Stainer @cite_20 relate proper subsets of sub-IIS and sub-SM models restricted using specific failure detectors. A recent paper @cite_10 extends these equivalences to arbitrary sub-SM and sub-IIS models, thus justifying the choice of IIS as a model of study.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_9", "@cite_6", "@cite_24", "@cite_2", "@cite_5", "@cite_31", "@cite_10", "@cite_20" ], "mid": [ "1965990175", "2016108929", "", "2024624176", "1538367796", "1498481277", "1526652053", "2093310293", "2149907923", "141391985", "148797143" ], "abstract": [ "We give necessary and sufficient combinatorial conditions characterizing the computational tasks that can be solved by N asynchronous processes, up to t of which can fail by halting. The range of possible input and output values for an asynchronous task can be associated with a high-dimensional geometric structure called a simplicial complex. Our main theorem characterizes computability in terms of the topological properties of this complex. Most notably, a given task is computable only if it can be associated with a complex that is simply connected with trivial homology groups. In other words, the complex has “no holes!” Applications of this characterization include the first impossibility results for several long-standing open problems in distributed computing, such as the “renaming” problem of Attiya et al., the “k-set agreement” problem of Chaudhuri, and a generalization of the approximate agreement problem.", "The invention relates to a fluid supply system and seeks to provide a simpler, safer but reliable system than at present known. The fluid supply system comprises a multi-charge solid propellant gas generator and a fluid expulsion unit integrated into a single unit, the gas generator being provided in a cap (8) for a chamber (1) having a gas portion (3) and a fluid portion (2) and a movable partition (4) between these two portions operable to expel fluid from the chamber when the gas generator is rendered operative. The system also comprises ignition control means (22) operable to control the ignition of the solid propellant charges (13) when required to give a fully controllable system output. The system can be designed for supplying under pressure fluids such as hydraulic oil, fuels, oxidants and water and is particularly useful in aerospace applications.", "", "", "In the Iterated Immediate Snapshot model ( @math ) the memory consists of a sequence of one-shot Immediate Snapshot ( @math ) objects. Processes access the sequence of @math objects, one-by-one, asynchronously, in a wait-free manner; any number of processes can crash. Its interest lies in the elegant recursive structure of its runs, hence of the ease to analyze it round by round. In a very interesting way, Borowsky and Gafni have shown that the @math model and the read write model are equivalent for the wait-free solvability of decision tasks. This paper extends the benefits of the @math model to partially synchronous systems. Given a shared memory model enriched with a failure detector, what is an equivalent @math model? The paper shows that an elegant way of capturing the power of a failure detector and other partially synchronous systems in the @math model is by restricting appropriately its set of runs, giving rise to the Iterated Restricted Immediate Snapshot model ( @math ).", "In round-by-round models of distributed computing processes run in a sequence of (synchronous or asynchronous) rounds. The advantage of the round-by-round approach is that invariants established in the first round are preserved in later rounds. An elegant asynchronous round-by-round shared memory model is the iterated snapshots model (IS). Instead of the snapshots model where processes share an array m[ċ] that can be accessed any number of times, indexed by process ID, where Pi writes to m[i] and can take a snapshot of the entire array, we have processes share a two-dimensional array m[ċ, ċ], indexed by iteration number and by process ID, where Pi in iteration r writes once to m[r, i] and takes one snapshot of row r, m[r, ċ]. The IS model lends itself more easily to combinatorial analysis. However, to show that whenever a task is impossible in the IS model the task is impossible in the snapshots model, a simulation is needed. Such a simulation was presented by Borowsky and Gafni in PODC97; namely, it was shown how to take a wait-free protocol for the snapshots model, and transform it into a protocol for the IS model, solving the same task. In this paper we present a new simulation from the snapshots model to the IS model, and show that it can be extended to work with models stronger than wait-free. The main contribution is to show that the simulation can work with models that have access to certain communication objects, called 01-tasks. This extends the result of Gafni, Rajsbaum and Herlihy in DISC'2006 stating that renaming is strictly weaker than set agreement from the IS model to the usual non-iterated wait-free read write shared memory model. We also show that our simulation works with t-resilient models and the more general dependent process failure model of Junqueira and Marzullo. This version of the simulation extends previous results by Herlihy and Rajsbaum in PODC'2010 and DISC'2010 about the topological connectivity of a protocol complex in an iterated dependent process failure model, to the corresponding non-iterated model.", "Distributed Computing Through Combinatorial Topology describes techniques for analyzing distributed algorithms based on award winning combinatorial topology research. The authors present a solid theoretical foundation relevant to many real systems reliant on parallelism with unpredictable delays, such as multicore microprocessors, wireless networks, distributed systems, and Internet protocols. Today, a new student or researcher must assemble a collection of scattered conference publications, which are typically terse and commonly use different notations and terminologies. This book provides a self-contained explanation of the mathematics to readers with computer science backgrounds, as well as explaining computer science concepts to readers with backgrounds in applied mathematics. The first section presents mathematical notions and models, including message passing and shared-memory systems, failures, and timing models. The next section presents core concepts in two chapters each: first, proving a simple result that lends itself to examples and pictures that will build up readers' intuition; then generalizing the concept to prove a more sophisticated result. The overall result weaves together and develops the basic concepts of the field, presenting them in a gradual and intuitively appealing way. The book's final section discusses advanced topics typically found in a graduate-level course for those who wish to explore further. Gathers knowledge otherwise spread across research and conference papers using consistent notations and a standard approach to facilitate understanding. Presents unique insights applicable to multiple computing fields, including multicore microprocessors, wireless networks, distributed systems, and Internet protocols. Synthesizes and distills material into a simple, unified presentation with examples, illustrations, and exercises", "", "We give necessary and sufficient combinatorial conditions characterizing the tasks that can be solved by asynchronous processes, of which all but one can fail, that communicate by reading and writing a shared memory. We introduce a new formalism for tasks, based on notions from classical algebraic and combinatorial topology, in which a task's possible input and output values are each associated with high-dimensional geometric structures called simplicial complexes. We characterize computability in terms of the topological properties of these complexes. This characterization has a surprising geometric interpretation: a task is solvable if and only if the complex representing the task's allowable inputs can be mapped to the complex representing the task's allowable outputs by a function satisfying certain simple regularity properties. Our formalism thus replaces the \"operational\" notion of a wait-free decision task, expressed in terms of interleaved computations unfolding in time, by a \"static combinatorial\" description expressed in terms of relations among topological spaces, allowing us to exploit powerful theorems from the classic literature on algebraic and combinatorial topology. This approach yields the first impossibility results for several long-standing open problems in distributed computing, such as the \"renaming\" problem of , and the \"@math -set agreement\" problem of Chaudhuri.", "The Iterated Immediate Snapshot model (IIS), due to its elegant topological representation, has become standard for applying topological reasoning to distributed computing. In this paper, we focus on relations between IIS and the more realistic (non-iterated) read-write Atomic-Snapshot memory model (AS). We grasp equivalences between subsets of runs of AS and IIS (we call them sub-IIS and sub-AS models). To establish an equivalence between a sub-AS model M and a sub-IIS model M', we need two algorithms, a forward simulation F : AS -> IIS and a backward simulation B: IIS -> AS, such that B(F(M)) is a subset of M and F(B(M')) is a subset of M'. AS and IIS are provided with such simulations and, thus, they have the same task computability power. However, the relations between proper sub-AS and sub-IIS models remained unclear until now. In this paper, we present a two-way simulation protocol that provides an equivalent sub-IIS model for any adversarial sub-AS model, i.e., for any sub-AS model specified by the sets of live processes. We achieve the result by ensuring that, under the two-way simulation, the set of live processes in an AS run coincides with the set of fast processes in the simulated IIS run, and vice versa.", "The base distributed asynchronous read write computation model is made up of n asynchronous processes which communicate by reading and writing atomic registers only. The distributed asynchronous iterated model is a more constrained model in which the processes execute an infinite number of rounds and communicate at each round with a new object called immediate snapshot object. Moreover, in both models up to n−1 processes may crash in an unexpected way. When considering computability issues, two main results are associated with the previous models. The first states that they are computationally equivalent for decision tasks. The second states that they are no longer equivalent when both are enriched with the same failure detector. This paper shows how to capture failure detectors in each model so that both models become computationally equivalent. To that end it introduces the notion of a \"strongly correct\" process which appears particularly well-suited to the iterated model, and presents simulations that prove the computational equivalence when both models are enriched with the same failure detector. The paper extends also these simulations to the case where the wait-freedom requirement is replaced by the notion of t-resilience." ] }
1304.1220
2951979836
We consider the models of distributed computation defined as subsets of the runs of the iterated immediate snapshot model. Given a task @math and a model @math , we provide topological conditions for @math to be solvable in @math . When applied to the wait-free model, our conditions result in the celebrated Asynchronous Computability Theorem (ACT) of Herlihy and Shavit. To demonstrate the utility of our characterization, we consider a task that has been shown earlier to admit only a very complex @math -resilient solution. In contrast, our generalized computability theorem confirms its @math -resilient solvability in a straightforward manner.
The difficulty of dealing with certain problems in certain non-compact models, such as consensus and @math -resilience, has been studied before by Lubitsch and Moran @cite_27 , Brit and Moran @cite_21 , and Moses and Rajsbaum @cite_29 . By deriving topological solvability conditions for any task and any sub-IIS model, this paper brings this work to a higher level of generality. The continuous space @math has appeared previously in the work of Saks and Zaharoglou @cite_28 , where it was used to derive the impossibility of wait-free set agreement.
{ "cite_N": [ "@cite_29", "@cite_28", "@cite_27", "@cite_21" ], "mid": [ "1988336888", "1967858331", "2084155082", "182508520" ], "abstract": [ "This paper introduces a simple notion of layering as a tool for analyzing well-behaved runs of a given model of distributed computation. Using layering, a model-independent analysis of the consensus problem is performed and then applied to proving lower bounds and impossibility results for consensus in a number of familiar and less familiar models. The proofs are simpler and more direct than existing ones, and they expose a unified structure to the difficulty of reaching consensus. In particular, the proofs for the classical synchronous and asynchronous models now follow the same outline. A new notion of connectivity among states in runs of a consensus protocol, called potence connectivity, is introduced. This notion is more general than previous notions of connectivity used for this purpose and plays a key role in the uniform analysis of consensus.", "In the classical consensus problem, each of n processors receives a private input value and produces a decision value which is one of the original input values, with the requirement that all processors decide the same value. A central result in distributed computing is that, in several standard models including the asynchronous shared-memory model, this problem has no deterministic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri [ Inform. and Comput., 105 (1993), pp. 132--158], where the agreement condition is weakened so that the decision values produced may be different, as long as the number of distinct values is at most k. For @math it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper, we resolve this question by showing that for any k < n, there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. 
The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "Analyzing distributed protocols in various models often involves a careful analysis of the set of admissible runs, for which the protocols should behave correctly. In particular, the admissible runs assumed by a t-resilient protocol are runs which are fair for all but at most t processors. In this paper we define closed sets of runs, and suggest a technique to prove impossibility results for t-resilient protocols, by restricting the corresponding sets of admissible runs to smaller sets, which are closed, as follows: For each protocol PR and for each initial configuration c, the set of admissible runs of PR which start from c defines a tree in a natural way: the root of the tree is the empty run, and each vertex in it denotes a finite prefix of an admissible run; a vertex u in the tree has a son v iff v is also a prefix of an admissible run, which extends u by one atomic step.The tree of admissible runs described above may contain infinite paths which are not admissible runs. A set of admissible runs is closed if for every possible initial configuration c, each path in the tree of admissible runs starting from c is also an admissible run. Closed sets of runs have the simple combinatorial structure of the set of paths of an infinite tree, which makes them easier to analyze. We introduce a unified method for constructing closed sets of admissible runs by using a model-independent construction of closed schedulers, and then mapping these schedulers to closed sets of runs. 
We use this construction to provide a unified proof of impossibility of consensus protocols.", "" ] }
1304.0793
2950541290
This study proposes an audio copy detection system that is robust to various attacks. These include the severe pitch shift and tempo change attacks which existing systems fail to detect. First, we propose a novel two dimensional representation for audio signals called the time-chroma image. This image is based on a modification of the concept of chroma in the music literature and is shown to achieve better performance in song identification. Then, we propose a novel fingerprinting algorithm that extracts local fingerprints from the time-chroma image. The proposed local fingerprinting algorithm is invariant to time frequency scale changes in audio signals. It also outperforms existing methods like SIFT by a great extent. Finally, we introduce a song identification algorithm that uses the proposed fingerprints. The resulting copy detection system is shown to significantly outperform existing methods. Besides being able to detect whether a song (or a part of it) has been copied, the proposed system can accurately estimate the amount of pitch shift and or tempo change that might have been applied to a song.
Probably the best-known publicly available audio fingerprinting algorithm is Shazam @cite_11 , which is based on local audio fingerprints. With Shazam, people can find the song they are looking for using their smart phones. Shazam uses the peaks (maxima) observed in the spectrogram of an audio signal as the local feature points of a song. Feature descriptors (fingerprints) are then generated from the attributes of pairs of these points: the frequency of each point in a pair, together with the time difference between the two points, forms a compact fingerprint. The extracted fingerprints are shown to be highly robust to audio compression, foreground voices, and other types of noise. However, they are not robust to tempo changes or pitch shifts.
{ "cite_N": [ "@cite_11" ], "mid": [ "122994913" ], "abstract": [ "We have developed and commercially deployed a flexible audio search engine. The algorithm is noise and distortion resistant, computationally efficient, and massively scalable, capable of quickly identifying a short segment of music captured through a cellphone microphone in the presence of foreground voices and other dominant noise, and through voice codec compression, out of a database of over a million tracks. The algorithm uses a combinatorially hashed time-frequency constellation analysis of the audio, yielding unusual properties such as transparency, in which multiple tracks mixed together may each be identified. Furthermore, for applications such as radio monitoring, search times on the order of a few milliseconds per query are attained, even on a massive music database." ] }
1304.0793
2950541290
This study proposes an audio copy detection system that is robust to various attacks. These include the severe pitch shift and tempo change attacks which existing systems fail to detect. First, we propose a novel two dimensional representation for audio signals called the time-chroma image. This image is based on a modification of the concept of chroma in the music literature and is shown to achieve better performance in song identification. Then, we propose a novel fingerprinting algorithm that extracts local fingerprints from the time-chroma image. The proposed local fingerprinting algorithm is invariant to time frequency scale changes in audio signals. It also outperforms existing methods like SIFT by a great extent. Finally, we introduce a song identification algorithm that uses the proposed fingerprints. The resulting copy detection system is shown to significantly outperform existing methods. Besides being able to detect whether a song (or a part of it) has been copied, the proposed system can accurately estimate the amount of pitch shift and or tempo change that might have been applied to a song.
Serra @cite_16 proposed a global audio feature extraction algorithm based on a two-dimensional representation of the audio signal which we call the time-chroma image. The time-chroma image represents the chroma content @cite_4 of an audio signal over time. Through extensive experiments, Serra showed that the time-chroma representation of an audio signal is a promising platform for designing audio detection algorithms. However, because the method relies on global features, it does not extend to mash-up attacks. We will discuss the time-chroma representation later in this paper.
{ "cite_N": [ "@cite_16", "@cite_4" ], "mid": [ "2137319814", "2111007352" ], "abstract": [ "We present a new technique for audio signal comparison based on tonal subsequence alignment and its application to detect cover versions (i.e., different performances of the same underlying musical piece). Cover song identification is a task whose popularity has increased in the music information retrieval (MIR) community along in the past, as it provides a direct and objective way to evaluate music similarity algorithms. This paper first presents a series of experiments carried out with two state-of-the-art methods for cover song identification. We have studied several components of these (such as chroma resolution and similarity, transposition, beat tracking or dynamic time warping constraints), in order to discover which characteristics would be desirable for a competitive cover song identifier. After analyzing many cross-validated results, the importance of these characteristics is discussed, and the best performing ones are finally applied to the newly proposed method. Multiple evaluations of this one confirm a large increase in identification accuracy when comparing it with alternative state-of-the-art approaches.", "A display device in which a cathode ray tube envelope is utilized with a plurality of conductive elements extending from the interior surface of the faceplate of the cathode ray to the exterior surface of the cathode ray tube. A high resolution faceplate is provided by fabricating the faceplate in a manner such that the conductive elements are formed on a surface transverse to the inner and outer surfaces of the faceplate and this transverse surface is then formed into the faceplate to provide a vacuum type window." ] }
1304.0637
2952845537
Parallel transmission, as defined in high-speed Ethernet standards, enables to use less expensive optoelectronics and offers backwards compatibility with legacy Optical Transport Network (OTN) infrastructure. However, optimal parallel transmission does not scale to large networks, as it requires computationally expensive multipath routing algorithms to minimize differential delay, and thus the required buffer size, optimize traffic splitting ratio, and ensure frame synchronization. In this paper, we propose a novel framework for high-speed Ethernet, which we refer to as network coded parallel transmission, capable of effective buffer management and frame synchronization without the need for complex multipath algorithms in the OTN layer. We show that using network coding can reduce the delay caused by packet reordering at the receiver, thus requiring a smaller overall buffer size, while improving the network throughput. We design the framework in full compliance with high-speed Ethernet standards specified in IEEE802.3ba and present solutions for network encoding, data structure of coded parallel transmission, buffer management and decoding at the receiver side. The proposed network coded parallel transmission framework is simple to implement and represents a potential major breakthrough in the system design of future high-speed Ethernet.
Since the early work of over a decade ago, network coding has gained significant attention. @cite_4 showed that a multicast connection can achieve the maximum flow capacity between the source and any receiver using network coding, which is otherwise not achievable with traditional store-and-forward networking. Beyond the single-source multicast network coding problem, the multi-source multicast problem with network coding has also been studied. Here, multiple disjoint subgraphs of the given network are created, and network coding is applied simultaneously to multiple sessions. @cite_2 proposed a linear-programming-based optimization model to find an optimal construction of the subgraphs. In @cite_1 , the efficiency of various coding schemes was studied for multicast connections, and the simplest linear network coding was shown to be sufficient to reach the optimum, i.e., the max-flow between the source and every receiver. In @cite_6 , it was shown algebraically that network coding can be reduced to operations on matrices, which allows for the use of random linear network coding.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_6", "@cite_2" ], "mid": [ "2106403318", "", "2138928022", "2953360229" ], "abstract": [ "Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.", "", "We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102), who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. 
The results are derived for both delay-free networks and networks with delays.", "We present a capacity-achieving coding scheme for unicast or multicast over lossy packet networks. In the scheme, intermediate nodes perform additional coding yet do not decode nor even wait for a block of packets before sending out coded packets. Rather, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of previously received packets. All coding and decoding operations have polynomial complexity. We show that the scheme is capacity-achieving as long as packets received on a link arrive according to a process that has an average rate. Thus, packet losses on a link may exhibit correlation in time or with losses on other links. In the special case of Poisson traffic with i.i.d. losses, we give error exponents that quantify the rate of decay of the probability of error with coding delay. Our analysis of the scheme shows that it is not only capacity-achieving, but that the propagation of packets carrying \"innovative\" information follows the propagation of jobs through a queueing network, and therefore fluid flow models yield good approximations. We consider networks with both lossy point-to-point and broadcast links, allowing us to model both wireline and wireless packet networks." ] }
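The reduction of network coding to matrix operations, and the resulting random linear network coding, can be sketched in a few lines over GF(2). This is a simplification for illustration: practical systems work over larger fields such as GF(2^8) to keep the probability of linearly dependent combinations low, and the packet values and coefficient vectors below are made up.

```python
import random

def rlnc_encode(packets, n_coded, rng):
    """Random linear network coding over GF(2): each coded packet is the XOR
    of a random subset of source packets, sent with its coefficient vector."""
    k = len(packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2). Returns the k source packets if the
    received coefficient vectors span GF(2)^k, otherwise None."""
    pivots = {}                       # pivot column -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs = coeffs[:]
        for col in range(k):
            if not coeffs[col]:
                continue
            if col in pivots:         # reduce against the existing pivot row
                pc, pp = pivots[col]
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload ^= pp
            else:                     # new pivot found
                pivots[col] = (coeffs, payload)
                break
        # a row reduced to all zeros is a redundant packet and is ignored
    if len(pivots) < k:
        return None
    for col in sorted(pivots, reverse=True):   # back-substitution
        coeffs, payload = pivots[col]
        for c2 in range(col + 1, k):
            if coeffs[c2]:
                coeffs = [a ^ b for a, b in zip(coeffs, pivots[c2][0])]
                payload ^= pivots[c2][1]
        pivots[col] = (coeffs, payload)
    return [pivots[col][1] for col in range(k)]

packets = [0x5A, 0x3C, 0xF0]
coded = rlnc_encode(packets, 8, random.Random(1))
decoded = rlnc_decode(coded, len(packets))  # == packets whenever the
                                            # coefficients span GF(2)^3
coded_manual = [([1, 0, 0], 5), ([1, 1, 0], 6), ([1, 1, 1], 1)]
print(rlnc_decode(coded_manual, 3))         # prints [5, 3, 7]
```

The key property exploited by random linear network coding is that intermediate nodes need no coordination: any k linearly independent combinations suffice for the receiver to invert the coefficient matrix and recover the sources.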
1304.0637
2952845537
Parallel transmission, as defined in high-speed Ethernet standards, enables to use less expensive optoelectronics and offers backwards compatibility with legacy Optical Transport Network (OTN) infrastructure. However, optimal parallel transmission does not scale to large networks, as it requires computationally expensive multipath routing algorithms to minimize differential delay, and thus the required buffer size, optimize traffic splitting ratio, and ensure frame synchronization. In this paper, we propose a novel framework for high-speed Ethernet, which we refer to as network coded parallel transmission, capable of effective buffer management and frame synchronization without the need for complex multipath algorithms in the OTN layer. We show that using network coding can reduce the delay caused by packet reordering at the receiver, thus requiring a smaller overall buffer size, while improving the network throughput. We design the framework in full compliance with high-speed Ethernet standards specified in IEEE802.3ba and present solutions for network encoding, data structure of coded parallel transmission, buffer management and decoding at the receiver side. The proposed network coded parallel transmission framework is simple to implement and represents a potential major breakthrough in the system design of future high-speed Ethernet.
Furthermore, to the best of our knowledge, no prior work addresses applications of network coding to frame synchronization, buffer management, and network throughput for high-speed Ethernet. We studied optical parallel transmission to support high-speed Ethernet in @cite_9 for the first time, with an optimized parallel transmission scheme based on OTN WDM networks. Such frameworks usually rely on the OTN layer for synchronization and routing information management. Similarly to our work, most multipath routing proposals assume the existence of a complex off-line optimization tool for minimizing differential delay and dimensioning buffers. Such assumptions yield solutions that neither scale to larger networks nor are feasible for on-line implementation. Our contribution in this paper is to design a network coded parallel transmission solution, which can be applied to a network of any size, with or without an OTN layer, and without complex multipath optimization algorithms. Our goal is to show that the overhead of introducing network coding into the system design is a small price to pay for simplifying the optical network control layer and improving system performance.
{ "cite_N": [ "@cite_9" ], "mid": [ "2118265507" ], "abstract": [ "The emerging high-speed Ethernet is expected to take full advantage of the currently deployed optical infrastructure, i.e., optical transport network over wavelength division multiplexing (OTN WDM) networks. Parallel transmission is a viable option towards this goal, as exemplified by several IEEE and ITU-T standards. The optical virtual concatenation protocol in the OTN layer defined in ITU-T G.709 enables high-speed Ethernet signals to be decoupled into low rate virtual containers. The multiple lane distribution layer defined in IEEE 802.3ba facilitates the optical parallel transmission by stripping Ethernet signals into multiple low rate lanes which can be mapped onto optical channels. In this paper, we propose a new optimization framework for parallel transmission in OTN WDM networks to support high-speed Ethernet. We formulate the parallel transmission optimization as an integer linear programming problem encompassing three sub-problems: parallel wavelength routing and assignment, usage of electronic buffering for skew compensation, and bufferless parallel transmission. To reduce computational complexity, we deploy multi-objective evolutionary optimization. The numerical results show that parallel transmission in OTN WDM networks is feasible, and optimal solutions can be obtained with minimum resource consumption and bufferless system design." ] }
1303.7054
1983429958
This work investigates the maximum broadcast throughput and its achievability in multi-hop wireless networks with half-duplex node constraint. We allow the use of physical-layer network coding (PNC). Although the use of PNC for unicast has been extensively studied, there has been little prior work on PNC for broadcast. Our specific results are as follows: 1) For single-source broadcast, the theoretical throughput upper bound is n (n+1), where n is the "min vertex-cut" size of the network. 2) In general, the throughput upper bound is not always achievable. 3) For grid and many other networks, the throughput upper bound n (n+1) is achievable. Our work can be considered as an attempt to understand the relationship between max-flow and min-cut in half-duplex broadcast networks with cycles (there has been prior work on networks with cycles, but not half-duplex broadcast networks).
In graph theory @cite_4 , the max-flow min-cut theorem specifies that the maximum throughput in a single-source unicast network is equal to the min-cut. Network coding @cite_2 provides a solution to achieve the upper-bound min-cut throughput in a single-source multicast network. Linear network coding was shown to suffice to achieve the optimum for the multicast problem in @cite_0 and @cite_10 . A polynomial-complexity algorithm to construct deterministic network codes that achieve the multicast capacity is given in @cite_9 . Refs. @cite_3 and @cite_5 introduced random linear network coding and showed that it can achieve the multicast capacity with high probability. PNC, first proposed in @cite_1 , incorporates signal processing techniques to realize network coding operations at the physical layer when overlapped signals are simultaneously received from multiple transmitters. It is a foundation of our investigation here. Most existing works on PNC focus on the unicast scenario. For example, @cite_1 studied unicast in a two-way-relay channel, line networks, and 2D grid networks; @cite_8 and @cite_6 study unicast in general networks by designing distributed MAC protocols. As far as we know, there has been little, if any, prior work on broadcast with physical-layer network coding.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "1515701089", "1531745682", "", "1975099099", "2090609496", "", "2106403318", "", "", "2138928022" ], "abstract": [ "From the reviews: \"Bla Bollob's introductory course on graph theory deserves to be considered as a watershed in the development of this theory as a serious academic subject...The book has chapters on electrical networks, flows, connectivity and matchings, extremal problems, colouring, Ramsey theory, random graphs, and graphs and groups. Each chapter starts at a measured and gentle pace. Classical results are proved and new insight is provided, with the examples at the end of each chapter fully supplementing the text...Even so this allows an introduction not only to some of the deeper results but, more vitally, provides outlines of, and firm insights into, their proofs. Thus in an elementary text book, we gain an overall understanding of well-known standard results, and yet at the same time constant hints of, and guidelines into, the higher levels of the subject. It is this aspect of the book which should guarantee it a permanent place in the literature.\"", "Physical layer (PHY) network coding (PLNC), which performs the operation of network coding at physical layer, is an attractive means to improve throughput of wireless multi-hop networks. The PLNC requires multiple nodes to transmit their packets with accurate synchronization at symbol-level. Therefore, in many works of literature, centralized scheduling with perfect synchronization has been assumed to be employed on top of PLNC. Another common assumption is perfect channel state information (CSI) availability for end-nodes to decode their desired packets. However, these assumptions are challenged when we attempt to apply PLNC to large-scale and distributed networks. 
Therefore, in this paper, we propose a distributed medium access control (MAC) protocol for wireless multi-hop networks employing PLNC. The proposed MAC protocol is a random access protocol, which solves the PHY-specific problems, such as synchronization and CSI, jointly with distributed scheduling at MAC layer. We evaluate throughput of PLNC employing the proposed MAC protocol by computer simulation, and compare it with that of conventional relaying. With numerical results, we investigate the effectiveness of PLNC in distributed wireless multi-hop networks.", "", "A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). The goal of this paper is to show how the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to \"straightforward\" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. PNC can yield higher capacity than straight-forward network coding when applied to wireless networks. We believe this is a first paper that ventures into EM-wave-based network coding at the physical layer and demonstrates its potential for boosting network capacity. PNC opens up a whole new research area because of its implications and new design requirements for the physical, MAC, and network layers of ad hoc wireless stations. 
The resolution of the many outstanding but interesting issues in PNC may lead to a revolutionary new paradigm for wireless ad hoc networking.", "Early simulation experience with wireless ad hoc networks suggests that their capacity can be surprisingly low, due to the requirement that nodes forward each others' packets. The achievable capacity depends on network size, traffic patterns, and detailed local radio interactions. This paper examines these factors alone and in combination, using simulation and analysis from first principles. Our results include both specific constants and general scaling relationships helpful in understanding the limitations of wireless ad hoc networks. We examine interactions of the 802.11 MAC and ad hoc forwarding and the effect on capacity for several simple configurations and traffic patterns. While 802.11 discovers reasonably good schedules, we nonetheless observe capacities markedly less than optimal for very simple chain and lattice networks with very regular traffic patterns. We validate some simulation results with experiments. We also show that the traffic pattern determines whether an ad hoc network's per node capacity will scale to large networks. In particular, we show that for total capacity to scale up with network size the average distance between source and destination nodes must remain small as the network grows. Non-local traffic-patterns in which this average distance grows with the network size result in a rapid decrease of per node capacity. Thus the question “Are large ad hoc networks feasible?” reduces to a question about the likely locality of communication in such networks.", "", "Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. 
We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.", "", "", "We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102), who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays." ] }
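The two-way-relay channel studied in @cite_1 illustrates the core PNC idea: with BPSK, the superimposed amplitude observed at the relay maps directly to the XOR of the two transmitted bits, without the relay decoding either bit individually. The sketch below is an idealized, noise-free version (perfect symbol synchronization and an additive channel are assumed; the real scheme must cope with noise and misalignment):

```python
def bpsk(bit):
    return 1 - 2 * bit          # 0 -> +1, 1 -> -1

def relay_pnc(superimposed):
    """PNC mapping at the relay: the superimposed BPSK amplitude
    (+2, 0, -2) is mapped directly to a XOR b without decoding a or b."""
    return 0 if abs(superimposed) == 2 else 1

a_bits = [0, 1, 1, 0]
b_bits = [1, 1, 0, 0]
# Slot 1: A and B transmit simultaneously; the channel adds the waveforms.
xor_at_relay = [relay_pnc(bpsk(a) + bpsk(b)) for a, b in zip(a_bits, b_bits)]
# Slot 2: the relay broadcasts a^b; each end node XORs with its own bits.
b_recovered = [x ^ a for x, a in zip(xor_at_relay, a_bits)]
a_recovered = [x ^ b for x, b in zip(xor_at_relay, b_bits)]
print(a_recovered == a_bits and b_recovered == b_bits)   # prints True
```

The exchange completes in two slots instead of four under traditional store-and-forward, or three under straightforward network coding, which is the capacity-boosting effect PNC exploits.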
1303.6746
1823442564
We address the problem of finding the maximizer of a nonlinear smooth function, that can only be evaluated point-wise, subject to constraints on the number of permitted function evaluations. This problem is also known as fixed-budget best arm identification in the multi-armed bandit literature. We introduce a Bayesian approach for this problem and show that it empirically outperforms both the existing frequentist counterpart and other Bayesian optimization methods. The Bayesian approach places emphasis on detailed modelling, including the modelling of correlations among the arms. As a result, it can perform well in situations where the number of arms is much larger than the number of allowed function evaluation, whereas the frequentist counterpart is inapplicable. This feature enables us to develop and deploy practical applications, such as automatic machine learning toolboxes. The paper presents comprehensive comparisons of the proposed approach, Thompson sampling, classical Bayesian optimization techniques, more recent Bayesian bandit approaches, and state-of-the-art best arm identification methods. This is the first comparison of many of these methods in the literature and allows us to examine the relative merits of their different features.
Many approaches to bandits and Bayesian optimization focus on online learning (i.e., minimizing cumulative regret) as opposed to optimization . In the realm of optimizing deterministic functions, a few works have proven exponential rates of convergence for simple regret . A stochastic variant of the work of Munos:2011 has been recently proposed by @cite_5 ; this approach uses a tree-based structure for expanding areas of the optimization problem in question, but it requires one to evaluate each cell many times before expanding it, and so may prove expensive in terms of the number of function evaluations.
{ "cite_N": [ "@cite_5" ], "mid": [ "57706852" ], "abstract": [ "We study the problem of global maximization of a function f given a finite number of evaluations perturbed by noise. We consider a very weak assumption on the function, namely that it is locally smooth (in some precise sense) with respect to some semi-metric, around one of its global maxima. Compared to previous works on bandits in general spaces (, 2008; , 2011a) our algorithm does not require the knowledge of this semi-metric. Our algorithm, StoSOO, follows an optimistic strategy to iteratively construct upper confidence bounds over the hierarchical partitions of the function domain to decide which point to sample next. A finite-time analysis of StoSOO shows that it performs almost as well as the best specifically-tuned algorithms even though the local smoothness of the function is not known." ] }
1303.6071
1965740534
Given n independent integer-valued random variables X_1, X_2, ..., X_n and an integer C, we study the fundamental problem of computing the probability that the sum X = X_1 + X_2 + ... + X_n is at most C. We assume that each random variable X_i is implicitly given by an oracle O_i, which, given two input integers n_1, n_2, returns the probability that n_1 ≤ X_i ≤ n_2. We give the first deterministic fully polynomial-time approximation scheme (FPTAS) to estimate the probability up to a relative error of 1 ± ϵ. Our algorithm is based on the technique for approximately counting knapsack solutions developed in (2011).
Our problem is a generalization of the counting knapsack problem. For the counting knapsack problem, Morris and Sinclair @cite_8 obtained the first FPRAS (fully polynomial-time randomized approximation scheme), based on the Markov chain Monte Carlo (MCMC) method. Dyer @cite_12 provided a completely different FPRAS based on dynamic programming. The first deterministic FPTAS was obtained by @cite_0 (see also the journal version @cite_13 ).
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_12", "@cite_8" ], "mid": [ "1989921461", "1985985425", "2003554015", "2005107874" ], "abstract": [ "Given @math elements with non-negative integer weights @math and an integer capacity @math , we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most @math . We give the first deterministic, fully polynomial-time approximation scheme (FPTAS) for estimating the number of solutions to any knapsack constraint (our estimate has relative error @math ). Our algorithm is based on dynamic programming. Previously, randomized polynomial-time approximation schemes (FPRAS) were known first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. In addition, we present a new method for deterministic approximate counting using read-once branching programs. Our approach yields an FPTAS for several other counting problems, including counting solutions for the multidimensional knapsack problem with a constant number of constraints, the general integer knapsack problem, and the contingency tables problem with a constant number of rows.", "Given @math elements with nonnegative integer weights @math and an integer capacity @math , we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most the given capacity. We give a deterministic algorithm that estimates the number of solutions to within relative error @math in time polynomial in @math and @math (fully polynomial approximation scheme). More precisely, our algorithm takes time @math . Our algorithm is based on dynamic programming. Previously, randomized polynomial-time approximation schemes were known first by Morris and Sinclair via Markov chain Monte Carlo techniques and subsequently by Dyer via dynamic programming and rejection sampling.", "We give efficient algorithms to sample uniformly, and count approximately, the solutions to a zero-one knapsack problem. The algorithm is based on using dynamic programming to provide a deterministic relative approximation. Then \"dart throwing\" techniques are used to give arbitrary approximation ratios. We also indicate how further improvements can be obtained using randomized rounding. We extend the approach to several related problems: the m-constraint zero-one knapsack, the general integer knapsack (including its m-constraint version) and contingency tables with constantly many rows.", "We solve an open problem concerning the mixing time of symmetric random walk on the n-dimensional cube truncated by a hyperplane, showing that it is polynomial in n. As a consequence, we obtain a fully polynomial randomized approximation scheme for counting the feasible solutions of a 0-1 knapsack problem. The results extend to the case of any fixed number of hyperplanes. The key ingredient in our analysis is a combinatorial construction we call a \"balanced almost uniform permutation,\" which is of independent interest." ] }
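For intuition about the quantity being approximated above, here is the exact pseudo-polynomial dynamic program for counting knapsack solutions. It runs in O(nC) time, which is exponential in the bit-length of C; the cited FPTAS approximates the same count in time polynomial in n and log C. This sketch is the textbook exact version, not any of the cited algorithms.

```python
def count_knapsack(weights, C):
    """Count subsets of `weights` whose total weight is at most C."""
    # dp[c] = number of subsets of the items seen so far with total weight exactly c
    dp = [0] * (C + 1)
    dp[0] = 1  # the empty subset
    for w in weights:
        # Iterate downward so each item is used at most once.
        for c in range(C, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)  # all subsets with weight <= C

count = count_knapsack([1, 2, 3], 3)  # 5 subsets: {}, {1}, {2}, {3}, {1,2}
```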
1303.5132
1555213928
Several algorithms have been proposed for discovering patterns from trajectories of moving objects, but only a few have concentrated on outlier detection. Existing approaches, in general, discover spatial outliers, and do not provide any further analysis of the patterns. In this paper we introduce semantic spatial and spatio-temporal outliers and propose a new algorithm for trajectory outlier detection. Semantic outliers are computed between regions of interest, where objects have similar movement intention, and there exist standard paths which connect the regions. We show with experiments on real data that the method finds semantic outliers from trajectory data that are not discovered by similar approaches.
By simply analyzing the physical properties of trajectories, it is possible to extract several characteristics of an object's movement, such as acceleration, speed, displacement, and position. Such information can be used to classify trajectories as belonging to pedestrians, cars, ships, or planes @cite_8 . The same information can also be used to cluster trajectories @cite_22 , where those in the same cluster have similar position, velocity, and direction.
{ "cite_N": [ "@cite_22", "@cite_8" ], "mid": [ "2050962120", "2015378554" ], "abstract": [ "Trajectory clustering and behavior pattern extraction are the foundations of research into activity perception of objects in motion. In this paper, a new framework is proposed to extract behavior patterns through trajectory analysis. Firstly, we introduce directional trimmed mean distance (DTMD), a novel method used to measure similarity between trajectories. DTMD has the attributes of anti-noise, self-adaptation and the capability to determine the direction for each trajectory. Secondly, we use a hierarchical clustering algorithm to cluster trajectories. We design a length-weighted linkage rule to enhance the accuracy of trajectory clustering and reduce problems associated with incomplete trajectories. Thirdly, the motion model parameters are estimated for each trajectory's classification, and behavior patterns for trajectories are extracted. Finally, the difference between normal and abnormal behaviors can be distinguished.", "We propose a segmentation and feature extraction method for trajectories of moving objects. The methodology consists of three stages: trajectory data preparation; global descriptors computation; and local feature extraction. The key element is an algorithm that decomposes the profiles generated for different movement parameters (velocity, acceleration, etc.) using variations in sinuosity and deviation from the median line. Hence, the methodology enables the extraction of local movement features in addition to global ones that are essential for modeling and analyzing moving objects in applications such as trajectory classification, simulation and extraction of movement patterns. As a case study, we show how the method can be employed in classifying trajectory data generated by unknown moving objects and assigning them to known types of moving objects, whose movement characteristics have been previously learned. We have conducted a series of experiments that provide evidence about the similarities and differences that exist among different types of moving objects. The experiments show that the methodology can be successfully applied in automatic transport mode detection. It is also shown that eye-movement data cannot be successfully used as a proxy of full-body movement of humans, or vehicles." ] }
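The per-segment kinematic features mentioned above (speed, direction, etc.) can be derived directly from timestamped coordinates. A minimal sketch, assuming planar coordinates and strictly increasing timestamps (both assumptions of this example, not of the cited papers):

```python
import math

def motion_features(track):
    """Per-segment (speed, heading_degrees) from a list of (t, x, y) samples."""
    feats = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        dx, dy = x1 - x0, y1 - y0
        speed = math.hypot(dx, dy) / dt              # distance / time
        heading = math.degrees(math.atan2(dy, dx))   # direction of travel
        feats.append((speed, heading))
    return feats

# A 3-4-5 triangle segment, then a due-north segment (hypothetical track).
track = [(0, 0.0, 0.0), (1, 3.0, 4.0), (2, 3.0, 9.0)]
feats = motion_features(track)
```

In practice these per-segment features (plus accelerations, i.e., differences of successive speeds) would feed the clustering or classification methods the paragraph describes.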
1303.5194
2147868657
This paper studies the cooperation between a primary system and a cognitive system in a cellular network where the cognitive base station (CBS) relays the primary signal using amplify-and-forward or decode-and-forward protocols, and in return it can transmit its own cognitive signal. While the commonly used half-duplex (HD) assumption may render the cooperation less efficient due to the two orthogonal channel phases employed, we propose that the CBS can work in a full-duplex (FD) mode to improve the system rate region. The problem of interest is to find the achievable primary-cognitive rate region by studying the cognitive rate maximization problem. For both modes, we explicitly consider the CBS transmit imperfections, which lead to the residual self-interference associated with the FD operation mode. We propose closed-form solutions or efficient algorithms to solve the problem when the related residual interference power is non-scalable or scalable with the transmit power. Furthermore, we propose a simple hybrid scheme to select the HD or FD mode based on zero-forcing criterion, and provide insights on the impact of system parameters. Numerical results illustrate significant performance improvement by using the FD mode and the hybrid scheme.
In the area of cooperative cognitive radio, there have been very few works on the use of the FD mode. It is worth mentioning that a theoretical upper bound for the rate region was found in @cite_25 @cite_15 @cite_23 , where the CBS employs dirty paper coding (DPC) to remove the interference at the CU due to the primary signal. However, DPC requires non-causal knowledge of the primary message at the CBS, in addition to its implementation complexity; therefore, in practice, it is unknown how to achieve this region. FD for CR was first proposed in @cite_0 , where the CBS uses the AF protocol and superposition at the CU to improve the rate region. However, @cite_0 assumed that the separation between the transmit and receive antennas at the CBS is perfect and there is no self-interference; therefore, it only provides a performance upper bound for the FD mode.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_25", "@cite_23" ], "mid": [ "1979399087", "2532161418", "2102617152", "2179880807" ], "abstract": [ "For improvement of achievable rate region in cognitive radio channels, a transmission method for secondary users is proposed. At first, the rate region of the sensing based cognitive radio scheme in which secondary users access the channel based on their channel sensing results with sensing error is analyzed. In the multiuser information theory, it is well-known that interference channel cases where both users transmit their own signal simultaneously achieve better rate region than the time-sharing method based on sensing outcomes. Based on this theoretical result, we propose the new transmission scheme in which a secondary transmitter relays the primary user's signal in a full-duplex amplify and forward (AF) manner while transmitting its own data at the same time. In the proposed method, we take account of practical implementation issues which are ignored in the theoretical method guaranteeing an upper bound of a rate region. With different geometric locations and power ratios between the relaying signal and the secondary user's one, we evaluate the achievable rate region of the proposed method and compare with those of the conventional ones.", "Cognitive radios are promising solutions to the problem of overcrowded and inefficient licensed spectrum. In this work we explore the throughput potential of cognitive communication. We summarize different cognitive radio techniques that underlay, overlay and interweave the transmissions of the cognitive user with those of the licensed users. Recently proposed models for cognitive radios based on the overlay technique are described. For the interweave technique, we present a 'two switch' cognitive radio model and develop inner and outer bounds on the secondary radio capacity. Using the two switch model, we investigate the inherent tradeoff between the sensitivity of primary detection and the cognitive link capacity. With numerical results, we compare the throughputs achieved by the secondary user in the different models.", "Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinsker coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.", "In this paper, we consider a communication scenario in which the primary and the cognitive radios wish to communicate to different receivers, subject to mutual interference. In the model that we use, the cognitive radio has noncausal knowledge of the primary radio's codeword. We characterize the largest rate at which the cognitive radio can reliably communicate under the constraint that 1) no rate degradation is created for the primary user, and 2) the primary receiver uses a single-user decoder just as it would in the absence of the cognitive radio. The result holds in a \"low-interference\" regime in which the cognitive radio is closer to its receiver than to the primary receiver. In this regime, our results are subsumed by the results derived in a concurrent and independent work (Wu, 2007). We also demonstrate that, in a \"high-interference\" regime, multiuser decoding at the primary receiver is optimal from the standpoint of maximal jointly achievable rates for the primary and cognitive users." ] }
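The HD/FD trade-off driving the discussion above can be sketched numerically: an FD relay keeps the full time slot but pays a residual self-interference penalty in the SINR, while an HD relay halves the usable time. The linear self-interference model and all parameter values below are illustrative assumptions, not the paper's system model.

```python
import math

def fd_rate(p_tx, gain, noise, self_int_gain):
    # Shannon rate of a full-duplex link whose residual self-interference
    # scales linearly with transmit power (hypothetical scalable model).
    sinr = p_tx * gain / (noise + self_int_gain * p_tx)
    return math.log2(1.0 + sinr)

def hd_rate(p_tx, gain, noise):
    # Half-duplex uses two orthogonal phases, so only half the time is useful.
    return 0.5 * math.log2(1.0 + p_tx * gain / noise)

def hybrid_rate(p_tx, gain, noise, self_int_gain):
    # A simple hybrid scheme: pick whichever mode yields the higher rate.
    return max(fd_rate(p_tx, gain, noise, self_int_gain),
               hd_rate(p_tx, gain, noise))
```

With no residual self-interference, FD strictly dominates HD; as the self-interference gain grows, the hybrid selection falls back to HD, which mirrors the mode-selection idea in the abstract.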
1303.4924
2949089338
As mobile IP-access is becoming the dominant technology for providing wireless services, the demand for more spectrum for this type of access is increasing rapidly. Since IP-access can be used for all types of services, instead of a plethora of dedicated, single-service systems, there is a significant potential to make spectrum use more efficient. In this paper, the feasibility and potential benefit of replacing the current terrestrial UHF TV broadcasting system with a mobile, cellular data (IP-) network is analyzed. In the cellular network, TV content would be provided as one of the services, here referred to as CellTV. In the investigation we consider typical Swedish rural and urban environments. We use different models for TV viewing patterns and cellular technologies as expected in the year 2020. Results of the quantitative analysis indicate that CellTV distribution can be beneficial if the TV consumption trend goes towards more specialized programming, more local content, and more on-demand requests. Mobile cellular systems, with their flexible unicast capabilities, will be an ideal platform to provide these services. However, the results also demonstrate that CellTV is not a spectrum-efficient replacement for terrestrial TV broadcasting with current viewing patterns (i.e. a moderate number of channels, each with a high number of viewers). In this case, it is doubtful whether the expected spectrum savings can motivate the necessary investments in upgrading cellular sites and developing the advanced TV receivers required for the success of CellTV distribution.
Using LTE technology to provide over-the-air TV service has been proposed in @cite_32 as a 'tower-overlay' system, where the DTT network employs a modified LTE standard for broadcasting TV content to both mobile and fixed reception. Recent studies have considered using not only LTE technology but also cellular infrastructure for providing TV services. In @cite_23 , the amount of spectrum needed for delivering today's over-the-air TV service is calculated, taking different cities in the USA as references. Its focus is limited to densely populated (urban) areas, where the typical inter-site distance (ISD) of cellular networks is smaller than 2 km, which ensures good performance of the eMBMS network. The larger ISDs typical of rural areas would considerably degrade the spectral efficiency of the SFN due to the long propagation delay, as shown in @cite_2 , thus requiring a far larger amount of spectrum to provide the same service. Therefore, it is not evident from urban results alone that replacing the DTT service with mobile networks is feasible. Besides, the possibility of employing unicast for less popular TV channels is not exploited in this analysis, although it may reduce the spectrum requirement, as indicated by results from earlier studies.
{ "cite_N": [ "@cite_32", "@cite_23", "@cite_2" ], "mid": [ "", "2135690695", "2125848449" ], "abstract": [ "", "Wide-spread provisioning of TV services has strongly shaped the cultural development since the last century; terrestrial radio broadcast transmission has been the original form of TV distribution. Although the majority of TV reception is today based on alternative distribution means, like cable or satellite, TV broadcast enjoys still a significant amount of allocated terrestrial spectrum (∼300 MHz). However, it has been identified that TV broadcast does not efficiently use its allocated spectrum. At the same time, other spectrum users like mobile communication systems experience a tremendous growth and demand for spectrum. The scarcity of radio spectrum has led to the US FCC rule that additional 500 MHz of spectrum are to be identified for mobile broadband systems in the next decade, out of which 120 MHz are to come from the TV band in the next 5 years. In this paper we identify an alternative transmission architecture for TV distribution based on cellular LTE MBMS, with densely placed low-power transmitters that transmit in a synchronized single frequency network. It is demonstrated that in this way a full frequency reuse at all sites is possible, in contrast to the large reuse distances in high-power high-tower TV transmission. As a result, we show that it is possible to support TV services with 84 MHz of spectrum via LTE MBMS, in contrast to the 300 MHz used by today's ATSC TV broadcast system. This approach can be realized in a cost-effective manner by re-using existing mobile network infrastructure and we also show that the total radiated power can be decreased.", "TV is regarded as a key service for mobile devices. In the past, Mobile TV was often associated with broadcast transmission. However, unicast technology is sufficient in many cases, especially since mobile users prefer to access content on-demand, rather than following a fixed schedule. In this paper we will focus on 3G mobile networks, which have been primarily optimized for unicast services. Based on a traffic model we will discuss the capacity limits of 3G networks for unicast distribution of Mobile TV. From the results it can be concluded that the capacity is sufficient for many scenarios. In order to address scenarios in which broadcast is a more appropriate technology, 3GPP has defined a broadcast extension, called Multimedia Broadcast Multicast Service (MBMS). MBMS introduces shared radio broadcast bearers and has thus the capabilities of a real broadcasting technology. We will give a short overview about MBMS including a discussion on MBMS capacity. Since MBMS is primarily a new transport technology, additional application and service layer technologies are required, like electronic service guide and service protection. These mechanisms are standardized by the Open Mobile Alliance (OMA) and are favorably combined with MBMS or 3G unicast distribution in order to create complete end-to-end solutions. In order to optimize a system for delivery of broadcast services over 3G networks, the advantages of broadcast and unicast should be combined. We argue that hybrid unicast-broadcast delivery offers the best system resource usage and also the best user experience, and is thus favorable not only for broadcast delivery in 3G networks, but actually also for non-cellular broadcast systems like DVB-H or DMB." ] }
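The broadcast-versus-unicast spectrum argument above reduces to simple arithmetic: broadcast cost scales with the number of channels, unicast cost with the number of concurrent viewers. A back-of-envelope sketch with entirely hypothetical numbers (the bit rates and spectral efficiencies below are placeholders, not values from the cited studies):

```python
def spectrum_mhz(n_channels, viewers_per_channel, mbps_per_stream,
                 se_broadcast, se_unicast):
    """Required spectrum (MHz) under pure broadcast vs pure unicast.

    se_* are spectral efficiencies in bit/s/Hz (equivalently Mbit/s/MHz).
    """
    # Broadcast: one stream per channel, regardless of audience size.
    broadcast = n_channels * mbps_per_stream / se_broadcast
    # Unicast: one stream per concurrent viewer.
    unicast = n_channels * viewers_per_channel * mbps_per_stream / se_unicast
    return broadcast, unicast

# Popular channels: broadcast wins. Niche channels: unicast wins.
popular = spectrum_mhz(20, 1000, 4.0, 1.0, 2.0)
niche = spectrum_mhz(20, 0.1, 4.0, 1.0, 2.0)
```

This is exactly why the paragraph notes that employing unicast for less popular channels can reduce the total spectrum requirement.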
1303.4823
2068860357
Content-Centric Networking (CCN) is an emerging networking paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. CCN focuses on content distribution, which is arguably not well served by IP. Named-Data Networking (NDN) is an example of CCN. NDN is also an active research project under the NSF Future Internet Architectures (FIA) program. FIA emphasizes security and privacy from the outset and by design. To be a viable Internet architecture, NDN must be resilient against current and emerging threats. This paper focuses on distributed denial-of-service (DDoS) attacks; in particular we address interest flooding, an attack that exploits key architectural features of NDN. We show that an adversary with limited resources can implement such attack, having a significant impact on network performance. We then introduce Poseidon: a framework for detecting and mitigating interest flooding attacks. Finally, we report on results of extensive simulations assessing proposed countermeasure.
There is a large body of prior work on DoS/DDoS attacks on the current Internet infrastructure. The current literature addresses both attacks and countermeasures on the routing infrastructure @cite_25 , packet flooding @cite_7 , reflection attacks @cite_16 , DNS cache poisoning @cite_4 , and SYN flooding attacks @cite_14 . Proposed countermeasures are based on various strategies and heuristics, including anomaly detection @cite_1 , packet filtering @cite_12 , IP traceback @cite_21 @cite_13 , ISP collaborative defenses @cite_10 , and user-collaborative defenses @cite_15 . The authors of @cite_22 present a spectrum of possible DoS/DDoS attacks in NDN. They classify these attacks into interest flooding and content cache poisoning, and provide a high-level overview of possible countermeasures. However, that paper does not analyze specific attacks or evaluate countermeasures.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_7", "@cite_10", "@cite_21", "@cite_1", "@cite_15", "@cite_16", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2127273221", "2089219361", "2337767373", "2136056135", "2170810185", "2017648294", "2146467316", "2162219773", "2164686665", "206164581", "1520068592", "2612997322" ], "abstract": [ "We propose a simple and robust mechanism for detecting SYN flooding attacks. Instead of monitoring the ongoing traffic at the front end (like firewall or proxy) or a victim server itself, we detect the SYN flooding attacks at leaf routers that connect end hosts to the Internet. The simplicity of our detection mechanism lies in its statelessness and low computation overhead, which make the detection mechanism itself immune to flooding attacks. Our detection mechanism is based on the protocol behavior of TCP SYN-FIN (RST) pairs, and is an instance of the Seqnential Change Point Detection [l]. To make the detection mecbanism insensitive to site and access pattern, a non-parametric Cnmnlative Sum (CUSUM) method [4] is applied, thus making the detection mechanism much more generally applicable and its deployment much easier. The efficacy of this detection mechanism is validated by trace-driven simulations. The evaluation results show that the detection mechanism has short detection latency and high detection accuracy. Moreover, due to its proximity to the flooding sources, our mechanism not only sets alarms upon detection of ongoing SYN flooding attacks, but also reveals the location of the flooding sources without resorting to expensive IP traceback.", "The Domain Name System, DNS, is based on nameserver delegations, which introduce complex and subtle dependencies between names and nameservers. In this paper, we present results from a large scale survey of DNS, and show that these dependencies lead to a highly insecure naming system. 
We report specifically on three aspects of DNS security: the properties of the DNS trusted computing base, the extent and impact of existing vulnerabilities in the DNS infrastructure, and the ease with which attacks against DNS can be launched. The survey shows that a typical name depends on 46 servers on average, whose compromise can lead to domain hijacks, while names belonging to some countries depend on a few hundred servers. An attacker exploiting well-documented vulnerabilities in DNS nameservers can hijack more than 30 of the names appearing in the Yahoo and DMOZ.org directories. And certain nameservers, especially in educational institutions, control as much as 10 of the namespace.", "With the growing realization that current Internet protocols are reaching the limits of their senescence, several ongoing research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denialof-Service (DoS) attacks that plague today’s Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) – a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN’s resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. 
This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking.", "Internet users are increases, distributed denial of service (DDoS) attack present a very serious threat to the stability of the internet. The DDoS attack, which is consuming all of the computing or communication resources necessary for the service, is known very difficult to protect. The threat posed by network attacks on large network, such as the internet, demands effective detection method. Therefore, an intrusion detection system on large network is need to efficient real-time detection. In this paper, we propose the entropy-based detection mechanism against DDoS attacks in order to guarantee the transmission of normal traffic and prevent the flood of abnormal traffic. The OPNET simulation results show that our ideas can provide enough services in DDoS attack.", "This paper presents a new distributed approach to detecting DDoS (distributed denial of services) flooding attacks at the traffic-flow level The new defense system is suitable for efficient implementation over the core networks operated by Internet service providers (ISPs). At the early stage of a DDoS attack, some traffic fluctuations are detectable at Internet routers or at the gateways of edge networks. We develop a distributed change-point detection (DCD) architecture using change aggregation trees (CAT). The idea is to detect abrupt traffic changes across multiple network domains at the earliest time. Early detection of DDoS attacks minimizes the floe cling damages to the victim systems serviced by the provider. The system is built over attack-transit routers, which work together cooperatively. Each ISP domain has a CAT server to aggregate the flooding alerts reported by the routers. CAT domain servers collaborate among themselves to make the final decision. 
To resolve policy conflicts at different ISP domains, a new secure infrastructure protocol (SIP) is developed to establish mutual trust or consensus. We simulated the DCD system up to 16 network domains on the Cyber Defense Technology Experimental Research (DETER) testbed, a 220-node PC cluster for Internet emulation experiments at the University of Southern California (USC) Information Science Institute. Experimental results show that four network domains are sufficient to yield a 98 percent detection accuracy with only 1 percent false-positive alarms. Based on a 2006 Internet report on autonomous system (AS) domain distribution, we prove that this DDoS defense system can scale well to cover 84 AS domains. This security coverage is wide enough to safeguard most ISP core networks from real-life DDoS flooding attacks.", "In this paper, we model Probabilistic Packet Marking (PPM) schemes for IP traceback as an identification problem of a large number of markers. Each potential marker is associated with a distribution on tags, which are short binary strings. To mark a packet, a marker follows its associated distribution in choosing the tag to write in the IP header. Since there are a large number of (for example, over 4,000) markers, what the victim receives are samples from a mixture of distributions. Essentially, traceback aims to identify individual distribution contributing to the mixture. Guided by this model, we propose Random Packet Marking (RPM), a scheme that uses a simple but effective approach. RPM does not require sophisticated structure relationship among the tags, and employs a hop-by-hop reconstruction similar to AMS [16]. Simulations show improved scalability and traceback accuracy over prior works. For example, in a large network with over 100K nodes, 4,650 markers induce 63 of false positives in terms of edges identification using the AMS marking scheme; while RPM lowers it to 2 . 
The effectiveness of RPM demonstrates that with prior knowledge of neighboring nodes, a simple and properly designed marking scheme suffices in identifying large number of markers with high accuracy.", "Denial-of-service (DoS) detection techniques - such as activity profiling, change-point detection, and wavelet-based signal analysis - face the considerable challenge of discriminating network-based flooding attacks from sudden increases in legitimate activity or flash events. This survey of techniques and testing results provides insight into our ability to successfully identify DoS flooding attacks. Although each detector shows promise in limited testing, none completely solve the detection problem. Combining various approaches with experienced network operators most likely produce the best results.", "Peer-to-peer content distribution networks can suffer from malicious participants that corrupt content. Current systems verify blocks with traditional cryptographic signatures and hashes. However, these techniques do not apply well to more elegant schemes that use network coding techniques for efficient content distribution. Architectures that use network coding are prone to jamming attacks where the introduction of a few corrupted blocks can quickly result in a large number of bad blocks propagating through the system. Identifying such bogus blocks is difficult and requires the use of homomorphic hashing functions, which are computationally expensive. This paper presents a practical security scheme for network coding that reduces the cost of verifying blocks on-the-fly while efficiently preventing the propagation of malicious blocks. In our scheme, users not only cooperate to distribute the content, but (well-behaved) users also cooperate to protect themselves against malicious users by informing affected nodes when a malicious block is found. We analyze and study such cooperative security scheme and introduce elegant techniques to prevent DoS attacks. 
We show that the loss in efficiency caused by the attackers is limited to the effort the attackers put into corrupting the communication, which is a natural lower bound on the damage to the system. We also show experimentally that checking as low as 1-5% of the received blocks is enough to guarantee low corruption rates.", "Attackers can render distributed denial-of-service attacks more difficult to defend against by bouncing their flooding traffic off of reflectors; that is, by spoofing requests from the victim to a large set of Internet servers that will in turn send their combined replies to the victim. The resulting dilution of locality in the flooding stream complicates the victim's abilities both to isolate the attack traffic in order to block it, and to use traceback techniques for locating the source of streams of packets with spoofed source addresses, such as ITRACE [Be00a], probabilistic packet marking [SWKA00], [SP01], and SPIE [S+01]. We discuss a number of possible defenses against reflector attacks, finding that most prove impractical, and then assess the degree to which different forms of reflector traffic will have characteristic signatures that the victim can use to identify and filter out the attack traffic. Our analysis indicates that three types of reflectors pose particularly significant threats: DNS and Gnutella servers, and TCP-based servers (particularly Web servers) running on TCP implementations that suffer from predictable initial sequence numbers. We argue in conclusion in support of \"reverse ITRACE\" [Ba00] and for the utility of packet traceback techniques that work even for low volume flows, such as SPIE.", "Finding the source of forged Internet Protocol (IP) datagrams in a large, high-speed network is difficult due to the design of the IP protocol and the lack of sufficient capability in most high-speed, high-capacity router implementations.
Typically, not enough of the routers in such a network are capable of performing the packet forwarding diagnostics required for this. As a result, tracking-down the source of a flood-type denial-of-service (DoS) attack is usually difficult or impossible in these networks. CenterTrack is an overlay network, consisting of IP tunnels or other connections, that is used to selectively reroute interesting datagrams directly from edge routers to special tracking routers. The tracking routers, or associated sniffers, can easily determine the ingress edge router by observing from which tunnel the datagrams arrive. The datagrams can be examined, then dropped or forwarded to the appropriate egress point. This system simplifies the work required to determine the ingress adjacency of a flood attack while bypassing any equipment which may be incapable of performing the necessary diagnostic functions.", "", "" ] }
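The mixture-identification view of PPM described above lends itself to a small simulation. The following sketch is purely illustrative (the names, parameters, and the crude counting threshold are ours, not the RPM scheme itself): each marker owns a small set of tags, the victim observes only the pooled tag sample, and reconstruction amounts to deciding which markers contributed to the mixture.

```python
import random
from collections import Counter

def simulate_ppm(num_markers, tags_per_marker, num_packets, seed=0):
    """Toy model of Probabilistic Packet Marking as mixture identification:
    each marker writes a tag drawn from its own small tag set; the victim
    sees only the pooled tag sample and infers which markers contributed."""
    rng = random.Random(seed)
    # Each marker is associated with a small set of short binary tags.
    markers = {m: [f"{m:04b}-{t}" for t in range(tags_per_marker)]
               for m in range(num_markers)}
    # Packets traverse a random subset of "active" markers on the attack path.
    active = set(rng.sample(range(num_markers), k=num_markers // 2))
    observed = Counter()
    for _ in range(num_packets):
        m = rng.choice(sorted(active))
        observed[rng.choice(markers[m])] += 1
    # Victim-side reconstruction: a marker is identified if any of its tags
    # was observed (a crude threshold stands in for the statistical tests a
    # real scheme would use).
    identified = {m for m, tags in markers.items()
                  if any(observed[t] >= 1 for t in tags)}
    return active, identified

active, identified = simulate_ppm(num_markers=8, tags_per_marker=2,
                                  num_packets=2000)
print(identified == active)  # True: all and only the active markers found
```

Because tags are unique per marker in this toy setup, reconstruction is exact; real schemes must contend with shared short tags, which is what drives the false-positive rates discussed above.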
1303.4823
2068860357
Content-Centric Networking (CCN) is an emerging networking paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. CCN focuses on content distribution, which is arguably not well served by IP. Named-Data Networking (NDN) is an example of CCN. NDN is also an active research project under the NSF Future Internet Architectures (FIA) program. FIA emphasizes security and privacy from the outset and by design. To be a viable Internet architecture, NDN must be resilient against current and emerging threats. This paper focuses on distributed denial-of-service (DDoS) attacks; in particular we address interest flooding, an attack that exploits key architectural features of NDN. We show that an adversary with limited resources can implement such an attack, having a significant impact on network performance. We then introduce Poseidon: a framework for detecting and mitigating interest flooding attacks. Finally, we report on results of extensive simulations assessing the proposed countermeasure.
NDN caching performance has recently been investigated with respect to various metrics, including energy impact @cite_32 @cite_27 @cite_24 . The work of Xie et al. @cite_0 addresses cache robustness in NDN. It introduces CacheShield, a proactive mechanism that helps routers avoid caching unpopular content, thereby maximizing cache utilization for popular content. To address the same class of attacks, @cite_5 introduces a lightweight reactive mechanism for detecting cache pollution attacks.
{ "cite_N": [ "@cite_32", "@cite_24", "@cite_0", "@cite_27", "@cite_5" ], "mid": [ "1973179992", "2064237402", "1984778122", "2068309140", "2159587870" ], "abstract": [ "A variety of proposals call for a new Internet architecture focused on retrieving content by name, but it has not been clear that any of these approaches are general enough to support Internet applications like real-time streaming or email. We present a detailed description of a prototype implementation of one such application -- Voice over IP (VoIP) -- in a content-based paradigm. This serves as a good example to show how content-based networking can offer advantages for the full range of Internet applications, if the architecture has certain key properties.", "Our energy efficiency analysis of various content dissemination strategies reveals that a change in network architecture from host-oriented to content-centric networking (CCN) can open new possibilities for energy-efficient content dissemination. In this paper, we consider energy-efficient CCN architecture and validate its energy efficiency via trace-based simulations. The results confirm that CCN is more energy efficient than conventional CDNs and P2P networks, even under incremental deployment of CCN-enabled routers.", "With the advent of content-centric networking (CCN) where contents can be cached on each CCN router, cache robustness will soon emerge as a serious concern for CCN deployment. Previous studies on cache pollution attacks only focus on a single cache server. The question of how caching will behave over a general caching network such as CCN under cache pollution attacks has never been answered. In this paper, we propose a novel scheme called CacheShield for enhancing cache robustness. CacheShield is simple, easy-to-deploy, and applicable to any popular cache replacement policy. 
CacheShield can effectively improve cache performance under normal circumstances, and more importantly, shield CCN routers from cache pollution attacks. Extensive simulations including trace-driven simulations demonstrate that CacheShield is effective for both CCN and today's cache servers. We also study the impact of cache pollution attacks on CCN and reveal several new observations on how different attack scenarios can affect cache hit ratios unexpectedly.", "Many systems employ caches to improve performance. While isolated caches have been studied in-depth, multi-cache systems are not well understood, especially in networks with arbitrary topologies. In order to gain insight into and manage these systems, a low-complexity algorithm for approximating their behavior is required. We propose a new algorithm, termed a-Net, that approximates the behavior of multi-cache networks by leveraging existing approximation algorithms for isolated LRU caches. We demonstrate the utility of a-Net using both per- cache and network-wide performance measures. We also perform factor analysis of the approximation error to identify system parameters that determine the precision of a-Net.", "Content-Centric Networking (CCN) is an emerging paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. In CCN, named content - rather than addressable hosts - becomes a first-class entity. Content is therefore decoupled from its location. This allows, among other things, the implementation of ubiquitous caching. Named-Data Networking (NDN) is a prominent example of CCN. In NDN, all nodes (i.e., hosts, routers) are allowed to have a local cache, used to satisfy incoming requests for content. This makes NDN a good architecture for efficient large scale content distribution. However, reliance on caching allows an adversary to perform attacks that are very effective and relatively easy to implement. 
Such attacks include cache poisoning (i.e., introducing malicious content into caches) and cache pollution (i.e., disrupting cache locality). This paper focuses on cache pollution attacks, where the adversary's goal is to disrupt cache locality to increase link utilization and cache misses for honest consumers. We show, via simulations, that such attacks can be implemented in NDN using limited resources, and that their effectiveness is not limited to small topologies. We then illustrate that existing proactive countermeasures are ineffective against realistic adversaries. Finally, we introduce a new technique for detecting pollution attacks. Our technique detects high and low rate attacks on different topologies with high accuracy." ] }
1303.4823
2068860357
Content-Centric Networking (CCN) is an emerging networking paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. CCN focuses on content distribution, which is arguably not well served by IP. Named-Data Networking (NDN) is an example of CCN. NDN is also an active research project under the NSF Future Internet Architectures (FIA) program. FIA emphasizes security and privacy from the outset and by design. To be a viable Internet architecture, NDN must be resilient against current and emerging threats. This paper focuses on distributed denial-of-service (DDoS) attacks; in particular we address interest flooding, an attack that exploits key architectural features of NDN. We show that an adversary with limited resources can implement such an attack, having a significant impact on network performance. We then introduce Poseidon: a framework for detecting and mitigating interest flooding attacks. Finally, we report on results of extensive simulations assessing the proposed countermeasure.
A slightly different approach has been proposed in @cite_17 . Their technique relies on collaboration between routers and the producers in charge of the namespaces to which fake interests are directed. In @cite_26 , the authors independently investigate how data-driven state can be used to implement various DoS/DDoS attacks. Relevant to our work, their analysis includes: resource exhaustion, which is analogous to our interest flooding attack; mobile blockade, in which a wireless node issues a large number of interests and then disconnects from the network, causing the returned content to consume a large portion of the shared network bandwidth; and state decorrelation attacks, in which an adversary issues updates of local content or cache appearances at a frequency that exceeds the content request routing convergence. Attacks are tested on two physical (i.e., not simulated) topologies comprising three and five NDN routers.
{ "cite_N": [ "@cite_26", "@cite_17" ], "mid": [ "1534162064", "2025584455" ], "abstract": [ "Information-centric networking (ICN) raises data objects to first class routable entities in the network and changes the Internet paradigm from host-centric connectivity to data-oriented delivery. However, current approaches to content routing heavily rely on data-driven protocol events and thereby introduce a strong coupling of the control to the data plane in the underlying routing infrastructure. In this paper, threats to the stability and security of the content distribution system are analyzed in theory, simulations, and practical experiments. We derive relations between state resources and the performance of routers, and demonstrate how this coupling can be misused in practice. We further show how state-based forwarding tends to degrade by decorrelating resources. We identify intrinsic attack vectors present in current content-centric routing, as well as possibilities and limitations to mitigate them. Our overall findings suggest that major architectural refinements are required prior to global ICN deployment in the real world.", "Current Internet is reaching the limits of its capabilities due to its function transition from host-to-host communication to content dissemination. Named Data Networking (NDN) – an instantiation of Content-Centric Networking approach, embraces this shift by stressing the content itself, rather than where it locates. NDN tries to provide better security and privacy than current Internet does, and resilience to Distributed Denial of Service (DDoS) is a significant issue. In this paper, we present a specific and concrete scenario of DDoS attack in NDN, where perpetrators make use of NDN’s packet forwarding rules to send out Interest packets with spoofed names as attacking packets. Afterwards, we identify the victims of NDN DDoS attacks include both the hosts and routers. 
But the largest victim is not the hosts, but the routers, more specifically, the Pending Interest Table (PIT) within the router. PIT brings NDN many elegant features, but it suffers from vulnerability. We propose Interest traceback as a counter measure against the studied NDN DDoS attacks, which traces back to the originator of the attacking Interest packets. At last, we assess the harmful consequences brought by these NDN DDoS attacks and evaluate the Interest traceback counter measure. Evaluation results reveal that the Interest traceback method effectively mitigates the NDN DDoS attacks studied in this paper" ] }
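A minimal detector in the spirit of the countermeasures above can be sketched by tracking, per interface, how many forwarded interests are never satisfied by returning content: interests for fake names expire unsatisfied, so a high unsatisfied ratio suggests interest flooding. This is an illustrative simplification (threshold, window, and class design are ours, not Poseidon's actual statistics).

```python
from collections import defaultdict

class InterestFloodingDetector:
    """Per-interface interest flooding detector (simplified): flag an
    interface when the fraction of its forwarded interests that were never
    satisfied by returning content exceeds a threshold."""
    def __init__(self, threshold=0.5, window=100):
        self.threshold, self.window = threshold, window
        self.sent = defaultdict(int)       # interests forwarded per interface
        self.satisfied = defaultdict(int)  # content returned per interface
    def on_interest(self, iface):
        self.sent[iface] += 1
    def on_data(self, iface):
        self.satisfied[iface] += 1
    def is_flooding(self, iface):
        if self.sent[iface] < self.window:
            return False                   # not enough evidence yet
        unsatisfied = self.sent[iface] - self.satisfied[iface]
        return unsatisfied / self.sent[iface] > self.threshold

det = InterestFloodingDetector(threshold=0.5, window=100)
for _ in range(200):
    det.on_interest("eth0")       # attacker: fake-name interests, no content
for _ in range(200):
    det.on_interest("eth1")
    det.on_data("eth1")           # honest consumer: every interest satisfied
print(det.is_flooding("eth0"), det.is_flooding("eth1"))  # True False
```

A router could react to a flagged interface by rate-limiting it, which is the general shape of the reactive mitigations discussed in this line of work.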
1303.3592
2138613547
Achieving homophily, or association based on similarity, between a human user and a robot holds a promise of improved perception and task performance. However, no previous studies that address homophily via ethnic similarity with robots exist. In this paper, we discuss the difficulties of evoking ethnic cues in a robot, as opposed to a virtual agent, and an approach to overcome those difficulties based on using ethnically salient behaviors. We outline our methodology for selecting and evaluating such behaviors, and culminate with a study that evaluates our hypotheses of the possibility of ethnic attribution of a robot character through verbal and nonverbal behaviors and of achieving the homophily effect.
Anthropology has been a traditional source of qualitative data on behaviors observed in particular social contexts, presented as ethnographies. Such ethnographies produce descriptions of communities in terms of rich points: the differences between the ethnographer's own expectations and what he or she observes @cite_10 . For example, a university faculty member in the US may find it unusual the first time a foreign student addresses her as ``professor.'' The term of address would be a rich point between the professor's and the student's ways of using the language in the context. Note that this rich point can be a cue to the professor that the student is a foreigner, but may not be sufficient to further specify the student's ethnic identity.
{ "cite_N": [ "@cite_10" ], "mid": [ "1993996398" ], "abstract": [ "Ethnography Reconstructed: The Stranger at Fifteen. The Concepts of Fieldwork. Getting Started. Who Are You to Do This? Ethnography. Beginning Fieldwork. Narrowing the Focus. Informal to Formal: Some Examples. The Ethnographic Research Proposal. Ethnography in Context." ] }
1303.3592
2138613547
Achieving homophily, or association based on similarity, between a human user and a robot holds a promise of improved perception and task performance. However, no previous studies that address homophily via ethnic similarity with robots exist. In this paper, we discuss the difficulties of evoking ethnic cues in a robot, as opposed to a virtual agent, and an approach to overcome those difficulties based on using ethnically salient behaviors. We outline our methodology for selecting and evaluating such behaviors, and culminate with a study that evaluates our hypotheses of the possibility of ethnic attribution of a robot character through verbal and nonverbal behaviors and of achieving the homophily effect.
The importance of context has motivated several efforts to collect and analyze corpora of context-specific interactions. Iacobelli and Cassell, for example, coded the verbal and nonverbal behaviors, such as gaze, of African American children during spontaneous play @cite_3 . The CUBE-G project collected a cross-cultural multimodal corpus of dyadic interactions @cite_12 .
{ "cite_N": [ "@cite_12", "@cite_3" ], "mid": [ "2157891642", "1549296319" ], "abstract": [ "Trying to adapt the behavior of an interactive system to the cultural background of the user requires information on how relevant behaviors differ as a function of the user's cultural background. To gain such insights in the interrelation of culture and behavior patterns, the information from the literature is often too anecdotal to serve as the basis for modeling a system's behavior, making it necessary to collect multimodal corpora in a standardized fashion in different cultures. In this chapter, the challenges of such an endeavor are introduced and solutions are presented by examples from a German-Japanese project that aims at modeling culture-specific behaviors for Embodied Conversational Agents.", "In this paper we present the design, development and initial evaluation of a virtual peer that models ethnicity through culturally authentic verbal and non-verbal behaviors. The behaviors chosen for the implementation come from an ethnographic study with African-American and Caucasian children and the evaluation of the virtual peer consists of a study in which children interacted with an African American or a Caucasian virtual peer and then assessed its ethnicity. Results suggest that it may be possible to tip the ethnicity of a embodied conversational agent by changing verbal and non-verbal behaviors instead of surface attributes, and that children engage with those virtual peers in ways that have promise for educational applications." ] }
1303.3962
1681228741
Recent FCC regulations on TV white spaces allow geo-location databases to be the sole source of spectrum information for White Space Devices (WSDs). Geo-location databases protect TV band incumbents by keeping track of TV transmitters and their protected service areas based on their location, transmission parameters and sophisticated propagation models. In this article, we argue that keeping track of both TV transmitters and TV receivers (i.e. TV sets) can achieve significant improvement in the availability of white spaces. We first identify wasted spectrum opportunities, both temporal and spatial, due to the current approach of white spaces detection. We then propose DynaWhite, a cloud-based architecture that orchestrates the detection and dissemination of highly-dynamic, real-time, and fine-grained TV white space information. DynaWhite introduces the next generation of geo-location databases by combining traditional sensing techniques with a novel unconventional sensing approach based on the detection of the passive TV receivers using standard cell phones. We present a quantitative evaluation of the potential gains in white space availability for large scale deployments of DynaWhite. We finally identify challenges that need to be addressed in the research community in order to exploit this potential for leveraging dynamic real-time fine-grained TV white spaces.
There are currently two approaches to ensuring the protection of TV white space incumbents, both based on TV transmitter information. The first approach, adopted by the FCC, relies on geo-location databases that keep track of TV transmitters' parameters and propagation models in order to estimate the areas that need to be protected @cite_5 . The work presented in @cite_7 extends this approach by using sophisticated propagation models and presents a scalable architecture for geo-location databases. The second approach, adopted by the IEEE 802.22 standard, relies on collaborative spectrum sensing among the WSDs. In this approach, WSDs submit their spectrum view to a central entity that is responsible for performing spectrum sharing functionalities @cite_0 . DynaWhite incorporates these two approaches and extends them with its unconventional sensing approach for TV receivers.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_7" ], "mid": [ "2150397642", "2133039196", "" ], "abstract": [ "Spectrum sensing is a key function of cognitive radio to prevent the harmful interference with licensed users and identify the available spectrum for improving the spectrum's utilization. However, detection performance in practice is often compromised with multipath fading, shadowing and receiver uncertainty issues. To mitigate the impact of these issues, cooperative spectrum sensing has been shown to be an effective method to improve the detection performance by exploiting spatial diversity. While cooperative gain such as improved detection performance and relaxed sensitivity requirement can be obtained, cooperative sensing can incur cooperation overhead. The overhead refers to any extra sensing time, delay, energy, and operations devoted to cooperative sensing and any performance degradation caused by cooperative sensing. In this paper, the state-of-the-art survey of cooperative sensing is provided to address the issues of cooperation method, cooperative gain, and cooperation overhead. Specifically, the cooperation method is analyzed by the fundamental components called the elements of cooperative sensing, including cooperation models, sensing techniques, hypothesis testing, data fusion, control channel and reporting, user selection, and knowledge base. Moreover, the impacting factors of achievable cooperative gain and incurred cooperation overhead are presented. The factors under consideration include sensing time and delay, channel impairments, energy efficiency, cooperation efficiency, mobility, security, and wideband sensing issues. The open research challenges related to each issue in cooperative sensing are also discussed.", "The opening of the television bands in the United States presents an exciting opportunity for secondary spectrum utilization. 
Protecting licensed broadcast television viewers from harmful interference due to secondary spectrum usage is critical to the successful deployment of TV white space devices. A wide variety of secondary system operating scenarios must be considered in any potential interference analysis, as described below. Several different types of licensed television transmitters currently exist in the TV bands, along with secondary licensed services, such as wireless microphones. All licensed services must be adequately protected from harmful interference, which can readily and reliably be achieved with the described geo-location database methods. Specific implementation details of geo-location databases are discussed, including several complexity reduction techniques. Geo-location database techniques are also shown to more efficiently utilize available spectrum than other spectrum access techniques.", "" ] }
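In the collaborative-sensing approach, the central entity typically combines the WSDs' binary local decisions with a hard-decision fusion rule. The sketch below illustrates the standard OR, AND, and majority (k-out-of-n) rules; the function name and the toy scenario are ours, for illustration only.

```python
def fuse_decisions(local_decisions, rule="majority"):
    """Hard-decision fusion at the central entity in cooperative spectrum
    sensing: each WSD reports a binary 'incumbent present' vote, and the
    fusion centre combines them with an OR, AND, or majority rule."""
    n = len(local_decisions)
    votes = sum(local_decisions)
    if rule == "or":        # most protective: any single vote declares busy
        return votes >= 1
    if rule == "and":       # least protective: all sensors must agree
        return votes == n
    return votes > n / 2    # majority: balances false alarms and misses

# Five WSDs sense a TV channel; two detect the incumbent (the other three
# might sit in a deep fade or behind shadowing).
reports = [1, 1, 0, 0, 0]
print(fuse_decisions(reports, "or"),        # True:  channel declared busy
      fuse_decisions(reports, "majority"))  # False: channel declared free
```

The choice of rule trades incumbent protection against white space availability, which is exactly the tension the sensing-based approach must manage.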
1303.3962
1681228741
Recent FCC regulations on TV white spaces allow geo-location databases to be the sole source of spectrum information for White Space Devices (WSDs). Geo-location databases protect TV band incumbents by keeping track of TV transmitters and their protected service areas based on their location, transmission parameters and sophisticated propagation models. In this article, we argue that keeping track of both TV transmitters and TV receivers (i.e. TV sets) can achieve significant improvement in the availability of white spaces. We first identify wasted spectrum opportunities, both temporal and spatial, due to the current approach of white spaces detection. We then propose DynaWhite, a cloud-based architecture that orchestrates the detection and dissemination of highly-dynamic, real-time, and fine-grained TV white space information. DynaWhite introduces the next generation of geo-location databases by combining traditional sensing techniques with a novel unconventional sensing approach based on the detection of the passive TV receivers using standard cell phones. We present a quantitative evaluation of the potential gains in white space availability for large scale deployments of DynaWhite. We finally identify challenges that need to be addressed in the research community in order to exploit this potential for leveraging dynamic real-time fine-grained TV white spaces.
On the other hand, detecting TV receivers has been addressed before using either special hardware that senses the power leakage of a receiver's local oscillator @cite_10 @cite_6 , or central trusted databases that are kept up to date @cite_5 @cite_2 . The former technique requires special hardware to be set up in the vicinity of the TV set; such techniques are hard to deploy and do not scale. The latter technique likewise does not scale to large deployments. The work in @cite_2 studies the effect of knowing TV receiver information on the available white spaces, in terms of the amount of additional available frequencies; it assumes that in some countries, e.g., Norway, everyone who owns a TV receiver has to register its information in order to pay the broadcasting license fees.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_6", "@cite_2" ], "mid": [ "2133039196", "2165305598", "", "2082311026" ], "abstract": [ "The opening of the television bands in the United States presents an exciting opportunity for secondary spectrum utilization. Protecting licensed broadcast television viewers from harmful interference due to secondary spectrum usage is critical to the successful deployment of TV white space devices. A wide variety of secondary system operating scenarios must be considered in any potential interference analysis, as described below. Several different types of licensed television transmitters currently exist in the TV bands, along with secondary licensed services, such as wireless microphones. All licensed services must be adequately protected from harmful interference, which can readily and reliably be achieved with the described geo-location database methods. Specific implementation details of geo-location databases are discussed, including several complexity reduction techniques. Geo-location database techniques are also shown to more efficiently utilize available spectrum than other spectrum access techniques.", "Measurements performed at several locations clearly show that frequency spectrum is under-utilized. Cognitive radio is a strong candidate to ensure better spectrum utilization by providing access in an opportunistic manner. Simulations performed on the algorithm that we proposed show promising results in sensing vacant channels in TV bands. Here we present the work, which implements the algorithm on a real-time prototype, and measure its sensing performance in actual environments. Results show that our algorithm is robust under realistic conditions. This confirmation will give much needed confidence in the capability of cognitive radio systems to detect the operation of primary users and protect their use of spectrum.", "", "This paper discusses the increase in available white space spectrum using TV receiver information. 
TV receivers can increase the amount of available spectrum for cognitive radio devices in the TV bands by as much as 120 MHz." ] }
1303.4155
2949303277
Online dating is an increasingly thriving business which boasts billion-dollar revenues and attracts users in the tens of millions. Notwithstanding its popularity, online dating is not impervious to worrisome trust and privacy concerns raised by the disclosure of potentially sensitive data as well as the exposure to self-reported (and thus potentially misrepresented) information. Nonetheless, little research has, thus far, focused on how to enhance privacy and trustworthiness. In this paper, we report on a series of semi-structured interviews involving 20 participants, and show that users are significantly concerned with the veracity of online dating profiles. To address some of these concerns, we present the user-centered design of an interface, called Certifeye, which aims to bootstrap trust in online dating profiles using existing social network data. Certifeye verifies that the information users report on their online dating profile (e.g., age, relationship status, and or photos) matches that displayed on their own Facebook profile. Finally, we present the results of a 161-user Mechanical Turk study assessing whether our veracity-enhancing interface successfully reduced concerns in online dating users and find a statistically significant trust increase.
Recent research also shows that online daters take action when they suspect misrepresentation. Gibbs, Ellison, and Lai @cite_35 found that many participants engaged in information seeking activities, such as ``Googling'' a potential date.
{ "cite_N": [ "@cite_35" ], "mid": [ "2103933416" ], "abstract": [ "This study investigates relationships between privacy concerns, uncertainty reduction behaviors, and self-disclosure among online dating participants, drawing on uncertainty reduction theory and the warranting principle. The authors propose a conceptual model integrating privacy concerns, self-efficacy, and Internet experience with uncertainty reduction strategies and amount of self-disclosure and then test this model on a nationwide sample of online dating participants ( N = 562). The study findings confirm that the frequency of use of uncertainty reduction strategies is predicted by three sets of online dating concerns—personal security, misrepresentation, and recognition—as well as self-efficacy in online dating. Furthermore, the frequency of uncertainty reduction strategies mediates the relationship between these variables and amount of self-disclosure with potential online dating partners. The authors explore the theoretical implications of these findings for our understanding of uncertainty reductio..." ] }
1303.4155
2949303277
Online dating is an increasingly thriving business which boasts billion-dollar revenues and attracts users in the tens of millions. Notwithstanding its popularity, online dating is not impervious to worrisome trust and privacy concerns raised by the disclosure of potentially sensitive data as well as the exposure to self-reported (and thus potentially misrepresented) information. Nonetheless, little research has, thus far, focused on how to enhance privacy and trustworthiness. In this paper, we report on a series of semi-structured interviews involving 20 participants, and show that users are significantly concerned with the veracity of online dating profiles. To address some of these concerns, we present the user-centered design of an interface, called Certifeye, which aims to bootstrap trust in online dating profiles using existing social network data. Certifeye verifies that the information users report on their online dating profile (e.g., age, relationship status, and or photos) matches that displayed on their own Facebook profile. Finally, we present the results of a 161-user Mechanical Turk study assessing whether our veracity-enhancing interface successfully reduced concerns in online dating users and find a statistically significant trust increase.
In addition to trust issues, there seem to be some privacy concerns associated with using ODSs, including disclosing one's presence on a dating site. Couch @cite_9 described the ``risk of exposure'' -- i.e., the risk of a coworker or acquaintance stumbling across one's profile. However, Couch's participants who reported exposure concerns were users of specialty fetish sites and/or sites geared specifically towards extremely short-term relationships. Conversely, our interviews focused on how users looked for medium- to long-term relationships, and we feel there is no longer a social bias against users seeking long-term, monogamous relationships using ODSs.
{ "cite_N": [ "@cite_9" ], "mid": [ "2108329522" ], "abstract": [ "In this paper, we examine the behaviours and experiences of people who use online dating and how they may or may not address risk in their use of online dating. Fifteen people who used online dating took part in in-depth, online chat interviews. We found that online daters use a variety of methods for managing and understanding the risks they perceived to be associated with online dating. Online daters compared the risks of online dating with other activities in their lives to justify their use of the medium. Many felt self-confident in their personal ability to manage and limit any risks they might encounter and, for some, the ability to be able to scapegoat risk (that is to blame others) was a method by which they could contextualize their own experiences and support their own risk strategies. For many, the control offered by the online environment was central to risk management. Additionally, the social context in which an individual encountered a potential risk would shape how they perceived the risk and responded to it. People who use online dating do consider the risks involved and they demonstrate personal autonomy in their risk management. From a public health perspective, it is important to understand how risk is experienced from an individual perspective, but it is imperative that any interventions are implemented at a population level." ] }
1303.4155
2949303277
Online dating is an increasingly thriving business which boasts billion-dollar revenues and attracts users in the tens of millions. Notwithstanding its popularity, online dating is not impervious to worrisome trust and privacy concerns raised by the disclosure of potentially sensitive data as well as the exposure to self-reported (and thus potentially misrepresented) information. Nonetheless, little research has, thus far, focused on how to enhance privacy and trustworthiness. In this paper, we report on a series of semi-structured interviews involving 20 participants, and show that users are significantly concerned with the veracity of online dating profiles. To address some of these concerns, we present the user-centered design of an interface, called Certifeye, which aims to bootstrap trust in online dating profiles using existing social network data. Certifeye verifies that the information users report on their online dating profile (e.g., age, relationship status, and or photos) matches that displayed on their own Facebook profile. Finally, we present the results of a 161-user Mechanical Turk study assessing whether our veracity-enhancing interface successfully reduced concerns in online dating users and find a statistically significant trust increase.
Finally, @cite_13 discussed the concept of social inference, showing that seemingly anonymous profiles can be de-anonymized by utilizing out-of-band knowledge. For instance, someone may know there is only one female, hispanic member of a local soccer team, and use that outside knowledge to de-anonymize her seemingly anonymous profile.
{ "cite_N": [ "@cite_13" ], "mid": [ "2158600876" ], "abstract": [ "New Web 2.0 applications, with their emphasis on collaboration and communication, hold the promise of major advances in social connectivity and coordination; however, they also increase the threats to user privacy. An important, yet under-researched privacy risk results from social inferences about user identity, location, and activities. In this paper, we frame the ‘social inference problem’. We then present the results from a 292 subject experiment that highlights: 1) the prevalence of social inference risks; 2) people’s difficulties in accurately predicting social inference risks; and 3) the relation between information entropy and social inference. We also show how to predict possible social inferences by modeling users’ background knowledge and calculating information entropy and discuss how social inference support systems can be deployed that protect user privacy." ] }
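The entropy view of social inference described in the cited work can be illustrated with a toy sketch: the fewer population members who match a set of disclosed attributes, the lower the entropy of the "who is this?" distribution, and the higher the de-anonymization risk. The population, attribute names, and soccer-team example below are invented for illustration and are not the experimental setup of the cited study.

```python
import math

def inference_risk(population, disclosed):
    """Entropy (in bits) of 'who is this?' given the disclosed attributes.

    population: list of dicts mapping attribute name -> value, one per member.
    disclosed:  dict of attribute name -> value revealed by a profile.
    Entropy 0 means the disclosed attributes single out one individual.
    """
    matches = [p for p in population
               if all(p.get(k) == v for k, v in disclosed.items())]
    n = len(matches)
    if n == 0:
        return None  # no one matches; nothing to infer
    # Uniform uncertainty over the matching candidates: H = log2(n).
    return math.log2(n)

# Hypothetical four-member local soccer team.
team = [
    {"gender": "F", "ethnicity": "hispanic"},
    {"gender": "M", "ethnicity": "hispanic"},
    {"gender": "F", "ethnicity": "white"},
    {"gender": "M", "ethnicity": "white"},
]

# Revealing gender alone leaves 2 candidates (1 bit of uncertainty) ...
print(inference_risk(team, {"gender": "F"}))                            # -> 1.0
# ... but gender + ethnicity together uniquely identify one member.
print(inference_risk(team, {"gender": "F", "ethnicity": "hispanic"}))   # -> 0.0
```

Low-entropy attribute combinations are exactly the "social inference" exposures the cited experiment found users are bad at predicting.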
1303.4155
2949303277
Online dating is an increasingly thriving business which boasts billion-dollar revenues and attracts users in the tens of millions. Notwithstanding its popularity, online dating is not impervious to worrisome trust and privacy concerns raised by the disclosure of potentially sensitive data as well as the exposure to self-reported (and thus potentially misrepresented) information. Nonetheless, little research has, thus far, focused on how to enhance privacy and trustworthiness. In this paper, we report on a series of semi-structured interviews involving 20 participants, and show that users are significantly concerned with the veracity of online dating profiles. To address some of these concerns, we present the user-centered design of an interface, called Certifeye, which aims to bootstrap trust in online dating profiles using existing social network data. Certifeye verifies that the information users report on their online dating profile (e.g., age, relationship status, and or photos) matches that displayed on their own Facebook profile. Finally, we present the results of a 161-user Mechanical Turk study assessing whether our veracity-enhancing interface successfully reduced concerns in online dating users and find a statistically significant trust increase.
Privacy, trust, and security issues are often associated with the collection, retention, and sharing of personal information. One reason privacy concerns are pervasive in OSNs is that security is not a primary task. As @cite_11 pointed out, users often view security as a barrier preventing them from accomplishing their goals. Furthermore, users may be unaware of the risks associated with sharing personal information. Data posted on social networks can be subject to subpoena or, even after years, can regrettably re-surface, e.g., during job hunting or an electoral campaign. Moreover, social networking data can be used for social engineering scams. For instance, @cite_5 showed that extremely effective phishing messages could be constructed by mining social networking profiles to personalize the messages.
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2100052779", "2016310229" ], "abstract": [ "This article explores how the efficiency of Internet search is changing the way Americans find romantic partners. We use a new data source, the How Couples Meet and Stay Together survey. Results show that for 60 years, family and grade school have been steadily declining in their influence over the dating market. In the past 15 years, the rise of the Internet has partly displaced not only family and school, but also neighborhood, friends, and the workplace as venues for meeting partners. The Internet increasingly allows Americans to meet and form relationships with perfect strangers, that is, people with whom they had no previous social tie. Individuals who face a thin market for potential partners, such as gays, lesbians, and middle-aged heterosexuals, are especially likely to meet partners online. One result of the increasing importance of the Internet in meeting partners is that adults with Internet access at home are substantially more likely to have partners, even after controlling for other factors....", "Ubiquitous and mobile technologies create new challenges for system security. Effective security solutions depend not only on the mathematical and technical properties of those solutions, but also on people’s ability to understand them and use them as part of their work. As a step towards solving this problem, we have been examining how people experience security as a facet of their daily life, and how they routinely answer the question, “is this system secure enough for what I want to do?” We present a number of findings concerning the scope of security, attitudes towards security, and the social and organizational contexts within which security concerns arise, and point towards emerging technical solutions." ] }
1303.4155
2949303277
Online dating is an increasingly thriving business which boasts billion-dollar revenues and attracts users in the tens of millions. Notwithstanding its popularity, online dating is not impervious to worrisome trust and privacy concerns raised by the disclosure of potentially sensitive data as well as the exposure to self-reported (and thus potentially misrepresented) information. Nonetheless, little research has, thus far, focused on how to enhance privacy and trustworthiness. In this paper, we report on a series of semi-structured interviews involving 20 participants, and show that users are significantly concerned with the veracity of online dating profiles. To address some of these concerns, we present the user-centered design of an interface, called Certifeye, which aims to bootstrap trust in online dating profiles using existing social network data. Certifeye verifies that the information users report on their online dating profile (e.g., age, relationship status, and or photos) matches that displayed on their own Facebook profile. Finally, we present the results of a 161-user Mechanical Turk study assessing whether our veracity-enhancing interface successfully reduced concerns in online dating users and find a statistically significant trust increase.
Motivated by the significance of the associated threats, a considerable amount of work has been dedicated to the user-centered design of privacy- and trust-enhancing OSNs. Privacy and trust are similar, but separate, concepts. Nissenbaum @cite_19 discussed the concept of ``contextual integrity'', pointing out that personal information is not simply private or public -- privacy depends on context.
{ "cite_N": [ "@cite_19" ], "mid": [ "2075476409" ], "abstract": [ "Philosophical and legal theories of privacy have long recognized the relationship between privacy and information about persons. They have, however, focused on personal, intimate, and sensitive information, assuming that with public information, and information drawn from public spheres, either privacy norms do not apply, or applying privacy norms is so burdensome as to be morally and legally unjustifiable. Against this preponderant view, I argue that information and communications technology, by facilitating surveillance, by vastly enhancing the collection, storage, and analysis of information, by enabling profiling, data mining and aggregation, has significantly altered the meaning of public information. As a result, a satisfactory legal and philosophical understanding of a right to privacy, capable of protecting the important values at stake in protecting privacy, must incorporate, in addition to traditional aspects of privacy, a degree of protection for privacy in public." ] }
1303.4155
2949303277
Online dating is an increasingly thriving business which boasts billion-dollar revenues and attracts users in the tens of millions. Notwithstanding its popularity, online dating is not impervious to worrisome trust and privacy concerns raised by the disclosure of potentially sensitive data as well as the exposure to self-reported (and thus potentially misrepresented) information. Nonetheless, little research has, thus far, focused on how to enhance privacy and trustworthiness. In this paper, we report on a series of semi-structured interviews involving 20 participants, and show that users are significantly concerned with the veracity of online dating profiles. To address some of these concerns, we present the user-centered design of an interface, called Certifeye, which aims to bootstrap trust in online dating profiles using existing social network data. Certifeye verifies that the information users report on their online dating profile (e.g., age, relationship status, and or photos) matches that displayed on their own Facebook profile. Finally, we present the results of a 161-user Mechanical Turk study assessing whether our veracity-enhancing interface successfully reduced concerns in online dating users and find a statistically significant trust increase.
It could be argued that acts of security theater are attempts to create trust. By operationalizing trust in this manner, we can see that it is important to increase trust in a system, and that failed attempts to increase trust can lead to user frustration. Another issue typical of OSNs is over-sharing. When social networks do not embed privacy into their designs, users tend to over-share and make dangerous errors. For instance, @cite_14 surveyed 569 Facebook users about posts they later regretted, finding that regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Gross and Acquisti @cite_20 crawled the profiles of Carnegie Mellon University's Facebook population in 2005 and found that 90.8% of profiles contained an image; they also found that most users had not changed their privacy settings from Facebook's defaults. Sharing this kind of information can be harmful, aiding an attacker in various re-identification attacks, such as guessing a user's Social Security number based on publicly available information @cite_0 . In summary, while prior work has focused on privacy and trust in OSNs, or analyzed misrepresentation in ODSs, our work is, to the best of our knowledge, the first to present a user-driven and user-centered design of an ODS interface that enhances trust by leveraging information that is already available in OSNs.
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_20" ], "mid": [ "2000830239", "1993722065", "2016563917" ], "abstract": [ "Information about an individual's place and date of birth can be exploited to predict his or her Social Security number (SSN). Using only publicly available information, we observed a correlation between individuals' SSNs and their birth data and found that for younger cohorts the correlation allows statistical inference of private SSNs. The inferences are made possible by the public availability of the Social Security Administration's Death Master File and the widespread accessibility of personal information from multiple sources, such as data brokers or profiles on social networking sites. Our results highlight the unexpected privacy consequences of the complex interactions among multiple data sources in modern information economies and quantify privacy risks associated with information revelation in public forums.", "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. 
We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.", "Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences." ] }
1303.3943
2950547906
This paper considers the problem of compressive sensing over a finite alphabet, where the finite alphabet may be inherent to the nature of the data or a result of quantization. There are multiple examples of finite alphabet based static as well as time-series data with inherent sparse structure; and quantizing real values is an essential step while handling real data in practice. We show that there are significant benefits to analyzing the problem while incorporating its finite alphabet nature, versus ignoring it and employing a conventional real alphabet based toolbox. Specifically, when the alphabet is finite, our techniques (a) have a lower sample complexity compared to real-valued compressive sensing for sparsity levels below a threshold; (b) facilitate constructive designs of sensing matrices based on coding-theoretic techniques; (c) enable one to solve the exact @math -minimization problem in polynomial time rather than a approach of convex relaxation followed by sufficient conditions for when the relaxation matches the original problem; and finally, (d) allow for smaller amount of data storage (in bits).
The fact that real-valued compressive sensing allows for recovery of sparse signals from linear measurements is reminiscent of error correction in linear channel codes and compression by lossless source codes over finite alphabets or fields @cite_11 @cite_28 . Such similarities have been identified in the existing literature to serve varied goals. For example, the use of bipartite expander graphs to design real-valued sensing matrices is investigated in @cite_12 . The connection between real-valued compressive sensing and linear channel codes is explored in @cite_20 , by viewing sparse signal compression as syndrome-based source coding over the real numbers and making use of linear codes over finite fields of large size. The design of real-valued sensing matrices based on LDPC codes is examined in @cite_27 and @cite_4 . The connection between sparse learning problems and coding theory is studied in @cite_0 . For real-valued compressive sensing of finite-alphabet signals, the sparse signal recovery approaches that have been examined include approximate message passing @cite_14 , sphere decoding, and semi-definite relaxation @cite_1 . However, an algebraic understanding of compressive sensing, particularly over finite fields, is still limited; providing one is the main contribution of this paper.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_28", "@cite_1", "@cite_0", "@cite_27", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2097947585", "2145403642", "2088587246", "", "2963664181", "", "2033375925", "2009101269", "628021441" ], "abstract": [ "In this paper we consider Basis Pursuit De-Noising (BPDN) problems in which the sparse original signal is drawn from a finite alphabet. To solve this problem we propose an iterative message passing algorithm, which capitalises not only on the sparsity but by means of a prior distribution also on the discrete nature of the original signal. In our numerical experiments we test this algorithm in combination with a Rademacher measurement matrix and a measurement matrix derived from the random demodulator, which enables compressive sampling of analogue signals. Our results show in both cases significant performance gains over a linear programming based approach to the considered BPDN problem. We also compare the proposed algorithm to a similar message passing based algorithm without prior knowledge and observe an even larger performance improvement.", "This is a tale of two linear programming decoders, namely channel coding linear programming decoding (CC-LPD) and compressed sensing linear programming decoding (CS-LPD). So far, they have evolved quite independently. The aim of the present paper is to show that there is a tight connection between, on the one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on the other hand, CC-LPD of the binary linear code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows one to translate performance guarantees from one setup to the other.", "For Slepian-Wolf source networks, the error exponents obtained by Korner,Marton, and the author are shown to be universally attainable by linear codes also. 
Improved exponents are derived for linear codes with \"large rates.\" Specializing the results to simple discrete memoryless sources reveals their relationship to the random coding and expurgated bounds for channels with additive noise. One corollary is that there are universal linear codes for this class of channels which attain the random coding error exponent for each channel in the class. The combinatorial approach of Csiszar-Korner-Marton is used. In particular, all results are derived from a lemma specifying good encoders in terms of purely combinatorial properties.", "", "We review connections between coding-theoretic objects and sparse learning problems. In particular, we show how seemingly different combinatorial objects such as error-correcting codes, combinatorial designs, spherical codes, compressed sensing matrices and group testing designs can be obtained from one another. The reductions enable one to translate upper and lower bounds on the parameters attainable by one object to another. We survey some of the well-known reductions in a unified presentation, and bring some existing gaps to attention. New reductions are also introduced; in particular, we bring up the notion of minimum L-wise distance of codes and show that this notion closely captures the combinatorial structure of RIP-2 matrices. Moreover, we show how this weaker variation of the minimum distance is related to combinatorial list-decoding properties of codes.", "", "Compressive sensing is an emerging technology which can recover a sparse signal vector of dimension n via a much smaller number of measurements than n. However, the existing compressive sensing methods may still suffer from relatively high recovery complexity, such as O(n3), or can only work efficiently when the signal is super sparse, sometimes without deterministic performance guarantees. 
In this paper, we propose a compressive sensing scheme with deterministic performance guarantees using expander-graphs-based measurement matrices and show that the signal recovery can be achieved with complexity O(n) even if the number of nonzero elements k grows linearly with n. We also investigate compressive sensing for approximately sparse signals using this new method. Moreover, explicit constructions of the considered expander graphs exist. Simulation results are given to show the performance and complexity of the new method.", "Compressed sensing (CS) is a relatively new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (e.g., dot product) measurements. In this paper, we analyze CS using tools from coding theory because CS can also be viewed as syndrome-based source coding of sparse vectors using linear codes over real numbers. While coding theory does not typically deal with codes over real numbers, there is actually a very close relationship between CS and error-correcting codes over large discrete alphabets. This connection leads naturally to new reconstruction methods and analysis. In some cases, the resulting methods provably require many fewer measurements than previous approaches.", "Preface 1. Coding and capacity 2. Finite fields, vector spaces, finite geometries and graphs 3. Linear block codes 4. Convolutional codes 5. Low-density parity-check codes 6. Computer-based design of LDPC codes 7. Turbo codes 8. Ensemble enumerators for turbo and LDPC codes 9. Ensemble decoding thresholds for LDPC and turbo codes 10. Finite geometry LDPC codes 11. Constructions of LDPC codes 12. LDPC codes based on combinatorial designs, graphs, and superposition 13. LDPC codes for binary erasure channels 14. Non-binary LDPC codes 15. LDPC code applications and advanced topics Index." ] }
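The coding-theoretic flavor of finite-field compressive sensing can be made concrete with a toy sketch (not one of the paper's constructions): using the parity-check matrix of the binary (7,4) Hamming code as the sensing matrix, a 1-sparse vector over GF(2) is recovered exactly from only 3 syndrome measurements by exhaustive l0 search. More generally, a code of minimum distance d supports unique recovery of sparsity up to floor((d-1)/2).

```python
import itertools
import numpy as np

# Parity-check matrix of the binary (7,4) Hamming code: column i is the
# 3-bit binary expansion of i+1, so all columns are distinct and nonzero,
# which guarantees unique syndromes for all 1-sparse signals.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def measure(H, x):
    """Linear measurements over GF(2): y = H x (mod 2)."""
    return (H @ x) % 2

def recover_l0(H, y, k):
    """Exact l0-minimization over GF(2): search supports of weight <= k.

    Brute force, fine for toy sizes; the point is that over a finite field
    exact sparse recovery is a well-posed combinatorial problem, with no
    convex relaxation involved.
    """
    n = H.shape[1]
    for weight in range(k + 1):
        for support in itertools.combinations(range(n), weight):
            x = np.zeros(n, dtype=int)
            x[list(support)] = 1  # over GF(2) the nonzero value is always 1
            if np.array_equal(measure(H, x), y):
                return x
    return None

x_true = np.zeros(7, dtype=int)
x_true[4] = 1                          # a 1-sparse binary signal of length 7
y = measure(H, x_true)                 # only 3 measurements
x_hat = recover_l0(H, y, k=1)
print(np.array_equal(x_hat, x_true))   # -> True
```

Over a larger field GF(q) the search would also enumerate nonzero values per support position; the GF(2) case keeps the sketch short.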
1303.4293
2148695050
We describe a semantic wiki system with an underlying controlled natural language grammar implemented in Grammatical Framework (GF). The grammar restricts the wiki content to a well-defined subset of Attempto Controlled English (ACE), and facilitates a precise bidirectional automatic translation between ACE and language fragments of a number of other natural languages, making the wiki content accessible multilingually. Additionally, our approach allows for automatic translation into the Web Ontology Language (OWL), which enables automatic reasoning over the wiki content. The developed wiki environment thus allows users to build, query and view OWL knowledge bases via a user-friendly multilingual natural language interface. As a further feature, the underlying multilingual grammar is integrated into the wiki and can be collaboratively edited to extend the vocabulary of the wiki or even customize its sentence structures. This work demonstrates the combination of the existing technologies of Attempto Controlled English and Grammatical Framework, and is implemented as an extension of the existing semantic wiki engine AceWiki.
Research on GF has not yet focused on a wiki-like tool built on top of a GF-based grammar or application. Tool support exists mostly for users constructing single sentences (not texts) and working alone (not in collaboration). A notable exception is @cite_7 , which investigates using GF in a multilingual wiki context to write restaurant reviews at the abstract, language-independent level by constructing GF abstract trees.
{ "cite_N": [ "@cite_7" ], "mid": [ "2120553454" ], "abstract": [ "We present an approach to multilingual web content based on multilingual grammars and syntax editing for a controlled language. Content can be edited in any supported language and it is automatically kept within a controlled language fragment. We have implemented a web-based syntax editor for Grammatical Framework (GF) grammars which allows both direct abstract syntax tree manipulation and text input in any of the languages supported by the grammar. With this syntax editor and the GF JavaScript API, GF grammars can be used to build multilingual web applications. As a demonstration, we have implemented an example application in which users can add, edit and review restaurants in English, Spanish and Swedish." ] }
1303.4293
2148695050
We describe a semantic wiki system with an underlying controlled natural language grammar implemented in Grammatical Framework (GF). The grammar restricts the wiki content to a well-defined subset of Attempto Controlled English (ACE), and facilitates a precise bidirectional automatic translation between ACE and language fragments of a number of other natural languages, making the wiki content accessible multilingually. Additionally, our approach allows for automatic translation into the Web Ontology Language (OWL), which enables automatic reasoning over the wiki content. The developed wiki environment thus allows users to build, query and view OWL knowledge bases via a user-friendly multilingual natural language interface. As a further feature, the underlying multilingual grammar is integrated into the wiki and can be collaboratively edited to extend the vocabulary of the wiki or even customize its sentence structures. This work demonstrates the combination of the existing technologies of Attempto Controlled English and Grammatical Framework, and is implemented as an extension of the existing semantic wiki engine AceWiki.
Ontology languages (such as RDF, OWL and SKOS) typically support language-specific labels as attachments to ontological entities (such as classes and properties). Although the ontological axioms can thus be presented multilingually, their keywords (e.g. SubClassOf , some , only ) are still in English and their syntactic structure is not customizable. This is clearly insufficient for true ontology verbalization, especially for expressive ontology languages like OWL, as argued in @cite_19 , which describes a sophisticated lexical annotation ontology to be attached to the domain ontology as linguistic knowledge. Our work can also be seen as attaching (multilingual) linguistic knowledge to a semantic web ontology. @cite_29 discusses a multilingual CNL-based verbalization of business rules. It is similar to our approach in being implemented in GF, but differs in not using OWL as the ontology language.
{ "cite_N": [ "@cite_19", "@cite_29" ], "mid": [ "2069206951", "45390577" ], "abstract": [ "Abstract: In this paper we motivate why it is crucial to associate linguistic information with ontologies and why more expressive models, beyond the label systems implemented in RDF, OWL and SKOS, are needed to capture the relation between natural language constructs and ontological structures. We argue that in the light of tasks such as ontology-based information extraction (i.e., ontology population) from text, ontology learning from text, knowledge-based question answering and ontology verbalization, currently available models do not suffice as they only allow us to associate literals as labels to ontology elements. Using literals as labels, however, does not allow us to capture additional linguistic structure or information which is definitely needed as we argue. In this paper we thus present a model for linguistic grounding of ontologies called LexInfo. LexInfo allows us to associate linguistic information to elements in an ontology with respect to any level of linguistic description and expressivity. LexInfo has been implemented as an OWL ontology and is freely available together with an API. Our main contribution is the model itself, but even more importantly a clear motivation why more elaborate models for associating linguistic information with ontologies are needed. We also further discuss the implementation of the LexInfo API, different tools that support the creation of LexInfo lexicons as well as some preliminary applications.", "This paper presents an approach to multilingual ontology verbalisation of controlled language based on the Grammatical Framework (GF) and the lemon model. It addresses specific challenges that arise when classes are used to create a consensus-based conceptual framework, in which many parties individually contribute instances. 
The approach is presented alongside a concrete case, in which ontologies are used to capture business processes by linguistically untrained stakeholders across business disciplines. GF is used to create multilingual grammars that enable transparent multilingual verbalisation. Capturing the instance labels in lemon lexicons reduces the need for GF engineering to the class level: The lemon lexicons with the labels of the instances are converted into GF grammars based on a mapping described in this paper. The grammars are modularised in accordance with the ontology modularisation and can deal with the different styles of label choosing that occur in practice." ] }
1303.2553
2949790443
Network coding is an elegant technique where, instead of simply relaying the packets of information they receive, the nodes of a network are allowed to combine packets together for transmission; this technique can be used to achieve the maximum possible information flow in a network and reduce the number of packet transmissions needed. Moreover, in an energy-constrained wireless network such as a Wireless Sensor Network (a typical type of wireless ad hoc network), applying network coding to reduce the number of wireless transmissions can also prolong the lifetime of sensor nodes. Although applying network coding in a wireless sensor network is obviously beneficial, because each transmitted packet is actually a combination of multiple other packets, error propagation may occur in the network. This special characteristic also exposes network coding systems to a wide range of error attacks, especially Byzantine attacks. When some adversary nodes generate erroneous data in a network with network coding, that erroneous information will be mixed at intermediate nodes and thus corrupt all the information reaching a destination. Recent research efforts have shown that network coding can be combined with classical error control codes and cryptography for secure communication or misbehavior detection. Nevertheless, when it comes to Byzantine attacks, these results have limited effect. In fact, unless we find those adversary nodes and isolate them, network coding may perform much worse than pure routing in the presence of malicious nodes. In this paper, a distributed hierarchical algorithm based on random linear network coding is developed to detect, locate and isolate malicious nodes.
Misbehavior detection applies error control techniques or information-theoretic and cryptographic frameworks to detect the modifications introduced by Byzantine attackers. By the type of nodes that carry the coding burden, misbehavior detection can be further divided into generation-based detection and packet-based detection. Generation-based detection takes advantage of ideas similar to error-correcting codes and places expensive computation on destination nodes: as long as enough information is retrieved by the destinations, modifications can be detected. @cite_9 proposes an information-theoretic approach for detecting Byzantine modification in networks employing RLNC. Each exogenous source packet is augmented with a flexible number of hash symbols that are obtained as a polynomial function of the data symbols. This approach depends only on the adversary not knowing the random coefficients of all other packets received by the sink nodes when designing its adversarial packets. The hash scheme can be used without the need for secret key distribution, but the use of a block code forces an a priori decision on the coding rate. Moreover, the main disadvantage of generation-based detection schemes is that only nodes with enough packets from a generation are able to detect modifications, resulting in large end-to-end delays.
{ "cite_N": [ "@cite_9" ], "mid": [ "2563871604" ], "abstract": [ "In this paper, it is shown that distributed randomized network coding, a robust approach to multicasting in distributed network settings, can be extended to provide Byzantine modification detection without the use of cryptographic functions." ] }
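The mixing that makes RLNC both powerful and vulnerable can be sketched in a few lines. The following toy (our own illustration, not code from the cited works) works over GF(2), where combining is plain XOR, and uses only two source packets so that distinct nonzero coefficient vectors are automatically independent; the last two lines show that corrupting a single coded packet corrupts the decoded output, i.e. the error propagation described above:

```python
import random

def gf2_combine(packets, coeffs):
    """XOR together the source packets selected by a binary coefficient vector."""
    out = [0] * len(packets[0])
    for c, p in zip(coeffs, packets):
        if c:
            out = [a ^ b for a, b in zip(out, p)]
    return out

def gf2_solve(coeff_rows, payload_rows, k):
    """Recover k source packets by Gaussian elimination over GF(2)."""
    rows = [(r[:], p[:]) for r, p in zip(coeff_rows, payload_rows)]
    for col in range(k):
        piv = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           [a ^ b for a, b in zip(rows[i][1], rows[col][1])])
    return [rows[i][1] for i in range(k)]

random.seed(1)
src = [[1, 2, 3], [4, 5, 6]]          # two source packets (byte lists)
coded = []                            # a relay emits random GF(2) combinations
while len(coded) < 2:
    c = [random.randint(0, 1) for _ in src]
    if any(c) and c not in [x[0] for x in coded]:  # distinct nonzero => independent (k=2 only)
        coded.append((c, gf2_combine(src, c)))

# with independent combinations, the sink recovers the sources exactly
assert gf2_solve([c for c, _ in coded], [p for _, p in coded], 2) == src

# a single Byzantine-corrupted coded packet poisons the whole linear system,
# so the recovered data no longer matches the sources:
coded[0] = (coded[0][0], [b ^ 1 for b in coded[0][1]])
assert gf2_solve([c for c, _ in coded], [p for _, p in coded], 2) != src
```

Over larger fields or more sources one would need a proper rank check for independence; the hash-symbol scheme of @cite_9 is what lets a sink detect that the second decoding is corrupted.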
1303.2643
1595936045
In this paper, we propose a path following replicator dynamic, and investigate its potential in uncovering the underlying cluster structure of a graph. The proposed dynamic is a generalization of the discrete replicator dynamic. The replicator dynamic has been successfully used to extract dense clusters of graphs; however, it is often sensitive to the degree distribution of a graph, and usually biased by vertices with large degrees, and thus may fail to detect the densest cluster. To overcome this problem, we introduce a dynamic parameter, called the path parameter, into the evolution process. The path parameter can be interpreted as the maximal possible probability of the current cluster containing a vertex, and it monotonically increases as the evolution process proceeds. By limiting the maximal probability, the phenomenon of some vertices dominating the early stage of the evolution process is suppressed, thus making the evolution process more robust. To solve the optimization problem with a fixed path parameter, we propose an efficient fixed point algorithm. The time complexity of the path following replicator dynamic is only linear in the number of edges of a graph, so it can analyze graphs with millions of vertices and tens of millions of edges on a common PC in a few minutes. Besides, it can be naturally generalized to hypergraphs and graphs with edges of different orders. We apply it to four important problems: the maximum clique problem, the densest k-subgraph problem, structure fitting, and discovery of high-density regions. The extensive experimental results clearly demonstrate its advantages in terms of robustness, scalability and flexibility.
Cluster analysis is a basic problem in various disciplines @cite_2 , such as pattern recognition, data mining and computer vision, and a huge number of such methods have been proposed. It is beyond the scope of this paper to list all of them; we therefore focus on methods closely related to ours.
{ "cite_N": [ "@cite_2" ], "mid": [ "2153233077" ], "abstract": [ "Data analysis plays an indispensable role for understanding various phenomena. Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. The diversity, on one hand, equips us with many tools. On the other hand, the profusion of options causes confusion. We survey clustering algorithms for data sets appearing in statistics, computer science, and machine learning, and illustrate their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts. Several tightly related topics, proximity measure, and cluster validation, are also discussed." ] }
1303.2643
1595936045
In this paper, we propose a path following replicator dynamic, and investigate its potential in uncovering the underlying cluster structure of a graph. The proposed dynamic is a generalization of the discrete replicator dynamic. The replicator dynamic has been successfully used to extract dense clusters of graphs; however, it is often sensitive to the degree distribution of a graph, and usually biased by vertices with large degrees, and thus may fail to detect the densest cluster. To overcome this problem, we introduce a dynamic parameter, called the path parameter, into the evolution process. The path parameter can be interpreted as the maximal possible probability of the current cluster containing a vertex, and it monotonically increases as the evolution process proceeds. By limiting the maximal probability, the phenomenon of some vertices dominating the early stage of the evolution process is suppressed, thus making the evolution process more robust. To solve the optimization problem with a fixed path parameter, we propose an efficient fixed point algorithm. The time complexity of the path following replicator dynamic is only linear in the number of edges of a graph, so it can analyze graphs with millions of vertices and tens of millions of edges on a common PC in a few minutes. Besides, it can be naturally generalized to hypergraphs and graphs with edges of different orders. We apply it to four important problems: the maximum clique problem, the densest k-subgraph problem, structure fitting, and discovery of high-density regions. The extensive experimental results clearly demonstrate its advantages in terms of robustness, scalability and flexibility.
Since dense subgraphs correspond to high-density regions in the data, the evolution process of the path following replicator dynamic can be considered as a shrinking process of high-density regions. Estimating high-density regions from data samples is a fundamental problem in a number of works, such as outlier detection and cluster analysis @cite_30 @cite_34 . The advantage of our method for this task is that it can gradually reveal the landscape of multiple high-density regions of various shapes at different scales. High-density regions usually represent modes of the data, and in this sense, our method is also closely related to mode-finding methods, such as mean shift @cite_27 .
{ "cite_N": [ "@cite_30", "@cite_27", "@cite_34" ], "mid": [ "2132870739", "2067191022", "2151996692" ], "abstract": [ "Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.", "A general non-parametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure: the mean shift. For discrete data, we prove the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and, thus, its utility in detecting the modes of the density. The relation of the mean shift procedure to the Nadaraya-Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity-preserving smoothing and image segmentation, are described as applications.
In these algorithms, the only user-set parameter is the resolution of the analysis, and either gray-level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.", "In this paper, we investigate the problem of estimating high-density regions from univariate or multivariate data samples. We estimate minimum volume sets, whose probability is specified in advance, known in the literature as density contour clusters. This problem is strongly related to one-class support vector machines (OCSVM). We propose a new method to solve this problem, the one-class neighbor machine (OCNM) and we show its properties. In particular, the OCNM solution asymptotically converges to the exact minimum volume set prespecified. Finally, numerical results illustrating the advantage of the new method are shown." ] }
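For reference, the baseline that the record above generalizes, the plain discrete replicator dynamic, can be run on a toy graph in a few lines (a sketch only; the path parameter and the fixed point algorithm of the paper are not implemented, and the example graph is our own):

```python
import numpy as np

def replicator(A, x, iters=200):
    """Discrete replicator dynamic: x_i <- x_i * (A x)_i / (x^T A x)."""
    for _ in range(iters):
        payoff = A @ x
        x = x * payoff / (x @ payoff)
    return x

# a triangle {0, 1, 2} (the max clique / densest cluster) plus a pendant vertex 3
A = np.zeros((4, 4))
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3)]:
    A[i, j] = A[j, i] = 1.0

x = replicator(A, np.full(4, 0.25))        # start at the simplex barycenter
assert {i for i in range(4) if x[i] > 1e-6} == {0, 1, 2}  # support = densest cluster
assert abs(x[0] - 1 / 3) < 1e-6            # uniform weights on the clique
```

By the Motzkin-Straus connection between maximizers of x^T A x over the simplex and maximum cliques, the support of the limit identifies the densest cluster, while the pendant vertex is driven out of the support.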
1303.2130
2952041493
Multitask clustering tries to improve the clustering performance of multiple tasks simultaneously by taking their relationship into account. Most existing multitask clustering algorithms fall into the category of generative clustering, and none are formulated as convex optimization problems. In this paper, we propose two convex Discriminative Multitask Clustering (DMTC) algorithms to address these problems. Specifically, we first propose a Bayesian DMTC framework. Then, we propose two convex DMTC objectives within the framework. The first one, which can be seen as a technical combination of the convex multitask feature learning and the convex Multiclass Maximum Margin Clustering (M3C), aims to learn a shared feature representation. The second one, which can be seen as a combination of the convex multitask relationship learning and M3C, aims to learn the task relationship. The two objectives are solved in a uniform procedure by the efficient cutting-plane algorithm. Experimental results on a toy problem and two benchmark datasets demonstrate the effectiveness of the proposed algorithms.
In @cite_12 , Argyriou proposed to minimize the empirical risk of all tasks with a Frobenius norm penalty on the differences of the task-specific models, which is a non-convex optimization problem. They then proved that this problem is equivalent to a convex optimization problem -- Multitask Feature Learning (MTFL). In @cite_45 (a Best Paper Award recipient), Zhang and Yeung first tried to learn the task covariance matrix of the multivariate Gaussian prior in the regularization framework. Because the concave function with respect to the covariance matrix variable makes the objective non-convex, they further replaced the concave function by two convex constraints, which results in a convex optimization problem, named MTRL.
{ "cite_N": [ "@cite_45", "@cite_12" ], "mid": [ "1648933886", "2165644552" ], "abstract": [ "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.", "We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common for all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations.
We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select – not learn – a few common features across the tasks." ] }
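The coupling effect of a Frobenius norm penalty on the differences of task-specific models, the starting point attributed to @cite_12 above, can be illustrated with gradient descent on a toy problem (our own construction: two synthetic linear regression tasks with squared loss; note that this simple instance is itself convex, so it only illustrates the penalty, not the feature-learning reformulation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 50
w_shared = rng.normal(size=d)
# two related tasks: small task-specific perturbations of one shared model
tasks = []
for _ in range(2):
    X = rng.normal(size=(n, d))
    y = X @ (w_shared + 0.1 * rng.normal(size=d)) + 0.01 * rng.normal(size=n)
    tasks.append((X, y))

lam = 5.0                    # strength of the pairwise Frobenius penalty
W = np.zeros((2, d))         # one weight vector per task
for _ in range(2000):        # gradient descent on sum_t MSE_t + lam * ||w_0 - w_1||^2
    grad = np.zeros_like(W)
    for t, (X, y) in enumerate(tasks):
        grad[t] = 2 * X.T @ (X @ W[t] - y) / n
    grad[0] += 2 * lam * (W[0] - W[1])
    grad[1] += 2 * lam * (W[1] - W[0])
    W -= 0.01 * grad

assert np.linalg.norm(W[0] - W[1]) < 0.15     # the penalty pulls the task models together
assert np.linalg.norm(W[0] - w_shared) < 0.5  # both stay close to the shared model
```

Setting lam to zero recovers independent per-task fits; increasing it interpolates toward a single shared model, which is the intuition behind the convex reformulations discussed above.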
1303.1952
2950437846
Delay-Tolerant Networks (DTNs) have emerged as an exciting research area with a number of useful applications. Most of these applications would benefit greatly by a reduction in the message delivery delay experienced in the network. The delay performance of DTNs is adversely affected by contention, especially severe in the presence of higher traffic rates and node densities. Many-to-Many (M2M) communication can handle this contention much better than traditional one-to-one communication employing CSMA. In this paper, for the first time, we analytically model the expected delivery delay of a DTN employing epidemic routing and M2M communication. The accuracy of our model is demonstrated by matching the analytical results against those from simulations. We also show using simulations that M2M communication significantly improves the delay performance (with respect to one-to-one CSMA) for high contention scenarios. We believe our work will enable the effective application of M2M communication to reduce delivery delays in DTNs.
Initial analytical models developed for DTN routing performance study @cite_4 , @cite_9 , @cite_12 worked under the assumption that whenever two nodes are in contact with each other, all messages could always be successfully transferred from one node to the other (i.e., they assumed both buffer capacity and bandwidth to be infinite). While papers such as @cite_5 have modeled DTN performance with bounded buffer capacity, they have assumed that infinite bandwidth is available and hence that there is no contention. The motivation for not considering contention has been that DTNs are sparse networks and such sparsity yields negligible contention. However, this conjecture has been disproved with the help of simulations in works such as @cite_9 , @cite_12 . The authors in @cite_10 show via simulations that, irrespective of whether the network is sparse or dense, the contention is substantial for high traffic rates; and also that the contention increases with an increase in network density. Realizing the importance of contention in the routing performance, the authors of @cite_10 , @cite_1 have attempted to include contention in the analysis of routing. Their analysis assumes the one-to-one CSMA communication scheme.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2155059902", "2125957038", "2112626645", "2109528718", "2121368972", "2129849999" ], "abstract": [ "This paper considers (p, q)-Epidemic Routing, a class of store-carry-forward routing schemes, for sparsely populated mobile ad hoc networks. Our forwarding scheme includes Two-Hop Forwarding and the conventional Epidemic Routing as special cases. In such forwarding schemes, the original packet is copied many times and its packet copies spread over the network. Therefore those packet copies should be deleted after a packet reaches the destination. We analyze the performance of (p, q)-Epidemic Routing with the VACCINE recovery scheme. Unlike most of the existing studies, we discuss the performance of (p, q)-Epidemic Routing in depth, taking account of the recovery process that deletes unnecessary packets from the network.", "Intermittently connected mobile networks are sparse wireless networks where most of the time there does not exist a complete path from the source to the destination. These networks fall into the general category of Delay Tolerant Networks. There are many real networks that follow this paradigm, for example, wildlife tracking sensor networks, military networks, inter-planetary networks, etc. In this context, conventional routing schemes would fail. To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention, which can significantly degrade their performance. Furthermore, proposed efforts to significantly reduce the overhead of flooding-based schemes have often been plagued by large delays.
With this in mind, we introduce a new routing scheme, called Spray and Wait, that \"sprays\" a number of copies into the network, and then \"waits\" till one of these nodes meets the destination. Using theory and simulations we show that Spray and Wait outperforms all existing schemes with respect to both average message delivery delay and number of transmissions per message delivered; its overall performance is close to the optimal scheme. Furthermore, it is highly scalable, retaining good performance under a large range of scenarios, unlike other schemes. Finally, it is simple to implement and to optimize in order to achieve given performance goals in practice.", "Epidemic routing has been proposed as a robust transmission scheme for sparse mobile ad hoc networks. Under the assumption of no contention, epidemic routing has the minimum end-to-end delay amongst all the routing schemes proposed for such networks. The assumption of no contention was justified by arguing that since the network is sparse, there will be very few simultaneous transmissions. Some recent papers have shown through simulations that this argument is not correct and that contention cannot be ignored while analyzing the performance of routing schemes, even in sparse networks. Incorporating contention in the analysis has always been a hard problem and hence its effect has been studied mostly through simulations only. In this paper, we find analytical expressions for the delay performance of epidemic routing with contention. We include all the three main manifestations of contention, namely (i) the finite bandwidth of the link which limits the number of packets two nodes can exchange, (ii) the scheduling of transmissions between nearby nodes which is needed to avoid excessive interference, and (iii) the interference from transmissions outside the scheduling area.
The accuracy of the analysis is verified via simulations.", "In this paper, we develop a rigorous, unified framework based on ordinary differential equations (ODEs) to study epidemic routing and its variations. These ODEs can be derived as limits of Markovian models under a natural scaling as the number of nodes increases. While an analytical study of Markovian models is quite complex and numerical solution impractical for large networks, the corresponding ODE models yield closed-form expressions for several performance metrics of interest, and a numerical solution complexity that does not increase with the number of nodes. Using this ODE approach, we investigate how resources such as buffer space and the number of copies made for a packet can be traded for faster delivery, illustrating the differences among various forwarding and recovery schemes considered. We perform model validations through simulation studies. Finally we consider the effect of buffer management by complementing the forwarding models with Markovian and fluid buffer models.", "A large body of work has theoretically analyzed the performance of mobility-assisted routing schemes for intermittently connected mobile networks. But the vast majority of these prior studies have ignored wireless contention. Recent papers have shown through simulations that ignoring contention leads to inaccurate and misleading results, even for sparse networks. In this paper, we analyze the performance of routing schemes under contention. First, we introduce a mathematical framework to model contention. This framework can be used to analyze any routing scheme with any mobility and channel model. Then, we use this framework to compute the expected delays for different representative mobility-assisted routing schemes under random direction, random waypoint and community-based mobility models. 
Finally, we use these delay expressions to optimize the design of routing schemes while demonstrating that designing and optimizing routing schemes using analytical expressions which ignore contention can lead to suboptimal or even erroneous behavior.", "Intermittently connected mobile networks are wireless networks where most of the time there does not exist a complete path from the source to the destination. There are many real networks that follow this model, for example, wildlife tracking sensor networks, military networks, vehicular ad hoc networks, etc. In this context, conventional routing schemes fail, because they try to establish complete end-to-end paths, before any data is sent. To deal with such networks researchers have suggested to use flooding-based routing schemes. While flooding-based schemes have a high probability of delivery, they waste a lot of energy and suffer from severe contention which can significantly degrade their performance. Furthermore, proposed efforts to reduce the overhead of flooding-based schemes have often been plagued by large delays. With this in mind, we introduce a new family of routing schemes that \"spray\" a few message copies into the network, and then route each copy independently towards the destination. We show that, if carefully designed, spray routing not only performs significantly fewer transmissions per message, but also has lower average delivery delays than existing schemes; furthermore, it is highly scalable and retains good performance under a large range of scenarios. Finally, we use our theoretical framework proposed in our 2004 paper to analyze the performance of spray routing. We also use this theory to show how to choose the number of copies to be sprayed and how to optimally distribute these copies to relays." ] }
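The no-contention baseline that the passage above says was later refined can be computed exactly from the standard Markov chain of epidemic routing (a sketch under the usual assumptions: N nodes, homogeneous pairwise exponential inter-contact times with rate beta, unlimited bandwidth and buffers, a single source-destination pair):

```python
def epidemic_delay(N, beta):
    """Expected delivery delay of epidemic routing without contention.
    State i = number of infected (message-carrying) nodes; infected-uninfected
    meetings occur at total rate beta * i * (N - i), and the met uninfected
    node is the destination with probability 1 / (N - i)."""
    E = 0.0
    for i in range(N - 1, 0, -1):
        rate = beta * i * (N - i)
        miss = (N - i - 1) / (N - i)   # met node was a relay, not the destination
        E = 1.0 / rate + miss * E
    return E

assert abs(epidemic_delay(2, 1.0) - 1.0) < 1e-12          # direct source-destination meeting
assert epidemic_delay(20, 1.0) < epidemic_delay(10, 1.0)  # more relays, faster delivery
assert abs(epidemic_delay(10, 2.0) - epidemic_delay(10, 1.0) / 2) < 1e-12  # delay ~ 1/beta
```

The contention-aware analyses discussed above no longer admit such a simple recursion, since the effective exchange rates then depend on traffic load and local node density.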
1303.1700
1580046135
Introduction. Case Based Reasoning (CBR) is an emerging decision making paradigm in medical research where new cases are solved relying on previously solved similar cases. Usually, a database of solved cases is provided, and every case is described through a set of attributes (inputs) and a label (output). Extracting useful information from this database can help the CBR system provide more reliable results on the yet to be solved cases. Objective. For that purpose we suggest a general framework where a CBR system, viz. the K-Nearest Neighbor (K-NN) algorithm, is combined with various information obtained from a Logistic Regression (LR) model. Methods. LR is applied, on the case database, to assign weights to the attributes as well as the solved cases. Thus, five possible decision making systems based on K-NN and/or LR were identified: a standalone K-NN, a standalone LR and three soft K-NN algorithms that rely on the weights based on the results of the LR. The evaluation of the described approaches is performed in the field of access to the renal transplant waiting list. Results and conclusion. The results show that our suggested approach, where the K-NN algorithm relies on both weighted attributes and cases, can efficiently deal with non-relevant attributes, whereas the four other approaches suffer from this kind of noisy setup. The robustness of this approach suggests interesting perspectives for medical problem solving tools using CBR methodology.
As for Chuang's paper, the author points out classification improvements relying on a hybrid CBR approach compared to a standalone CBR. Huang's publication also compares several kinds of hybrid approaches: a neural network with or without fuzzy logic and two hybrid CBR systems, one combining CBR with a decision tree and one combining CBR with LM. The neural networks show superior performance, but the authors emphasized the rapidity of case retrieval and the more easily interpretable results of the CBR methodology. In the present study, the hybrid CBR approaches did not show significant improvements for patient classification, compared to the standalone CBR approach. However, the hybrid CBR system combining both attribute weighting and case weighting seems to be very robust to artifacts in the database that might occur in realistic scenarios. From our point of view, this interesting observation provides new perspectives for future CBR systems, particularly for integrating CBR systems into large and unspecific knowledge databases such as electronic health records @cite_8 @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2012244825", "1965889239" ], "abstract": [ "Electronic patient records (EPRs) contain a wealth of patient-related data and capture clinical problem-solving experiences and decisions. Excelicare is such a system which is also a platform for the national generic clinical system in the UK. Objective: This paper presents, ExcelicareCBR, a case-based reasoning (CBR) system which has been developed to complement Excelicare. Objective of this work is to integrate CBR to support clinical decision making by harnessing electronic patient records for clinical experience reuse. Methods: CBR is a proven problem solving methodology in which past solutions are reused to solve new problems. A key challenge that we address in this paper is how to extract and represent a case from an EPR. Using an example from the lung cancer domain we demonstrate our generic case representation approach where Excelicare fields are mapped to case features. Once the case base is populated with cases containing data from the EPRs database a standard weighted k-nearest neighbour algorithm combined with a genetic algorithm based feature weighting mechanism is used for case retrieval and reuse. Conclusions: We conclude that incorporating case authoring functionality and a generic retrieval mechanism were key to successful integration of ExcelicareCBR. This paper also demonstrates how the application of CBR can enable sharing of lessons learned through the retrieval and reuse of EPRs captured as cases in a case base.", "Objectives: This paper presents current work in case-based reasoning (CBR) in the health sciences, describes current trends and issues, and projects future directions for work in this field. Methods and material: It represents the contributions of researchers at two workshops on case-based reasoning in the health sciences. 
These workshops were held at the Fifth International Conference on Case-Based Reasoning (ICCBR-03) and the Seventh European Conference on Case-Based Reasoning (ECCBR-04). Results: Current research in CBR in the health sciences is marked by its richness. Highlighted trends include work in bioinformatics, support to the elderly and people with disabilities, formalization of CBR in biomedicine, and feature and case mining. Conclusion: CBR systems are being better designed to account for the complexity of biomedicine, to integrate into clinical settings and to communicate and interact with diverse systems and methods." ] }
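The hybrid described above, a K-NN whose metric is shaped by LR, can be sketched on synthetic data (an illustration only: the paper's exact attribute- and case-weighting formulas are not reproduced here; we simply use the absolute LR coefficients as attribute weights and omit case weighting):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 4
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # attributes 2 and 3 are pure noise

# logistic regression fitted by gradient descent; |w_j| serve as attribute weights
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def weighted_knn(query, k=5):
    """Majority vote among the k nearest cases under the LR-weighted metric."""
    dist2 = ((X - query) ** 2 * np.abs(w)).sum(axis=1)
    nearest = np.argsort(dist2)[:k]
    return float(y[nearest].mean() > 0.5)

# informative attributes receive larger weights than the noise attributes,
# so the noisy dimensions barely influence the K-NN distance
assert abs(w[0]) > abs(w[2]) and abs(w[0]) > abs(w[3])
acc = np.mean([weighted_knn(X[i]) == y[i] for i in range(n)])
assert acc > 0.8
```

This down-weighting of irrelevant attributes is exactly the robustness property the related-work passage reports for the combined system.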
1303.1749
180788212
Energies with high-order non-submodular interactions have been shown to be very useful in vision due to their high modeling power. Optimization of such energies, however, is generally NP-hard. A naive approach that works for small problem instances is exhaustive search, that is, enumeration of all possible labelings of the underlying graph. We propose a general minimization approach for large graphs based on enumeration of labelings of certain small patches. This partial enumeration technique reduces complex high-order energy formulations to pairwise Constraint Satisfaction Problems with unary costs (uCSP), which can be efficiently solved using standard methods like TRW-S. Our approach outperforms a number of existing state-of-the-art algorithms on well-known difficult problems (e.g. curvature regularization, stereo, deconvolution); it gives a near-global minimum and better speed. Our main application of interest is curvature regularization. In the context of segmentation, our partial enumeration technique allows us to evaluate curvature directly on small patches using a novel integral geometry approach.
Our patch-based curvature models could be seen as extensions of functional lifting @cite_1 or label elevation @cite_37 . Analogously to the line processes in @cite_28 , these second-order regularization methods use variables describing both the location and the orientation of the boundary. Thus, their curvature term is a first-order (pairwise) energy. Our patch variables include enough information about the local boundary to reduce the curvature to unary terms.
{ "cite_N": [ "@cite_28", "@cite_37", "@cite_1" ], "mid": [ "2020999234", "", "2057572731" ], "abstract": [ "We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution - Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ('annealing'), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel 'relaxation' algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.", "", "We investigate a class of variational problems that incorporate in some sense curvature information of the level lines. The functionals we consider incorporate metrics defined on the orientations of pairs of line segments that meet in the vertices of the level lines.
We discuss two particular instances: One instance that minimizes the total number of vertices of the level lines and another instance that minimizes the total sum of the absolute exterior angles between the line segments. In case of smooth level lines, the latter corresponds to the total absolute curvature. We show that these problems can be solved approximately by means of a tractable convex relaxation in higher dimensions. In our numerical experiments we present preliminary results for image segmentation, image denoising and image inpainting." ] }
1303.1749
180788212
Energies with high-order non-submodular interactions have been shown to be very useful in vision due to their high modeling power. Optimization of such energies, however, is generally NP-hard. A naive approach that works for small problem instances is exhaustive search, that is, enumeration of all possible labelings of the underlying graph. We propose a general minimization approach for large graphs based on enumeration of labelings of certain small patches. This partial enumeration technique reduces complex high-order energy formulations to pairwise Constraint Satisfaction Problems with unary costs (uCSP), which can be efficiently solved using standard methods like TRW-S. Our approach outperforms a number of existing state-of-the-art algorithms on well-known difficult problems (e.g. curvature regularization, stereo, deconvolution); it gives a near-global minimum and better speed. Our main application of interest is curvature regularization. In the context of segmentation, our partial enumeration technique allows us to evaluate curvature directly on small patches using a novel integral geometry approach.
Grid patches were also recently used for curvature evaluation in @cite_32. Unlike our integral geometry in Fig.(c), their method computes a minimum response over a number of affine filters encoding some learned "soft" patterns. The response to each filter combines deviation from the pattern and the cost of the pattern. The mathematical justification of this approach to curvature estimation is not fully explained, and several presented plots indicate its limited accuracy. As stated in @cite_32, the plots "do also reveal the fact that we consistently overestimate the true curvature cost." The extreme "hard" case of this method may reduce to our technique if the cost of each pattern is assigned according to our integral geometry equations in Fig.(c). However, this case makes redundant the filter response minimization and the pattern cost learning, which are the key technical ideas in @cite_32.
{ "cite_N": [ "@cite_32" ], "mid": [ "1582030780" ], "abstract": [ "Graph cut algorithms [9], commonly used in computer vision, solve a first-order MRF over binary variables. The state of the art for this NP-hard problem is QPBO [1,2], which finds the values for a subset of the variables in the global minimum. While QPBO is very effective overall there are still many difficult problems where it can only label a small subset of the variables. We propose a new approach that, instead of optimizing the original graphical model, instead optimizes a tractable sub-model, defined as an energy function that uses a subset of the pairwise interactions of the original, but for which exact inference can be done efficiently. Our Bounded Treewidth Subgraph (k-BTS) algorithm greedily computes a large weight treewidth-k subgraph of the signed graph, then solves the energy minimization problem for this subgraph by dynamic programming. The edges omitted by our greedy method provide a per-instance lower bound. We demonstrate promising experimental results for binary deconvolution, a challenging problem used to benchmark QPBO [2]: our algorithm performs an order of magnitude better than QPBO or its common variants [4], both in terms of energy and accuracy, and the visual quality of our output is strikingly better as well. We also obtain a significant improvement in energy and accuracy on a stereo benchmark with 2nd order priors [5], although the improvement in visual quality is more modest. Our method's running time is comparable to QPBO." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
The COPLINK system @cite_4 and its related suite of tools have a twofold goal: to ease the extraction of information from police case reports and to analyze criminal networks. A conceptual space of entities and objects is built exploiting data mining techniques in order to help in finding relations between entities. It also provides visualization support consisting of a hyperbolic tree view and a spring-embedder graph layout of relevant entities. Furthermore, COPLINK is able to optimize the management of information exploited by police forces by integrating into a single environment data regarding different cases. This is done in order to enhance the possibility of linking data from different criminal investigations, to get additional insights, and to compare them in an analytic fashion.
{ "cite_N": [ "@cite_4" ], "mid": [ "1989377254" ], "abstract": [ "In response to the September 11 terrorist attacks, major government efforts to modernize federal law enforcement authorities' intelligence collection and processing capabilities have been initiated. At the state and local levels, crime and police report data is rapidly migrating from paper records to automated records management systems in recent years, making them increasingly accessible." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
TRIST @cite_11 allows analysts to formulate, refine, organize and execute queries over large document collections. Its user interface provides different perspectives on search results, including clustering, trend analysis, comparison, and differencing. Information retrieved by TRIST can then be loaded into the SANDBOX system @cite_14, an analytical sense-making environment that helps to sort, organize, and analyze large amounts of data. The system offers interactive visualization techniques, including gestures for placing, moving, and grouping information, as well as templates for building visual models of information and visual assessment of evidence. Similarly to COPLINK, TRIST is optimized to query large databases and to analytically compare results.
{ "cite_N": [ "@cite_14", "@cite_11" ], "mid": [ "1992928607", "92453977" ], "abstract": [ "The Sandbox is a flexible and expressive thinking environment that supports both ad-hoc and more formal analytical tasks. It is the evidence marshalling and sense-making component for the analytical software environment called nSpace. This paper presents innovative Sandbox human information interaction capabilities and the rationale underlying them including direct observations of analysis work as well as structured interviews. Key capabilities for the Sandbox include \"put-this-there\" cognition, automatic process model templates, gestures for the fluid expression of thought, assertions with evidence and scalability mechanisms to support larger analysis tasks. The Sandbox integrates advanced computational linguistic functions using a Web Services interface and protocol. An independent third party evaluation experiment with the Sandbox has been completed. The experiment showed that analyst subjects using the Sandbox did higher quality analysis in less time than with standard tools. Usability test results indicated the analysts became proficient in using the Sandbox with three hours of training.", "TRIST (“The Rapid Information Scanning Tool”) is the information retrieval and triage component for the analytical environment called nSpace. TRIST uses Human Information Interaction (HII) techniques to interact with massive data in order to quickly uncover the relevant, novel and unexpected. TRIST provides query planning, rapid scanning over thousands of search results in one display, and includes multiple linked dimensions for result characterization and correlation. It also forms a cohesive platform for integrating computational linguistic capabilities such as entity extraction, document clustering and other new techniques. Analysts work with TRIST to triage their massive data and to extract information into the Sandbox evidence marshalling environment. 
Initial experiments with TRIST show that analyst work product quality is increased, in half the time, while reading double the documents." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
Another remarkable tool is GeoTime @cite_6, which visualizes the spatial interconnectedness of information over time overlaid onto a geographical substrate. It uses an interactive 3D view to visualize and track events, objects, and activities both temporally and geo-spatially. One difference between GeoTime and LogAnalysis is that the feature regarding the spatial dependency of data is not yet provided by our tool, which makes GeoTime a useful complement to LogAnalysis for such types of investigations. On the other hand, the functionalities provided by LogAnalysis in terms of analysis of temporal dependencies of data improve on those provided by GeoTime, as highlighted in Section --.
{ "cite_N": [ "@cite_6" ], "mid": [ "1695704110" ], "abstract": [ "Analyzing observations over time and geography is a common task but typically requires multiple, separate tools. The objective of our research has been to develop a method to visualize, and work with, the spatial interconnectedness of information over time and geography within a single, highly interactive 3D view. A novel visualization technique for displaying and tracking events, objects and activities within a combined temporal and geospatial display has been developed. This technique has been implemented as a demonstratable prototype called GeoTime in order to determine potential utility. Initial evaluations have been with military users. However, we believe the concept is applicable to a variety of government and business analysis tasks" ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
As an example of the various general-purpose tools for analyzing social networks (as opposed to tools specifically designed to investigate telecom networks), we mention NodeXL @cite_8, an extensible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007/2010 spreadsheet software. NodeXL is open source and was designed to facilitate learning the concepts and methods of Social Network Analysis, with visualization as a key component. It integrates metrics, statistical methods, and visualization to gain the benefit of all three approaches. As for the usage of network metrics to assess the importance of actors in the network, NodeXL shares a paradigm similar to the one we adopted in LogAnalysis, although it lacks the features of our tool related to the temporal analysis of the networks.
{ "cite_N": [ "@cite_8" ], "mid": [ "2135844668" ], "abstract": [ "We present NodeXL, an extendible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007 spreadsheet software. We demonstrate NodeXL data analysis and visualization features with a social media data sample drawn from an enterprise intranet social network. A sequence of NodeXL operations from data import to computation of network statistics and refinement of network visualization through sorting, filtering, and clustering functions is described. These operations reveal sociologically relevant differences in the patterns of interconnection among employee participants in the social media space. The tool and method can be broadly applied." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
Regarding research that applies Social Network Analysis to topics relevant to this work, T. von @cite_20 recently surveyed the available techniques for the visual analysis of large graphs. Graph visualization techniques are presented, and various graph-algorithmic aspects useful for the different stages of the visual graph analysis process are discussed. In this work we took up a number of the challenges posed in @cite_20, trying to address, for example, the problem of large-scale network visualization for ad hoc problems (in our case, the study of telecom phone networks).
{ "cite_N": [ "@cite_20" ], "mid": [ "2158453355" ], "abstract": [ "The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques cover techniques that had been introduced until 2000 or concentrate only on graph layouts published until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as timevarying graphs. Also, in accordance with ever growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review first considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process. We also present main open research challenges in this field." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
The analysis of phone call networks has also been a subject of intensive study. Mellars @cite_16 investigated the principal ways in which a mobile phone network operates and how its data are processed. Particular attention is given to the methodology for investigating phone-activity data that can be collected directly from the devices.
{ "cite_N": [ "@cite_16" ], "mid": [ "2046525174" ], "abstract": [ "The proliferation of mobile phones in society has led to a concomitant increase in their use in and connected to criminal activity. The examination and analysis of all telecommunications equipment has become an important aid to law enforcement in the investigation of crime. An understanding of the mechanism of the mobile phone network is vital to appreciate the worth of data retrieved during such an examination. This paper describes in principle the way a cellular mobile phone network operates and how the data is processed. In addition it discusses some of the tools available to examine mobile phones and SIM cards and some of their strengths and weaknesses. It also presents a short overview of the legal position of an analyst when examining a mobile phone." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
More recently, different works @cite_15 @cite_32 @cite_0 @cite_5 have used mobile phone call data to examine and characterize the social interactions among cell phone users. They analyze phone traffic networks consisting of the mobile phone call records of millions of individuals.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_15", "@cite_32" ], "mid": [ "2105585871", "2131681506", "2092124750", "2141113219" ], "abstract": [ "We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societ al level. We assign two weights on each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link, thus providing", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. 
The accuracy of our algorithm is also verified on ad hoc modular networks.", "The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency-the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions.", "Electronic databases, from phone to e-mails logs, currently provide detailed records of human communication patterns, offering novel avenues to map and explore the structure of social and communication networks. Here we examine the communication patterns of millions of mobile phone users, allowing us to simultaneously study the local and the global structure of a society-wide communication network. 
We observe a coupling between interaction strengths and the network's local structure, with the counterintuitive consequence that social networks are robust to the removal of the strong ties but fall apart after a phase transition if the weak ties are removed. We show that this coupling significantly slows the diffusion process, resulting in dynamic trapping of information in communities and find that, when it comes to information diffusion, weak and strong ties are both simultaneously ineffective." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, for e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
In detail, in @cite_32 @cite_0 the authors present the statistical features of a large-scale Belgian phone call network consisting of 4.6 million users and 7 million links. That study highlights some features typical of large social networks @cite_24 that also characterize telecom networks, such as the fission into small clusters and the presence of strong and weak ties among individuals. In addition, in @cite_15 the authors discuss an exceptional feature of that network: its division into two large communities corresponding to two different language groups (i.e., the French- and Dutch-speaking users of the Belgian network).
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_32", "@cite_24" ], "mid": [ "2105585871", "2092124750", "2141113219", "1844634983" ], "abstract": [ "We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societ al level. We assign two weights on each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link, thus providing", "The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. 
We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency-the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions.", "Electronic databases, from phone to e-mails logs, currently provide detailed records of human communication patterns, offering novel avenues to map and explore the structure of social and communication networks. Here we examine the communication patterns of millions of mobile phone users, allowing us to simultaneously study the local and the global structure of a society-wide communication network. We observe a coupling between interaction strengths and the network's local structure, with the counterintuitive consequence that social networks are robust to the removal of the strong ties but fall apart after a phase transition if the weak ties are removed. We show that this coupling significantly slows the diffusion process, resulting in dynamic trapping of information in communities and find that, when it comes to information diffusion, weak and strong ties are both simultaneously ineffective.", "The importance of modeling and analyzing Social Networks is a consequence of the success of Online Social Networks during last years. 
Several models of networks have been proposed, reflecting the different characteristics of Social Networks. Some of them fit better to model specific phenomena, such as the growth and the evolution of the Social Networks; others are more appropriate to capture the topological characteristics of the networks. Because these networks show unique and different properties and features, in this work we describe and exploit several models in order to capture the structure of popular Online Social Networks, such as Arxiv, Facebook, Wikipedia and YouTube. Our experimentation aims at verifying the structural characteristics of these networks, in order to understand what model better depicts their structure, and to analyze the inner community structure, to illustrate how members of these Online Social Networks interact and group together into smaller communities." ] }
1303.1827
2075686808
In the context of preventing and fighting crime, the analysis of mobile phone traffic, among actors of a criminal network, is helpful in order to reconstruct illegal activities on the basis of the relationships connecting those specific individuals. Thus, forensic analysts and investigators require new advanced tools and techniques which allow them to manage these data in a meaningful and efficient way. In this paper we present LogAnalysis, a tool we developed to provide visual data representation and filtering, statistical analysis features and the possibility of a temporal analysis of mobile phone activities. Its adoption may help in unveiling the structure of a criminal network and the roles and dynamics of communications among its components. Using LogAnalysis, forensic investigators could deeply understand hierarchies within criminal organizations, e.g., discovering central members who provide connections among different sub-groups, etc. Moreover, by analyzing the temporal evolution of the contacts among individuals, or by focusing on specific time windows, they could acquire additional insights on the data they are analyzing. Finally, we put into evidence how the adoption of LogAnalysis may be crucial to solve real cases, providing as example a number of case studies inspired by real forensic investigations led by one of the authors.
The community structure of phone telecom networks has been further investigated in @cite_5 . The authors exploited an efficient community detection algorithm @cite_5 @cite_21 to assess the presence of a community structure and to study its features in a large phone network of 2.6 million individuals.
{ "cite_N": [ "@cite_5", "@cite_21" ], "mid": [ "2131681506", "1966119405" ], "abstract": [ "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.", "In this paper we present a novel strategy to discover the community structure of (possibly, large) networks. This approach is based on the well-known concept of network modularity optimization. To do so, our algorithm exploits a novel measure of edge centrality, based on the κ-paths. This technique allows to efficiently compute an edge ranking in large networks in near linear time. Once the centrality ranking is calculated, the algorithm computes the pairwise proximity between nodes of the network. Finally, it discovers the community structure adopting a strategy inspired by the well-known state-of-the-art Louvain method (henceforth, LM), efficiently maximizing the network modularity. The experiments we carried out show that our algorithm outperforms other techniques and slightly improves results of the original LM, providing reliable results. Another advantage is that its adoption is naturally extended even to unweighted networks, differently with respect to the LM." ] }
In conclusion, in recent years @cite_19 @cite_1 investigated the possibility of inferring a friendship social network from the mobile phone traffic data of the same individuals. This problem has attracted the attention of other recent studies @cite_13 @cite_34 , particularly devoted to understanding the dynamics of social connections among individuals by means of mobile phone networks.
{ "cite_N": [ "@cite_13", "@cite_19", "@cite_34", "@cite_1" ], "mid": [ "2108432893", "318730972", "2038256515", "2166692930" ], "abstract": [ "Novel aspects of human dynamics and social interactions are investigated by means of mobile phone data. Using extensive phone records resolved in both time and space, we study the mean collective behavior at large scales and focus on the occurrence of anomalous events. We discuss how these spatiotemporal anomalies can be described using standard percolation theory tools. We also investigate patterns of calling activity at the individual level and show that the interevent time of consecutive calls is heavy-tailed. This finding, which has implications for dynamics of spreading phenomena in social networks, agrees with results previously reported on other human activities.", "We analyze 330,000 hours of continuous behavioral data logged by the mobile phones of 94 subjects, and compare these observations with self-report relational data. The information from these two data sources is overlapping but distinct, and the accuracy of self-report data is considerably affected by such factors as the recency and salience of particular interactions. We present a new method for precise measurements of large-scale human behavior based on contextualized proximity and communication data alone, and identify characteristic behavioral signatures of relationships that allowed us to accurately predict 95% of the reciprocated friendships in the study. Using these behavioral signatures we can predict, in turn, individual-level outcomes such as job satisfaction.", "To understand the diffusive spreading of a product in a telecom network, whether the product is a service, handset, or subscription, it can be very useful to study the structure of the underlying social network. By combining mobile traffic data and product adoption history from one of Telenor’s markets, we can define and measure an adoption network—roughly, the social network of adopters. 
By studying the time evolution of adoption networks, we can observe how different products diffuse through the network, and measure potential social influence. This paper presents an empirical and comparative study of three adoption networks evolving over time in a large telecom network. We believe that the strongest spreading of adoption takes place in the dense core of the underlying network, and gives rise to a dominant largest connected component (LCC) in the adoption network, which we call “the social network monster”. We believe that the size of the monster is a good indicator for whether or not a product is taking off. We show that the evolution of the LCC, and the size distribution of the other components, vary strongly with different products. The products studied in this article illustrate three distinct cases: that the social network monsters can grow or break down over time, or fail to occur at all. Some of the reasons a product takes off are intrinsic to the product; there are also aspects of the broader social context that can play in. Tentative explanations are offered for these phenomena. Also, we present two statistical tests which give an indication of the strength of the spreading over the social network. We find evidence that the spreading is dependent on the underlying social network, in particular for the early adopters.", "Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. 
We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction." ] }
was originally presented in a preliminary version during late 2010 @cite_10 and received a positive critique from the research community of and .
{ "cite_N": [ "@cite_10" ], "mid": [ "1984801506" ], "abstract": [ "In this paper we present our tool LogAnalysis for forensic visual statistical analysis of mobile phone traffic. LogAnalysis graphically represents the relationships among mobile phone users with a node-link layout. Its aim is to explore the structure of a large graph, measure connectivity among users and give support to visual search and automatic identification of organizations. To do so, LogAnalysis integrates graphical representation of network elements with measures typical of Social Network Analysis (SNA) in order to help detectives or forensic analysts to systematically examine relationships. The analysis of data extracted from mobile phone traffic logs has a fundamental relevance in forensic investigations since it allows to unveil the structure of relationships among individuals suspected to be part of criminal organizations together with the role they play inside the organization itself. To this purpose, the Social Network Analysis (SNA) methods were heavily employed in order to understand the importance of relationships. Interpretation and visual exploration of graphs representing phone contacts over a given time interval may become demanding, due to the presence of numerous nodes and edges. Our main contribution is an interface that enables systematic analysis of social relationships using different visual techniques and statistical information. LogAnalysis allows a deeper and clearer understanding of criminal associations while evidencing key members inside the criminal ring, and/or those working as links among different associations" ] }
We argue that the further developments of this tool have increased its potential and performance. In particular, the research direction that we are following with is devoted to including the possibility of analyzing temporal information from , and the tool has been specifically optimized to study , whose analysis has attracted relevant research efforts in the recent period @cite_35 . Additional efforts have been carried out to improve the possibilities provided by to unveil and study the community structure of the networks, whose importance has been assessed in recent years in a number of works @cite_18 @cite_31 , by means of different community detection techniques @cite_27 @cite_21 @cite_23 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_21", "@cite_27", "@cite_23", "@cite_31" ], "mid": [ "2061094040", "33275482", "1966119405", "2127048411", "2157527521", "2071703875" ], "abstract": [ "Social network analysis and mining has been highly influenced by the online social web sites, telecom consumer data and instant messaging systems and has widely analyzed the presence of dense communities using graph theory and machine learning techniques. Mobile social network analysis is the mapping and measuring of interactions and flows between people, groups, and organizations based on the usage of their mobile communication services. Community identification and mining is one of the recent major directions in social network analysis. In this paper we find the communities in the network based on a modularity factor. Then we propose a graph theory-based algorithm for further split of communities resulting in smaller sized and closely knit sub-units, to drill down and understand consumer behavior in a comprehensive manner. These sub-units are then analyzed and labeled based on their group behavior pattern. The analysis is done using two approaches: rule-based and cluster-based, for comparison and the able usage of information for suitable labeling of groups. Moreover, we measured and analyzed the uniqueness of the structural properties for each small unit; it is another quick and dynamic way to assign suitable labels for each distinct group. We have mapped the behavior-based labeling with unique structural properties of each group. It reduces considerably the time taken for processing and identifying smaller sub-communities for effective targeted marketing. The efficiency of the employed algorithms was evaluated on a large telecom dataset in three different stages of our work.", "", "In this paper we present a novel strategy to discover the community structure of (possibly, large) networks. 
This approach is based on the well-known concept of network modularity optimization. To do so, our algorithm exploits a novel measure of edge centrality, based on the κ-paths. This technique allows to efficiently compute an edge ranking in large networks in near linear time. Once the centrality ranking is calculated, the algorithm computes the pairwise proximity between nodes of the network. Finally, it discovers the community structure adopting a strategy inspired by the well-known state-of-the-art Louvain method (henceforth, LM), efficiently maximizing the network modularity. The experiments we carried out show that our algorithm outperforms other techniques and slightly improves results of the original LM, providing reliable results. Another advantage is that its adoption is naturally extended even to unweighted networks, differently with respect to the LM.", "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. 
We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.", "Many real-world networks are intimately organized according to a community structure. Much research effort has been devoted to develop methods and algorithms that can efficiently highlight this hidden structure of a network, yielding a vast literature on what is called today community detection. Since network representation can be very complex and can contain different variants in the traditional graph model, each algorithm in the literature focuses on some of these properties and establishes, explicitly or implicitly, its own definition of community. According to this definition, each proposed algorithm then extracts the communities, which typically reflect only part of the features of real communities. The aim of this survey is to provide a ‘user manual’ for the community discovery problem. Given a meta definition of what a community in a social network is, our aim is to organize the main categories of community discovery methods based on the definition of community they adopt. Given a desired definition of community and the features of a problem (size of network, direction of edges, multidimensionality, and so on) this review paper is designed to provide a set of approaches that researchers could focus on. The proposed classification of community discovery methods is also useful for putting into perspective the many open directions for further research. © 2011 Wiley Periodicals, Inc. 
Statistical Analysis and Data Mining 4: 512–546, 2011 © 2011 Wiley Periodicals, Inc.", "Detection of community structures in social networks has attracted lots of attention in the domain of sociology and behavioral sciences. Social networks also exhibit dynamic nature as these networks change continuously with the passage of time. Social networks might also present a hierarchical structure led by individuals who play important roles in a society such as managers and decision makers. Detection and visualization of these networks that are changing over time is a challenging problem where communities change as a function of events taking place in the society and the role people play in it. In this paper, we address these issues by presenting a system to analyze dynamic social networks. The proposed system is based on dynamic graph discretization and graph clustering. The system allows detection of major structural changes taking place in social communities over time and reveals hierarchies by identifying influential people in social networks. We use two different data sets for the empirical evaluation and observe that our system helps to discover interesting facts about the social and hierarchical structures present in these social networks." ] }
Furthermore, our tool provides a system model which aims at improving the quality of the analysis of social relationships of the network through the integration of visualization and SNA-based statistical techniques, which is a relevant topic in the ongoing research in Social Network Analysis @cite_33 .
{ "cite_N": [ "@cite_33" ], "mid": [ "2010133698" ], "abstract": [ "This paper reviews the development of social network analysis and examines its major areas of application in sociology. Current developments, including those from outside the social sciences, are examined and their prospects for advances in substantive knowledge are considered. A concluding section looks at the implications of data mining techniques and highlights the need for interdisciplinary cooperation if significant work is to ensue." ] }
1303.1170
2068278049
Multiple sclerosis (MS) is a chronic autoimmune disease that affects the central nervous system. The progression and severity of MS varies by individual, but it is generally a disabling disease. Although medications have been developed to slow the disease progression and help manage symptoms, MS research has yet to result in a cure. Early diagnosis and treatment of the disease have been shown to be effective at slowing the development of disabilities. However, early MS diagnosis is difficult because symptoms are intermittent and shared with other diseases. Thus most previous works have focused on uncovering the risk factors associated with MS and predicting the progression of disease after a diagnosis rather than disease prediction. This paper investigates the use of data available in electronic medical records (EMRs) to create a risk prediction model; thereby helping clinicians perform the difficult task of diagnosing an MS patient. Our results demonstrate that even given a limited time window of patient data, one can achieve reasonable classification with an area under the receiver operating characteristic curve of 0.724. By restricting our features to common EMR components, the developed models also generalize to other healthcare systems.
Shifts in an individual's hormone levels have also been suggested as factors in the disease process. A decrease in the number of MS relapses during pregnancy suggests the transient benefits of higher levels of estrogen @cite_33 @cite_19 . A study on British women showed that the recent use of oral contraceptives reduced the risk of MS @cite_7 . However, a subsequent US study @cite_24 was unable to obtain evidence that supported the benefits of oral contraceptives.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_33", "@cite_7" ], "mid": [ "2103792336", "", "2046451999", "2134285047" ], "abstract": [ "Background: Experimental and clinical data suggest a protective effect of estrogens on the development and progression of MS. Methods: We assessed whether MS incidence was associated with oral contraceptive use or parity in two cohort studies of U.S. women, the Nurses’ Health Study (NHS; 121,700 women aged 30 to 55 years at baseline in 1976) and the Nurses’ Health Study II (NHS II; 116,671 women aged 25 to 42 years at baseline in 1989). Participants with a diagnosis of MS before baseline were excluded. Oral contraceptive history and parity were assessed at baseline and updated biennially. During follow-ups of 18 years (NHS) and 8 years (NHS II) we documented a total of 315 definite or probable cases of MS. Results: Neither use of oral contraceptives nor parity were significantly associated with the risk of MS. As compared with women who never used oral contraceptives, the age-adjusted relative risk (95% CI) was 1.2 (0.9, 1.5) for past users, and 1.0 (0.6, 1.7) for current users. Similar results were obtained after adjustment for latitude, ancestry, and other potential confounding factors. There was no clear trend of MS risk with either increasing duration of use or time elapsed since last use. Age at first birth was also not associated with the risk of MS. Conclusions: These prospective results do not support a lasting protective effect of oral contraceptive use or pregnancy on the risk of MS. The decision to use hormonal contraception should not be affected by its effects on the risk of MS.", "", "As discussed in Part I of this review, the geographic distribution of multiple sclerosis (MS) and the change in risk among migrants provide compelling evidence for the existence of strong environmental determinants of MS, where “environmental” is broadly defined to include differences in diet and other behaviors. 
As we did for infections, we focus here primarily on those factors that may contribute to explain the geographic variations in MS prevalence and the change in risk among migrants. Among these, sunlight exposure emerges as being the most likely candidate. Because the effects of sun exposure may be mediated by vitamin D, we also examine the evidence linking vitamin D intake or status to MS risk. Furthermore, we review the evidence on cigarette smoking, which cannot explain the geographic variations in MS risk, but may contribute to the recently reported increases in the female/male ratio in MS incidence. Other proposed risk factors for MS are mentioned only briefly; although we recognize that some of these might be genuine, evidence is usually sparse and unpersuasive. Ann Neurol 2007", "Background: Exogenous estrogens affect the onset and clinical course of experimental allergic encephalomyelitis. Oral contraceptives, a frequent source of exogenous estrogens in humans, could have a role in the development of multiple sclerosis (MS). Objective: To examine whether recent oral contraceptive use and pregnancy history are associated with the risk of MS. Design and Setting: A case-control study nested in the General Practice Research Database. This database contains prospective health information (drug prescriptions and clinical diagnoses) on more than 3 million Britons who are enrolled with selected general practitioners. Participants: One hundred six female incident cases of" ] }
Other autoimmune disorders and specific cancers have been proposed as potential comorbidities to MS. In a paper that summarized the environmental features researched in etiological research on MS @cite_9 , Lauer noted that inflammatory bowel disease (IBD), ulcerative colitis, and Type 1 diabetes have the strongest correlations to MS amongst the various autoimmune disorders. The paper also referenced potential associations with Hodgkin's, oral, and colon cancers with the caveat that there was insufficient evidence to support these connections.
{ "cite_N": [ "@cite_9" ], "mid": [ "2172294899" ], "abstract": [ "The etiology of multiple sclerosis is, at present, not definitely known, but genetic and environmental factors play a role in its causation. Environmental causes have a predominant impact. Epidemiologic research has contributed considerably to the identification of external risk factors in this multifactorial setting, but methodological constraints still play a major part. Viral and other microbial agents have drawn much attention, although none of them is a necessary condition for the disease. This is true also for the Epstein–Barr virus, for which most data, including prospective data, supports a role in the majority of multiple sclerosis patients. In parallel, the hypothesis is still attractive in that it is not the virus per se, but rather more the age when it infects the human being that is the crucial matter. Other risk factors, such as tobacco smoking and vitamin D deficiency, which have immunomodulating properties, may also play some role, although the latter is not compatible with all data of the..." ] }
1303.1170
2068278049
Multiple sclerosis (MS) is a chronic autoimmune disease that affects the central nervous system. The progression and severity of MS varies by individual, but it is generally a disabling disease. Although medications have been developed to slow the disease progression and help manage symptoms, MS research has yet to result in a cure. Early diagnosis and treatment of the disease have been shown to be effective at slowing the development of disabilities. However, early MS diagnosis is difficult because symptoms are intermittent and shared with other diseases. Thus most previous works have focused on uncovering the risk factors associated with MS and predicting the progression of disease after a diagnosis rather than disease prediction. This paper investigates the use of data available in electronic medical records (EMRs) to create a risk prediction model; thereby helping clinicians perform the difficult task of diagnosing an MS patient. Our results demonstrate that even given a limited time window of patient data, one can achieve reasonable classification with an area under the receiver operating characteristic curve of 0.724. By restricting our features to common EMR components, the developed models also generalize to other healthcare systems.
Predictive studies have primarily focused on the progression of the disease. Bergamaschi et al. @cite_36 identified clinical features that could help predict the onset of secondary progression, defined by an increase in Kurtzke's Expanded Disability Status Scale (EDSS), using patient data collected in the first year of the disease. The factors discovered in that study were then used to propose a Bayesian Risk Estimate for Multiple Sclerosis (BREMS) score to predict the risk of reaching secondary progression @cite_1 . A more recent study suggested using EDSS ranking to identify patients at risk of high progression rates 5 years from disease onset @cite_34 .
{ "cite_N": [ "@cite_36", "@cite_34", "@cite_1" ], "mid": [ "2039323421", "2107132304", "2144359736" ], "abstract": [ "Abstract With the aid of a Bayesian statistical model of the natural course of relapsing remitting Multiple Sclerosis (MS), we identify short-term clinical predictors of long-term evolution of the disease, with particular focus on predicting onset of secondary progressive course (failure event) on the basis of patient information available at an early stage of disease. The model specifies the full joint probability distribution for a set of variables including early indicator variables (observed during the early stage of disease), intermediate indicator variables (observed throughout the course of disease, prefailure) and the time to failure. Our model treats the intermediate indicators as a surrogate response event, so that in right-censored patients, these indicators provide supplementary information pointing towards the unobserved failure times. Moreover, the full probability modelling approach allows the considerable uncertainty which affects certain early indicators, such as the early relapse rates, to be incorporated in the analysis. With such a model, the ability of early indicators to predict failure can be assessed more accurately and reliably, and explained in terms of the relationship between early and intermediate indicators. Moreover, a model with the aforementioned features allows us to characterize the pattern of disease course in high-risk patients, and to identify short-term manifestations which are strongly related to long-term evolution of disease, as potential surrogate responses in clinical trials. Our analysis is based on longitudinal data from 186 MS patients with a relapsing–remitting initial course. The following important early predictors of the time to progression emerged: age; number of neurological functional systems (FSs) involved; sphincter, or motor, or motor-sensory symptoms; presence of sequelae after onset. 
During the first 3 years of follow up, to reach EDSS≥4 outside relapse, to have sphincter or motor relapses and to reach moderate pyramidal involvement were also found to be unfavourable prognostic factors.", "Background: The Expanded Disability Status Scale (EDSS) is widely used to rate multiple sclerosis (MS) disability, but lack of disease duration information limits utility in assessing severity. EDSS ranking at specific disease durations was used to devise the MS Severity Score, which is gaining popularity for predicting outcomes. As this requires validation in longitudinal cohorts, we aimed to assess the utility of EDSS ranking as a predictor of 5-year outcome in the MSBase Registry. Methods: Rank stability of EDSS over time was examined in the MSBase Registry, a large multicentre MS cohort. Scores were ranked for 5-year intervals, and correlation of rank across intervals was assessed using Spearman's rank correlation. EDSS progression outcomes at 10 years were disaggregated by 5-year EDSS scores. Results: Correlation coefficients for EDSS rank over 5-year intervals increased with MS duration: years 1-6=0.55, years 4-9=0.74, years 7-12=0.80 and years 10-15=0.83. EDSS progression risk at 10 years after onset was highly dependent on EDSS at 5 years; one-point progression risk was greater for EDSS score of >2 than ≤2. Two-point progression was uncommon for EDSS score of <2 and more common at EDSS score of 4. Conclusions: EDSS rank stability increases with disease duration, probably due to reduced relapses and less random variation in later disease. After 4 years duration, EDSS rank was highly predictive of EDSS rank 5 years later. Risk of progression by 10 years was highly dependent on EDSS score at 5 years duration. We confirm the utility of EDSS ranking to predict 5-year outcome in individuals 4 years after disease onset.", "Aim: We propose a simple tool for early prediction of unfavorable long-term evolution of multiple sclerosis (MS). 
Methods: A Bayesian model allowed us to calculate, within the first year of disease and for each patient, the Bayesian Risk Estimate for MS (BREMS) score that represents the risk of reaching secondary progression (SP). Results: The median BREMS were higher in 158 patients who reached SP within 10 years in comparison with 1087 progression-free patients (0.69 vs. 0.30, p<0.0001). BREMS value was related to SP-risk in the whole cohort (p<0.0001) and in the subgroup of 535 patients who had never been treated with immune therapies, thus fairly representing the natural history of disease (p<0.000001). Conclusions: BREMS can be useful both to identify the patients who are candidates or not for early or for more aggressive therapies, and to improve the design and the analysis of clinical therapeutic trials and of observational studies." ] }
1303.1170
2068278049
Multiple sclerosis (MS) is a chronic autoimmune disease that affects the central nervous system. The progression and severity of MS varies by individual, but it is generally a disabling disease. Although medications have been developed to slow the disease progression and help manage symptoms, MS research has yet to result in a cure. Early diagnosis and treatment of the disease have been shown to be effective at slowing the development of disabilities. However, early MS diagnosis is difficult because symptoms are intermittent and shared with other diseases. Thus most previous works have focused on uncovering the risk factors associated with MS and predicting the progression of disease after a diagnosis rather than disease prediction. This paper investigates the use of data available in electronic medical records (EMRs) to create a risk prediction model; thereby helping clinicians perform the difficult task of diagnosing an MS patient. Our results demonstrate that even given a limited time window of patient data, one can achieve reasonable classification with an area under the receiver operating characteristic curve of 0.724. By restricting our features to common EMR components, the developed models also generalize to other healthcare systems.
Scoring systems have also been developed to assess the risk of disability. One study showed that the MS Functional Composite, originally proposed as a clinical outcome measure, could be used to determine the risk of severe physical disability @cite_8 . The Magnetic Resonance Disease Severity Scale (MRDSS) combined MRI measures into a composite score to predict the progression of physical disability @cite_13 . Bazelier et al. @cite_10 derived a score using Cox proportional hazards models to estimate the long-term risk of osteoporotic and hip fractures in MS patients. Another study, by Margaritella et al. @cite_30 , used the Evoked Potentials score to predict the progression of disability and to identify patients with benign MS.
{ "cite_N": [ "@cite_30", "@cite_10", "@cite_13", "@cite_8" ], "mid": [ "1969247853", "2108588186", "2126496021", "1999341130" ], "abstract": [ "Background The prognostic value of evoked potentials (EPs) in multiple sclerosis (MS) has not been fully established. The correlations between the Expanded Disability Status Scale (EDSS) at First Neurological Evaluation (FNE) and the duration of the disease, as well as between EDSS and EPs, have influenced the outcome of most previous studies. To overcome these confounding relations, we propose to test the prognostic value of EPs within an appropriate patient population which should be based on patients with low EDSS at FNE and short disease duration.", "Objective: To derive a simple score for estimating the long-term risk of osteoporotic and hip fracture in individual patients with MS. Methods: Using the UK General Practice Research Database linked to the National Hospital Registry (1997–2008), we identified patients with incident MS (n = 5,494). They were matched 1:6 by year of birth, sex, and practice with patients without MS (control subjects). Cox proportional hazards models were used to calculate the long-term risk of osteoporotic and hip fracture. We fitted the regression model with general and specific risk factors, and the final Cox model was converted into integer risk scores. Results: In comparison with the FRAX calculator, our risk score contains several new risk factors that have been linked with fracture, which include MS, use of antidepressants, use of anticonvulsants, history of falling, and history of fatigue. We estimated the 5- and 10-year risks of osteoporotic and hip fracture in relation to the risk score.
The C-statistic was moderate (0.67) for the prediction of osteoporotic fracture and excellent (0.89) for the prediction of hip fracture.", "Background Individual magnetic resonance imaging (MRI) disease severity measures, such as atrophy or lesions, show weak relationships to clinical status in patients with multiple sclerosis (MS). Objective To combine MS-MRI measures of disease severity into a composite score. Design Retrospective analysis of prospectively collected data. Setting Community-based and referral subspecialty clinic in an academic hospital. Patients A total of 103 patients with MS, with a mean (SD) Expanded Disability Status Scale (EDSS) score of 3.3 (2.2), of whom 62 (60.2%) had the relapsing-remitting, 33 (32.0%) the secondary progressive, and 8 (7.8%) the primary progressive form. Main Outcome Measures Brain MRI measures included baseline T2 hyperintense (T2LV) and T1 hypointense (T1LV) lesion volume and brain parenchymal fraction (BPF), a marker of global atrophy. The ratio of T1LV to T2LV (T1:T2) assessed lesion severity. A Magnetic Resonance Disease Severity Scale (MRDSS) score, on a continuous scale from 0 to 10, was derived for each patient using T2LV, BPF, and T1:T2. Results The MRDSS score averaged 5.1 (SD, 2.6). Baseline MRI and EDSS correlations were moderate for BPF, T1:T2, and MRDSS and weak for T2LV. The MRDSS showed a larger effect size than the individual MRI components in distinguishing patients with the relapsing-remitting form from those with the secondary progressive form. Models containing either T2LV or MRDSS were significantly associated with disability progression during the mean (SD) 3.2 (0.3)–year observation period, when adjusting for baseline EDSS score.
Conclusion Combining brain MRI lesion and atrophy measures can predict MS clinical progression and provides the basis for developing an MRI-based continuous scale as a marker of MS disease severity.", "Objective: To determine whether the MS Functional Composite (MSFC) can predict future disease progression in patients with relapsing remitting MS (RR-MS). Background: The MSFC was recommended by the Clinical Outcomes Assessment Task Force of the National MS Society as a new clinical outcome measure for clinical trials. The MSFC, which contains a test of walking speed, arm dexterity, and cognitive function, is expressed as a single score on a continuous scale. It was thought to offer improved reliability and responsiveness compared with traditional clinical MS outcome measures. The predictive value of MSFC scores in RR-MS has not been determined. Methods: The authors conducted a follow-up study of patients with RR-MS who participated in a phase III study of interferon β-1a (AVONEX) to determine the predictive value of MSFC scores. MSFC scores were constructed from data obtained during the phase III trial. Patients were evaluated by neurologic and MRI examinations after an average interval of 8.1 years from the start of the clinical trial. The relationships between MSFC scores during the clinical trial and follow-up status were determined. Results: MSFC scores from the phase III clinical trial strongly predicted clinical and MRI status at the follow-up visit. Baseline MSFC scores, and change in MSFC score over 2 years correlated with both disability status and the severity of whole brain atrophy at follow-up. There were also significant correlations between MSFC scores during the clinical trial and patient-reported quality of life at follow-up. The correlation with whole brain atrophy at follow-up was stronger for baseline MSFC than for baseline EDSS. 
Conclusion: MSFC scores in patients with RR-MS predict the level of disability and extent of brain atrophy 6 to 8 years later. MSFC scores may prove useful to assign prognosis, monitor patients during early stages of MS, and to assess treatment effects." ] }
1303.1170
2068278049
Multiple sclerosis (MS) is a chronic autoimmune disease that affects the central nervous system. The progression and severity of MS varies by individual, but it is generally a disabling disease. Although medications have been developed to slow the disease progression and help manage symptoms, MS research has yet to result in a cure. Early diagnosis and treatment of the disease have been shown to be effective at slowing the development of disabilities. However, early MS diagnosis is difficult because symptoms are intermittent and shared with other diseases. Thus most previous works have focused on uncovering the risk factors associated with MS and predicting the progression of disease after a diagnosis rather than disease prediction. This paper investigates the use of data available in electronic medical records (EMRs) to create a risk prediction model; thereby helping clinicians perform the difficult task of diagnosing an MS patient. Our results demonstrate that even given a limited time window of patient data, one can achieve reasonable classification with an area under the receiver operating characteristic curve of 0.724. By restricting our features to common EMR components, the developed models also generalize to other healthcare systems.
Limited research has been done on predicting the risk of developing MS. One work predicted MS in patients with monosymptomatic optic neuritis using MRI examination findings, oligoclonal bands in cerebrospinal fluid (CSF), the immunoglobulin (Ig) G index, and the seasonal time of onset @cite_0 . Thrower @cite_25 suggested using the clinical characteristics of optic neuritis and transverse myelitis to identify high-risk MS patients. More recently, De Jager et al. @cite_4 proposed a weighted genetic risk score (wGRS) based on genetic susceptibility loci in the context of environmental risk factors. However, prior research relies on specialized measurements that are performed to confirm an MS diagnosis. The suggested approaches therefore do not generalize to all patients and fail to allow for early diagnosis of and intervention in high-risk MS patients.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_25" ], "mid": [ "2146020893", "", "2029975686" ], "abstract": [ "Using multivariate analyses, individual risk of clinically definite multiple sclerosis (CDMS) after monosymptomatic optic neuritis (MON) was quantified in a prospective study with clinical MON onset during 1990-95 in Stockholm, Sweden. During a mean follow-up time of 3.8 years, the presence of MS-like brain magnetic resonance imaging (MRI) lesions and oligoclonal immunoglobulin (Ig) G bands in cerebrospinal fluid (CSF) were strong prognostic markers of CDMS, with relative hazard ratios of 4.68 (95% confidence interval (CI) 2.21-9.91) and 5.39 (95% CI 1.56-18.61), respectively. Age and season of clinical onset were also significant predictors, with relative hazard ratios of 1.76 (95% CI 1.02-3.04) and 2.21 (95% CI 1.13-3.98), respectively. Based on the above two strong predictors, individual probability of CDMS development after MON was calculated in a three-quarter sample drawn from a cohort, with completion of follow-up at three years. The highest probability, 0.66 (95% CI 0.48-0.80), wa...", "", "Multiple sclerosis (MS) represents a spectrum of demyelination that depends on disease duration and clinical categorization. Most patients present with the relapsing–remitting form of the disease. The earliest clinical presentation of relapsing–remitting MS (RRMS) is the clinically isolated syndrome (CIS). Predicting which CIS patients are at high risk for MS is complicated by the disparity between clinical attacks and the extent of axon pathology. However, recent interferon-beta (IFN-β) trials have demonstrated a delay in time to the second demyelinating event with early treatment, and early treatment could also slow the progression from RRMS to secondary-progressive MS (SPMS). Clinical findings in combination with brain MRI and CSF analysis can be used in CIS patients to evaluate their risk for clinically definite MS (CDMS). 
Application of the McDonald criteria also allows an earlier MS diagnosis by using new MRI lesions to define dissemination in time. Early immunomodulatory therapy for selected CIS patients may eventually prevent future axon pathology and progression of disability in this lifelong disease." ] }
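A weighted genetic risk score of the kind mentioned above is, in essence, a weighted allele count. The following is a minimal illustrative sketch, not De Jager et al.'s code; the function name and inputs are made up. Each susceptibility locus contributes its carried risk-allele count (0, 1, or 2) multiplied by the log of that locus's reported odds ratio.

```python
import math

def weighted_grs(allele_counts, odds_ratios):
    """Weighted genetic risk score: sum over loci of the carried
    risk-allele count (0, 1 or 2) times log(odds ratio) for that locus.
    Illustrative sketch; the two lists must be aligned per locus."""
    if len(allele_counts) != len(odds_ratios):
        raise ValueError("need one odds ratio per locus")
    return sum(n * math.log(r) for n, r in zip(allele_counts, odds_ratios))
```

For example, two risk alleles at a locus with odds ratio 1.5 and one risk allele at a locus with odds ratio 1.2 give a score of 2·ln 1.5 + ln 1.2 ≈ 0.99.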
1303.1208
1489441240
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are "mutually irreducible," a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to "mixture proportion estimation," which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach.
Generative models have also been applied in the context of random label noise. These impose parametric models on the data-generating distributions, and include the label noise as part of the model. The parameters are then estimated using an EM algorithm. The method of @cite_4 employs kernels in this approach, allowing for the modeling of more flexible distributions.
{ "cite_N": [ "@cite_4" ], "mid": [ "1580256954" ], "abstract": [ "Data noise is present in many machine learning problems domains, some of these are well studied but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples with noisy labels. The approach allows to associate with each example a probability of the label being flipped. We utilise an expectation maximization (EM) algorithm for updating the probabilities. The E-step uses class conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We demonstrate the feasibility of the approach on two real-world data-sets." ] }
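The EM approach described above can be made concrete with a toy model. This is an illustrative sketch of the general idea, not the kernel Fisher discriminant of @cite_4: a one-dimensional two-Gaussian class model whose observed labels are flipped with an unknown symmetric probability `rho`. The E-step computes the posterior of the true label given both the feature and the noisy label; the M-step re-estimates the class means, the class prior, and the flip rate.

```python
import math

def em_label_noise(x, y_obs, iters=50):
    """EM for a 1-D two-Gaussian model (unit variance) whose observed
    labels are flipped with an unknown symmetric probability rho.
    Illustrative sketch only; all names are made up."""
    # crude initialisation: class means from the (noisy) observed labels
    mu = [sum(xi for xi, yi in zip(x, y_obs) if yi == c) /
          max(1, sum(1 for yi in y_obs if yi == c)) for c in (0, 1)]
    pi, rho = 0.5, 0.1
    for _ in range(iters):
        # E-step: posterior that the TRUE label is 1, given x and noisy label
        r = []
        for xi, yi in zip(x, y_obs):
            lik = [math.exp(-0.5 * (xi - mu[c]) ** 2) for c in (0, 1)]
            flip = [rho if yi != c else 1.0 - rho for c in (0, 1)]
            p1 = pi * lik[1] * flip[1]
            p0 = (1.0 - pi) * lik[0] * flip[0]
            r.append(p1 / (p0 + p1))
        # M-step: re-estimate means, class prior and flip rate
        n1 = sum(r)
        mu[1] = sum(ri * xi for ri, xi in zip(r, x)) / n1
        mu[0] = sum((1 - ri) * xi for ri, xi in zip(r, x)) / (len(x) - n1)
        pi = n1 / len(x)
        rho = sum(ri * (1 - yi) + (1 - ri) * yi
                  for ri, yi in zip(r, y_obs)) / len(x)
    return mu, rho
```

On well-separated synthetic data the flip rate is typically recovered to within a few percentage points of its true value.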
1303.0665
2154908680
The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news has specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation.
In this work, we use recommender systems to suggest relevant and interesting news articles to readers. In general, there are two classes of recommender systems @cite_38 : collaborative filtering systems @cite_19 , which recommend items based on the preferences of similar users, and content-based systems @cite_13 , which use the content similarity of the items.
{ "cite_N": [ "@cite_19", "@cite_38", "@cite_13" ], "mid": [ "2100235918", "2171960770", "2116206254" ], "abstract": [ "As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, model-based, and hybrid CF algorithms (that combine CF with other recommendation techniques), with examples for representative algorithms of each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area.", "This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods that are usually classified into the following three main categories: content-based, collaborative, and hybrid recommendation approaches. This paper also describes various limitations of current recommendation methods and discusses possible extensions that can improve recommendation capabilities and make recommender systems applicable to an even broader range of applications. These extensions include, among others, an improvement of understanding of users and items, incorporation of the contextual information into the recommendation process, support for multicriteria ratings, and a provision of more flexible and less intrusive types of recommendations.", "Recommender systems have the effect of guiding users in a personalized way to interesting objects in a large space of possible options. 
Content-based recommendation systems try to recommend items similar to those a given user has liked in the past. Indeed, the basic process performed by a content-based recommender consists in matching up the attributes of a user profile in which preferences and interests are stored, with the attributes of a content object (item), in order to recommend to the user new interesting items. This chapter provides an overview of content-based recommender systems, with the aim of imposing a degree of order on the diversity of the different aspects involved in their design and implementation. The first part of the chapter presents the basic concepts and terminology of content-based recommender systems, a high level architecture, and their main advantages and drawbacks. The second part of the chapter provides a review of the state of the art of systems adopted in several application domains, by thoroughly describing both classical and advanced techniques for representing items and user profiles. The most widely adopted techniques for learning user profiles are also presented. The last part of the chapter discusses trends and future research which might lead towards the next generation of systems, by describing the role of User Generated Content as a way for taking into account evolving vocabularies, and the challenge of feeding users with serendipitous recommendations, that is to say surprisingly interesting items that they might not have otherwise discovered." ] }
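The collaborative-filtering class can be illustrated with a minimal memory-based sketch; the data and function names here are made up for illustration. A user's unknown rating for an item is predicted as a cosine-similarity-weighted average of other users' ratings for that item.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts (item -> rating)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(r * r for r in u.values())) *
           math.sqrt(sum(r * r for r in v.values())))
    return num / den

def predict(ratings, user, item):
    """User-based CF: similarity-weighted average of the ratings that
    other users gave to `item`; returns None if nobody rated it."""
    num = den = 0.0
    for other, profile in ratings.items():
        if other == user or item not in profile:
            continue
        s = cosine(ratings[user], profile)
        num += s * profile[item]
        den += abs(s)
    return num / den if den else None
```

A content-based system would instead score `item` by the similarity of its attribute vector to the user's profile vector; the weighting scheme is analogous, only the vectors differ.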
1303.0665
2154908680
The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news has specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation.
The earliest example of collaborative filtering applied to news recommendation is the GroupLens project, which used it for newsgroups @cite_5 @cite_21 . News aggregation systems such as Google News @cite_9 also implement such algorithms: they use Probabilistic Latent Semantic Indexing and MinHash for clustering news items, and item covisitation (i.e., two news items clicked by the same user within a time frame) for recommendation. Their system builds a graph in which the nodes are the news stories and the edges represent the number of covisitations. Each approach generates a score for a given news item, and the scores are aggregated into a single one by a linear combination.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_21" ], "mid": [ "2155106456", "2123427850", "" ], "abstract": [ "Collaborative filters help people make choices based on the opinions of other people. GroupLens is a system for collaborative filtering of netnews, to help people find articles they will like in the huge stream of available articles. News reader clients display predicted scores and make it easy for users to rate articles after they read them. Rating servers, called Better Bit Bureaus, gather and disseminate the ratings. The rating servers predict scores based on the heuristic that people who agreed in the past will probably agree again. Users can protect their privacy by entering ratings under pseudonyms, without reducing the effectiveness of the score prediction. The entire architecture is open: alternative software for news clients and Better Bit Bureaus can be developed independently and can interoperate with the components we have developed.", "Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several million users and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.", "" ] }
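The covisitation idea described above is easy to sketch. The following toy version (an illustration of the concept, not Google News code) counts pairs of stories clicked by the same user within a time window and recommends the strongest neighbours of a story in the resulting graph:

```python
from collections import defaultdict

def build_covisitation(click_log, window=600):
    """click_log: iterable of (user, story, unix_time) tuples.
    Returns a dict mapping (story_a, story_b) to the number of times
    both were clicked by one user within `window` seconds."""
    graph = defaultdict(int)
    by_user = defaultdict(list)
    for user, story, ts in click_log:
        by_user[user].append((ts, story))
    for clicks in by_user.values():
        clicks.sort()
        for i, (ti, si) in enumerate(clicks):
            for tj, sj in clicks[i + 1:]:
                if tj - ti > window:
                    break  # clicks are sorted, so later ones are farther away
                if si != sj:
                    graph[(si, sj)] += 1
                    graph[(sj, si)] += 1
    return graph

def recommend(graph, story, k=3):
    """Top-k covisited neighbours of `story`, ranked by edge weight."""
    scores = [(cnt, other) for (a, other), cnt in graph.items() if a == story]
    return [other for cnt, other in sorted(scores, reverse=True)[:k]]
```

In the full system these edge weights are one signal among several, combined linearly with the clustering-based scores.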
1303.0665
2154908680
The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news has specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation.
It is also possible to combine the two types in a hybrid system @cite_22 @cite_28 @cite_32 . For example, @cite_33 extend the Google News study by analysing user click behaviour in order to build accurate user profiles. They propose a Bayesian model that recommends news based on a user's interests and on the news trend within a group of users, and they combine this approach with that of @cite_9 to generate personalized recommendations. @cite_16 introduce a contextual-bandit algorithm that learns to recommend by selecting news stories to serve users based on contextual information about the users and the stories, while simultaneously adapting its selection strategy from user-click feedback so as to maximize the total number of clicks.
{ "cite_N": [ "@cite_22", "@cite_33", "@cite_28", "@cite_9", "@cite_32", "@cite_16" ], "mid": [ "281665770", "2153111836", "", "2123427850", "", "2112420033" ], "abstract": [ "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering.", "Online news reading has become very popular as the web provides access to news articles from millions of sources around the world. A key challenge of news websites is to help users find the articles that are interesting to read. In this paper, we present our research on developing personalized news recommendation system in Google News. For users who are logged in and have explicitly enabled web history, the recommendation system builds profiles of users' news interests based on their past click behavior. To understand how users' news interests change over time, we first conducted a large-scale analysis of anonymized Google News users click logs. 
Based on the log analysis, we developed a Bayesian framework for predicting users' current news interests from the activities of that particular user and the news trends demonstrated in the activity of all users. We combine the content-based recommendation mechanism which uses learned user profiles with an existing collaborative filtering mechanism to generate personalized news recommendations. The hybrid recommender system was deployed in Google News. Experiments on the live traffic of Google News website demonstrated that the hybrid method improves the quality of news recommendation and increases traffic to the site.", "", "Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several millionusers and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News.", "", "Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. 
Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce." ] }
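The contextual-bandit approach described above can be illustrated with a minimal disjoint LinUCB-style sketch: one ridge-regression model per article (arm), an upper-confidence exploration bonus, and an update from click feedback. This is an illustrative approximation, not the authors' implementation; the arm names, feature dimension, and `alpha` exploration weight are assumptions.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB sketch: one ridge-regression model per article (arm)."""

    def __init__(self, arms, dim, alpha=1.0):
        self.alpha = alpha                          # exploration weight (assumed)
        self.A = {a: np.eye(dim) for a in arms}     # per-arm covariance matrix
        self.b = {a: np.zeros(dim) for a in arms}   # per-arm reward vector

    def select(self, x):
        """Serve the arm with the highest upper confidence bound for context x."""
        best, best_p = None, -np.inf
        for a in self.A:
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]               # ridge estimate of arm payoff
            p = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            if p > best_p:
                best, best_p = a, p
        return best

    def update(self, arm, x, reward):
        """Fold the observed click (reward in {0, 1}) back into the chosen arm."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

Over repeated rounds the confidence bonus shrinks for well-explored arms, so the selection strategy adapts toward the arms that actually collect clicks.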
1303.0665
2154908680
The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news has specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation.
We focus on a class of recommender systems based on context trees. Such trees are typically used to estimate variable-order Markov models (VMMs). VMMs were originally applied to lossless data compression, where a long sequence of symbols is represented as a set of contexts and per-context statistics about the symbols are combined into a predictive model @cite_12 . VMMs have many other applications @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_12" ], "mid": [ "2103960658", "2082967074" ], "abstract": [ "This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real life sequences from three domains: proteins, English text and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that a \"decomposed\" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems.", "A universal data compression algorithm is described which is capable of compressing long strings generated by a \"finitely generated\" source, with a near optimum per symbol length without prior knowledge of the source. This class of sources may be viewed as a generalization of Markov sources to random fields. Moreover, the algorithm does not require a working storage much larger than that needed to describe the source generating parameters." ] }
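As a rough illustration of the variable-order idea, the following toy predictor counts next-symbol statistics for every context suffix up to a maximum order and predicts by backing off from the longest context observed in training. It is a crude PPM-style approximation of a context-tree/VMM predictor under assumed simplifications, not the paper's algorithm.

```python
from collections import defaultdict

class VOMM:
    """Toy variable-order Markov predictor over a symbol sequence."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # counts[context][symbol] = how often `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequence):
        for i, sym in enumerate(sequence):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                ctx = tuple(sequence[i - k:i])   # suffix of length k before sym
                self.counts[ctx][sym] += 1

    def predict(self, history):
        # Back off from the longest suffix of the history seen in training.
        for k in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            if ctx in self.counts:
                nxt = self.counts[ctx]
                return max(nxt, key=nxt.get)
        return None
```

A real context-tree system would weight predictions across all matching context depths rather than committing to the longest one, but the backoff version keeps the sketch short.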
1303.0665
2154908680
The proliferation of online news creates a need for filtering interesting articles. Compared to other products, however, recommending news has specific challenges: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader. In this paper, we introduce a class of news recommendation systems based on context trees. They can provide high-quality news recommendations to anonymous visitors based on present browsing behaviour. Using an unbiased testing methodology, we show that they make accurate and novel recommendations, and that they are sufficiently flexible for the challenges of news recommendation.
Closely related, variable-order hidden Markov models @cite_7 , hidden Markov models @cite_2 and Markov models @cite_24 @cite_36 @cite_14 have been extensively studied for the related problem of click prediction. These models suffer from high state complexity. Although techniques exist to reduce this complexity @cite_4 , their main drawback is that multiple models must be maintained, which makes these approaches neither scalable nor suitable for online learning.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_36", "@cite_24", "@cite_2" ], "mid": [ "", "2002038995", "2168781105", "", "1667916464", "2125441033" ], "abstract": [ "", "We present VOGUE, a novel, variable order hidden Markov model with state durations, that combines two separate techniques for modeling complex patterns in sequential data: pattern mining and data modeling. VOGUE relies on a variable gap sequence mining method to extract frequent patterns with different lengths and gaps between elements. It then uses these mined sequences to build a variable order hidden Markov model (HMM), that explicitly models the gaps. The gaps implicitly model the order of the HMM, and they explicitly model the duration of each state. We apply VOGUE to a variety of real sequence data taken from domains such as protein sequence classification, Web usage logs, intrusion detection, and spelling correction. We show that VOGUE has superior classification accuracy compared to regular HMMs, higher-order HMMs, and even special purpose HMMs like HMMER, which is a state-of-the-art method for protein classification. The VOGUE implementation and the datasets used in this article are available as open-source.1", "In this paper, we propose a novel and general approach for time-series data mining. As an alternative to traditional ways of designing specific algorithm to mine certain kind of pattern directly from the data, our approach extracts the temporal structure of the time-series data by learning Markovian models, and then uses well established methods to efficiently mine a wide variety of patterns from the topology graph of the learned models. We consolidate the approach by explaining the use of some well-known Markovian models on mining several kinds of patterns. 
We then present a novel high-order hidden Markov model, the variable-length hidden Markov model (VLHMM), which combines the advantages of well-known Markovian models and has the superiority in both efficiency and accuracy. Therefore, it can mine a much wider variety of patterns than each of prior Markovian models. We demonstrate the power of VLHMM by mining four kinds of interesting patterns from 3D motion capture data, which is typical for the high-dimensionality and complex dynamics.", "", "Modeling and predicting user surfing paths involves tradeoffs between model complexity and predictive accuracy. In this paper we explore predictive modeling techniques that attempt to reduce model complexity while retaining predictive accuracy. We show that compared to various Markov models, longest repeating subsequence models are able to significantly reduce model size while retaining the ability to make accurate predictions. In addition, sharp increases in the overall predictive capabilities of these models are achievable by modest increases to the number of predictions made.", "Clickstream data provide information about the sequence of pages or the path viewed by users as they navigate a website. We show how path information can be categorized and modeled using a dynamic multinomial probit model of Web browsing. We estimate this model using data from a major online bookseller. Our results show that the memory component of the model is crucial in accurately predicting a path. In comparison, traditional multinomial probit and first-order Markov models predict paths poorly. These results suggest that paths may reflect a user's goals, which could be helpful in predicting future movements at a website. One potential application of our model is to predict purchase conversion. We find that after only six viewings purchasers can be predicted with more than 40% accuracy, which is much better than the benchmark 7% purchase conversion prediction rate made without path information. 
This technique could be used to personalize Web designs and product offerings based upon a user's path." ] }
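The simplest baseline in this family, a first-order Markov model of page transitions, can be sketched directly from click sessions. The session data and page names below are illustrative assumptions; the sketch only shows the technique the cited works build on.

```python
from collections import defaultdict

def train_markov(sessions):
    """Estimate P(next page | current page) from a list of click sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            counts[cur][nxt] += 1
    # Normalize transition counts into conditional probabilities.
    model = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        model[cur] = {n: c / total for n, c in nxts.items()}
    return model

def predict_next(model, page):
    """Most probable next page, or None for a page never seen as a source."""
    dist = model.get(page)
    return max(dist, key=dist.get) if dist else None
```

Higher-order and variable-order variants condition on longer path suffixes, which is exactly where the state-complexity problem noted above comes from: the number of possible contexts grows exponentially with the order.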
1303.0166
2101921379
We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real world experiment shows that we can super-resolve beyond the Rayleigh limit.
Barnes @cite_5 proposed a reconstruction procedure for coherent illumination. Under the assumption of bounded support, the convolution operator in the imaging equation can be written so that it decomposes into prolate spheroidal wave functions @cite_14 , which allows the operator to be inverted, similar to division in Fourier space. Rushforth and Harris @cite_6 study the influence of noise on reconstruction methods that aim to overcome the diffraction limit. Their conclusion is that the Rayleigh criterion is an approximate measure of the resolution which can be achieved easily.
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_6" ], "mid": [ "2061231286", "54116340", "2081808367" ], "abstract": [ "This paper presents a formal solution to the problem of object restoration in a one-dimensional, diffraction-limited imaging system. It is found that if the illumination in the object space is confined to a finite region, then the imaging equation can be solved for the object in terms of the image. The solution can be expressed as a series expansion on the eigenfunctions of the imaging operator.", "", "This paper treats the problem of restoring the detail to an optical image which has been degraded by diffraction and noise. The particular contribution of the paper is a more complete analysis of the effects of various types of noise on system performance than has been given previously. Background noise, measurement noise, and computer roundoff error are considered, and the errors in the reconstructed image caused by these noise processes are evaluated. Numerical results for the special case of a perfect one-dimensional slit aperture are obtained. A general conclusion is that the reconstruction technique described here is most useful when the smoothing is severe and when a modest improvement of resolution may be worthwhile." ] }
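The "division in Fourier space" idea, and the noise sensitivity Rushforth and Harris analyze, can be sketched in a few lines of numpy. The regularization constant `eps` is an illustrative knob (a Wiener-style damping, not the cited papers' exact method): it keeps frequencies where the point-spread function's response is small from amplifying noise.

```python
import numpy as np

def deconvolve(blurred, psf, eps=1e-3):
    """Regularized inverse filtering: divide spectra, damping weak PSF frequencies."""
    n = len(blurred)
    H = np.fft.fft(psf, n)     # PSF spectrum, zero-padded to the signal length
    B = np.fft.fft(blurred)
    # Wiener-like denominator keeps |H|^2 + eps bounded away from zero,
    # so noise at frequencies the PSF suppresses is not blown up.
    X = B * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(X))
```

With `eps = 0` this reduces to plain spectral division, which is exact for noiseless data but unstable whenever the PSF spectrum is small: a tiny measurement error at such a frequency gets divided by a near-zero number.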
1303.0166
2101921379
We establish a link between Fourier optics and a recent construction from the machine learning community termed the kernel mean map. Using the Fraunhofer approximation, it identifies the kernel with the squared Fourier transform of the aperture. This allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of Fraunhofer diffraction, showing that imaging processes with arbitrarily small apertures can in principle be invertible, i.e., do not lose information, provided the objects to be imaged satisfy a generic condition. A real world experiment shows that we can super-resolve beyond the Rayleigh limit.
Gerchberg @cite_12 (and independently Papoulis @cite_0 ) proposed an algorithm analogous to Gerchberg and Saxton's phase-retrieval method @cite_15 that also incorporates a positivity constraint. As Jones @cite_24 points out, under certain conditions this algorithm converges only rather slowly.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_24", "@cite_12" ], "mid": [ "2108943734", "1484412996", "2067298775", "1982061533" ], "abstract": [ "If only a segment of a function f (t) is given, then its Fourier spectrum F( ) is estimated either as the transform of the product of f(t) with a time-limited window w(t) , or by certain techniques based on various a priori assumptions. In the following, a new algorithm is proposed for computing the transform of a band-limited function. The algorithm is a simple iteration involving only the fast Fourier transform (FFT). The effect of noise and the error due to aliasing are determined and it is shown that they can be controlled by early termination of the iteration. The proposed method can also be used to extrapolate bandlimited functions.", "An algorithm is presented for the rapid solution of the phase of the complete wave function whose intensity in the diffraction and imaging planes of an imaging system are known. A proof is given showing that a defined error between the estimated function and the correct function must decrease as the algorithm iterates. The problem of uniqueness is discussed and results are presented demonstrating the power of the method.", "The discrete version of the Gerchberg algorithm for iterative restoration of a time-constrained function from only partial knowledge of its spectrum (or vice versa) is analyzed. Although convergence is guaranteed, eigenvalues close to unity inhibit iteration to the limit. Identification of these large eigenvalues, allowing extrapolation to the limit, is described.", "A new view of the problem of continuing a given segment of the spectrum of a finite object is presented. Based on this, the problem is restated in terms of reducing a defined ‘error energy’ which is implicit in the truncated spectrum. A computational procedure, which is readily implemented on general purpose computers, is devised which must reduce this error. 
It is demonstrated that by so doing, resolution well beyond the diffraction limit is attained. The procedure is shown to be very effective against noisy data." ] }
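The Gerchberg–Papoulis iteration alternates between two constraint sets: re-imposing the measured portion of the spectrum in the frequency domain, and re-imposing the known finite support in the signal domain. The numpy sketch below is a minimal one-dimensional version under assumed discretization choices (masks as boolean arrays, circular FFT conventions); it omits the positivity constraint for brevity.

```python
import numpy as np

def gerchberg_papoulis(known_spectrum, band, support, n_iter=300):
    """Band-limited extrapolation by alternating constraint enforcement.

    known_spectrum: full-length FFT array, trusted only where `band` is True.
    band, support:  boolean masks over frequency and signal samples.
    """
    # Start from the band-limited (truncated-spectrum) reconstruction.
    x = np.real(np.fft.ifft(known_spectrum * band))
    for _ in range(n_iter):
        x = x * support                          # enforce finite support
        X = np.fft.fft(x)
        X = np.where(band, known_spectrum, X)    # re-impose measured band
        x = np.real(np.fft.ifft(X))
    return x * support
```

Each pass is non-expansive toward any signal consistent with both constraints, so the error never increases; but as Jones's analysis indicates, components associated with eigenvalues of the band/support operator near unity shrink very slowly, which is why extrapolation-to-the-limit schemes were proposed.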