Columns: aid (string), mid (string), abstract (string), related_work (string), ref_abstract (dict).
1210.6382
2204542369
Mobile multi-robot teams deployed for monitoring or search-and-rescue missions in urban disaster areas can greatly improve the quality of vital data collected on-site. Analysis of such data can identify hazards and save lives. Unfortunately, such real deployments at scale are cost-prohibitive, and robot failures lead to data loss. Moreover, scaled-down deployments do not capture significant levels of interaction and communication complexity. To tackle this problem, we propose novel mobility and failure generation frameworks that allow realistic simulations of mobile robot networks for large-scale disaster scenarios. Furthermore, since data replication techniques can improve the survivability of data collected during the operation, we propose an adaptive, scalable data replication technique that achieves high data survivability with low overhead. Our technique considers the anticipated robot failures and robot heterogeneity to decide how aggressively to replicate data. In addition, it considers survivability priorities, with some data requiring more effort to be saved than others. Using our novel simulation generation frameworks, we compare our adaptive technique with flooding and broadcast-based replication techniques and show that for failure rates of up to 60%, it ensures better data survivability with lower communication costs.
Significant research has been done to provide realistic mobility models for MANETs. Of particular relevance for our work are group mobility models, such as the ones presented in @cite_19 @cite_14 @cite_43 @cite_49 . These can be used to model robot mobility inside the areas defined in this paper, instead of the Random Waypoint Model used here. Also, a model for realistic representation of the movement of civil protection units in a disaster area scenario was presented in @cite_23 . That study focused on a small-scale operation area, such as a collapsed building (hundreds of square meters), with high network density and high reachability between nodes. In contrast, our study considers large operation areas (tens of square kilometers) that include several small working areas (e.g., collapsed buildings) and nodes forming a fairly sparse network. Moreover, node failures during the operation further reduce network density and challenge data survivability.
{ "cite_N": [ "@cite_14", "@cite_19", "@cite_43", "@cite_23", "@cite_49" ], "mid": [ "2053694592", "2145450252", "2147001732", "1972606191", "1921603329" ], "abstract": [ "In this paper, we present a survey of various mobility models in both cellular networks and multi-hop networks. We show that group motion occurs frequently in ad hoc networks, and introduce a novel group mobility model Reference Point Group Mobility (RPGM) to represent the relationship among mobile hosts. RPGM can be readily applied to many existing applications. Moreover, by proper choice of parameters, RPGM can be used to model several mobility models which were previously proposed. One of the main themes of this paper is to investigate the impact of the mobility model on the performance of a specific network protocol or application. To this end, we have applied our RPGM model to two different network protocol scenarios, clustering and routing, and have evaluated network performance under different mobility patterns and for different protocol implementations. As expected, the results indicate that different mobility patterns affect the various protocols in different ways. In particular, the ranking of routing algorithms is influenced by the choice of mobility pattern.", "This paper presents an analysis of the behavior of mobile ad hoc networks when group mobility is involved. We propose four different group mobility models and present a mobility pattern generator, called grcmob that we designed to be used with the ns-2 simulator. Using 2^k factorial analysis we determine the most representative factors for protocol performance. We then evaluate the performance of a dynamic source routing (DSR) based MANET, using both TCP and UDP data traffic. The results are compared with the classical random waypoint mobility model. It is shown that the number of groups parameter is more important than the number of nodes one and that the impact of the area size is almost negligible. 
We make also evident that the mix of inter- and intra-group communication has the strongest impact on the performance. Finally, it is evidenced that the presence of groups forces the network topology to be sparser and therefore the probability of network partitions and node disconnections grows.", "In this paper, we propose a novel group mobility model, called reference region group mobility (RRGM) model, which can be used in the description of group motion behavior as well as individual movement. This model is designed to be applicable to many different scenarios, such as military operation, search and rescue, exhibition hall visiting, building search, etc., where a group may be partitioned into a number of smaller groups and groups may merge whenever necessary. Moreover, by using the density-based approach, our model can control the size of the region to be covered by a group. Our main contribution is on the effectiveness of modeling group mobility scenarios with group partitioning and merging, which are most likely to be found in ad hoc networks", "This paper provides a model that realistically represents the movements in a disaster area scenario. The model is based on an analysis of tactical issues of civil protection. This analysis provides characteristics influencing network performance in public safety communication networks like heterogeneous area-based movement, obstacles, and joining/leaving of nodes. As these characteristics cannot be modeled with existing mobility models, we introduce a new disaster area mobility model. To examine the impact of our more realistic modeling, we compare it to existing ones (modeling the same scenario) using different pure movement and link-based metrics. The new model shows specific characteristics like heterogeneous node density. Finally, the impact of the new model is evaluated in an exemplary simulative network performance analysis. 
The simulations show that the new model discloses new information and has a significant impact on performance analysis.", "In wireless ad-hoc networks, network partitioning occurs when the mobile nodes move with diverse patterns and cause the network to separate into completely disconnected portions. Network partitioning is a wide-scale topology change that can cause sudden and severe disruptions to ongoing network routing and upper layer applications. Its occurrence can be attributed to the aggregate group motion exhibited in the movements of the mobile nodes. By exploiting the group mobility pattern, we can predict the future network partitioning, and thus minimize the amount of disruption. We propose a new characterization of group mobility, based on existing group mobility models, which provides parameters that are sufficient for network partition prediction. We then demonstrate how partition prediction can be made using the mobility model parameters and illustrate the applicability of the prediction information. Furthermore, we use a simple but effective data clustering algorithm that, given the velocities of the mobile nodes in an ad-hoc network, can accurately determine the mobility groups and estimate the characteristic parameters of each group." ] }
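The abstract above describes the adaptive replication policy only qualitatively (more replicas for higher anticipated failure rates and higher-priority data). As a purely illustrative sketch, not the paper's actual algorithm, one way such a policy could be realized is below; the function name, the survivability target `1 + priority`, and the assumption of independent replica failures are all ours:

```python
import math

def replication_factor(failure_prob, priority, n_neighbors, max_replicas=8):
    """Hypothetical adaptive replication policy (illustrative only).

    Chooses how many replicas of a data item to push to neighboring robots,
    assuming each replica is lost independently with probability failure_prob
    and that higher-priority data should have more expected survivors.
    """
    # Survivability target: keep at least (1 + priority) replicas expected
    # to survive, so priority-0 data aims for one survivor, priority-2 for three.
    target_survivors = 1 + priority
    if failure_prob >= 1.0:
        # Every replica is expected to fail; replicate as widely as allowed.
        return max(1, min(max_replicas, n_neighbors))
    # Expected survivors of r replicas is r * (1 - failure_prob); solve for r.
    r = math.ceil(target_survivors / (1.0 - failure_prob))
    # Cap by the replication budget and the robots actually in range.
    return max(1, min(r, max_replicas, n_neighbors))
```

For example, at a 60% anticipated failure rate with priority 1 and ten reachable neighbors, this heuristic replicates to five robots; the cap on `max_replicas` is what keeps the policy cheaper than flooding.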
1210.5984
2405968456
The extraction of multi-attribute objects from the deep web is the bridge between the unstructured web and structured data. Existing approaches either induce wrappers from a set of human-annotated pages or leverage repeated structures on the page without supervision. What the former lack in automation, the latter lack in accuracy. Thus accurate, automatic multi-attribute object extraction has remained an open challenge. AMBER overcomes both limitations through mutual supervision between the repeated structure and automatically produced annotations. Previous approaches based on automatic annotations have suffered from low quality due to the inherent noise in the annotations and have attempted to compensate by exploring multiple candidate wrappers. In contrast, AMBER compensates for this noise by integrating repeated structure analysis with annotation-based induction: The repeated structure limits the search space for wrapper induction, and conversely, annotations allow the repeated structure analysis to distinguish noise from relevant data. Both, low recall and low precision in the annotations are mitigated to achieve almost human quality (more than 98 percent) multi-attribute object extraction. To achieve this accuracy, AMBER needs to be trained once for an entire domain. AMBER bootstraps its training from a small, possibly noisy set of attribute instances and a few unannotated sites of the domain.
The key assumption in web data extraction is that a large fraction of the data on the web is structured @cite_29 by HTML markup and visual styling, especially when web pages are automatically generated and populated from templates and underlying information systems. This sets web data extraction apart from information extraction where entities, relations, and other information are extracted from free text (possibly from web pages).
{ "cite_N": [ "@cite_29" ], "mid": [ "2044515729" ], "abstract": [ "Google's Web Tables and Deep Web Crawler identify and deliver this otherwise inaccessible resource directly to end users." ] }
Early web data extraction approaches address data extraction via manual wrapper development @cite_17 or through visual, semi-automated tools @cite_37 @cite_3 (still commonly used in industry). Modern web data extraction approaches, on the other hand, overwhelmingly fall into one of two categories (for recent surveys, see @cite_5 @cite_24 ): supervised wrapper induction @cite_38 @cite_10 @cite_22 @cite_51 @cite_12 @cite_45 @cite_26 @cite_2 starts from a number of manually annotated examples, i.e., pages where the objects and attributes to be extracted are marked by a human, and automatically produces a wrapper program which extracts the corresponding content from previously unseen pages. Unsupervised, fully automatic extraction @cite_4 @cite_21 @cite_14 @cite_41 @cite_11 @cite_46 @cite_30 instead learns repeated structures on the page without supervision, as they usually indicate the presence of content to be extracted.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_41", "@cite_3", "@cite_2", "@cite_5", "@cite_10", "@cite_38", "@cite_4", "@cite_21", "@cite_46", "@cite_17", "@cite_37", "@cite_26", "@cite_12", "@cite_14", "@cite_24", "@cite_45", "@cite_51", "@cite_11" ], "mid": [ "", "", "", "2072936489", "", "2134150392", "", "2002956097", "2085016361", "", "", "2136072238", "2148210463", "", "", "", "2005646337", "", "", "" ], "abstract": [ "", "", "", "In this paper we present DEByE(Data Extraction By Example), an approach to extracting data from Web sources, based on a small set of examples specified by the user. The novelty is in the fact that the user specifies examples according to a structure of his liking and that this structure is described at example specification time. For the specification of the examples, the user interacts with a tool we developed which adopts nested tables as its visual paradigm. Nested tables are simple, intuitive, and allow shielding the user from technical details (such as HTML tags, formatting operators, and learning automata) related to the extraction problem. The examples provided by the user are then used to generate patterns which allow extracting data from new documents. For the extraction, DEByE adopts a new bottom-up procedure we proposed which is very effective with various Web sources, as demonstrated by our experiments.", "", "The Internet presents a huge amount of useful information which is usually formatted for its users, which makes it difficult to extract relevant data from various sources. Therefore, the availability of robust, flexible information extraction (IE) systems that transform the Web pages into program-friendly structures such as a relational database will become a great necessity. Although many approaches for data extraction from Web pages have been developed, there has been limited effort to compare such tools. 
Unfortunately, in only a few cases can the results generated by distinct tools be directly compared since the addressed extraction tasks are different. This paper surveys the major Web data extraction approaches and compares them in three dimensions: the task domain, the automation degree, and the techniques used. The criteria of the first dimension explain why an IE system fails to handle some Web sites of particular structures. The criteria of the second dimension classify IE systems based on the techniques used. The criteria of the third dimension measure the degree of automation for IE systems. We believe these criteria provide qualitatively measures to evaluate various IE approaches", "", "On script-generated web sites, many documents share common HTML tree structure, allowing wrappers to effectively extract information of interest. Of course, the scripts and thus the tree structure evolve over time, causing wrappers to break repeatedly, and resulting in a high cost of maintaining wrappers. In this paper, we explore a novel approach: we use temporal snapshots of web pages to develop a tree-edit model of HTML, and use this model to improve wrapper construction. We view the changes to the tree structure as suppositions of a series of edit operations: deleting nodes, inserting nodes and substituting labels of nodes. The tree structures evolve by choosing these edit operations stochastically. Our model is attractive in that the probability that a source tree has evolved into a target tree can be estimated efficiently--in quadratic time in the size of the trees--making it a potentially useful tool for a variety of tree-evolution problems. We give an algorithm to learn the probabilistic model from training examples consisting of pairs of trees, and apply this algorithm to collections of web-page snapshots to derive HTML-specific tree edit models. 
Finally, we describe a novel wrapper-construction framework that takes the tree-edit model into account, and compare the quality of resulting wrappers to that of traditional wrappers on synthetic and real HTML document examples.", "Data extraction from HTML pages is performed by software modules, usually called wrappers. Roughly speaking, a wrapper identifies and extracts relevant pieces of text inside a web page, and reorganizes them in a more structured format. In the literature there is a number of systems to (semi-)automatically generate wrappers for HTML pages [1]. We have recently investigated for original approaches that aims at pushing further the level of automation of the wrapper generation process. Our main intuition is that, in a dataintensive web site, pages can be classified in a small number of classes, such that pages belonging to the same class share a rather tight structure. Based on this observation, we have studied an novel technique, we call the matching technique [2], that automatically generates a common wrapper by exploiting similarities and differences among pages of the same class. In addition, in order to deal with the complexity and the heterogeneities of real-life web sites, we have also studied several complementary techniques that greatly enhance the effectiveness of matching. Our demonstration presents RoadRunner, our prototype that implements matching and its companion techniques. We have conducted several experiments on pages from real life web sites; these experiences have shown the effectiveness of the approach, as well as the efficiency of the system [2]. The matching technique for wrapper inference [2] is based on an iterative process; at every step, matching works on two objects at a time: (i) an input page, which represented as a list of tokens (each token is either a tag or a text field), and (ii) a wrapper, expressed as a regular expression. 
The process starts by taking one input page as an initial version of the wrapper; then, the wrapper is matched against the sample and it is progressively refined trying to solve mismatches: a mismatch happens when some token in the sample does not comply to the grammar specified by the wrapper. Mismatches can be solved by generalizing the wrapper. The process succeeds if a common wrapper can be generated by solving all mismatches encountered.", "", "", "In this paper we discuss the management of semi-structured data, i.e., data that has irregular or dynamically changing structure. We describe components of the Stanford TSIMMIS Project that help extract semi-structured data from Web pages, that allow the storage and querying of semi-structured data, and that allow its browsing through the World Wide Web. A prototype implementation of the TSIMMIS system as described here is currently installed and running in the database group testbed.", "We present new techniques for supervised wrapper generation and automated web information extraction, and a system called Lixto implementing these techniques. Our system can generate wrappers which translate relevant pieces of HTML pages into XML. Lixto, of which a working prototype has been implemented, assists the user to semi-automatically create wrapper programs by providing a fully visual and interactive user interface. In this convenient user-interface very expressive extraction programs can be created. Internally, this functionality is reflected by the new logic-based declarative language Elog. Users never have to deal with Elog and even familiarity with HTML is not required. Lixto can be used to create an \"XML-Companion\" for an HTML web page with changing content, containing the continually updated XML translation of the relevant information.", "", "", "", "In the last few years, several works in the literature have addressed the problem of data extraction from Web pages. 
The importance of this problem derives from the fact that, once extracted, the data can be handled in a way similar to instances of a traditional database. The approaches proposed in the literature to address the problem of Web data extraction use techniques borrowed from areas such as natural language processing, languages and grammars, machine learning, information retrieval, databases, and ontologies. As a consequence, they present very distinct features and capabilities which make a direct comparison difficult to be done. In this paper, we propose a taxonomy for characterizing Web data extraction tools, briefly survey major Web data extraction tools described in the literature, and provide a qualitative analysis of them. Hopefully, this work will stimulate other studies aimed at a more comprehensive analysis of data extraction approaches and tools for Web data.", "", "", "" ] }
By itself, wrapper induction is incapable of scaling to the web. Because template structures vary widely across web sites, it is practically impossible to annotate a sufficiently large page set to cover all relevant combinations of features indicating the presence of structured data. More formally, the sample complexity of web-scale supervised wrapper induction is too high in all but some restricted cases, e.g., @cite_19 , which extracts news titles and bodies. Furthermore, traditional wrapper inducers are very sensitive to incompleteness and noise in the annotations, thus requiring considerable human effort to create low-noise, complete annotations.
{ "cite_N": [ "@cite_19" ], "mid": [ "2051141368" ], "abstract": [ "Automatic news extraction from news pages is important in many Web applications such as news aggregation. However, the existing news extraction methods based on template-level wrapper induction have three serious limitations. First, the existing methods cannot correctly extract pages belonging to an unseen template. Second, it is costly to maintain up-to-date wrappers for a large amount of news websites, because any change of a template may invalidate the corresponding wrapper. Last, the existing methods can merely extract unformatted plain texts, and thus are not user friendly. In this paper, we tackle the problem of template-independent Web news extraction in a user-friendly way. We formalize Web news extraction as a machine learning problem and learn a template-independent wrapper using a very small number of labeled news pages from a single site. Novel features dedicated to news titles and bodies are developed. Correlations between news titles and news bodies are exploited. Our template-independent wrapper can extract news pages from different sites regardless of templates. Moreover, our approach can extract not only texts, but also images and animates within the news bodies and the extracted news articles are in the same visual style as in the original pages. In our experiments, a wrapper learned from 40 pages from a single news site achieved an accuracy of 98.1 on 3,973 news pages from 12 news sites." ] }
A complementary line of work deals with specifically stylized structures, such as tables @cite_15 @cite_33 and lists @cite_47 . The more clearly defined characteristics of these structures enable domain-independent algorithms that achieve fairly high precision in distinguishing genuine structures carrying relevant data from structures created only for layout purposes. They are particularly attractive in settings such as web search that optimise for coverage over all sites rather than recall from a particular site.
{ "cite_N": [ "@cite_15", "@cite_47", "@cite_33" ], "mid": [ "2108223890", "2135767707", "2102189859" ], "abstract": [ "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google's general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own \"schema\" of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WEBTABLES system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on co-occurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links.", "A large number of web pages contain data structured in the form of \"lists\". Many such lists can be further split into multi-column tables, which can then be used in more semantically meaningful tasks. 
However, harvesting relational tables from such lists can be a challenging task. The lists are manually generated and hence need not have well defined templates -- they have inconsistent delimiters (if any) and often have missing information. We propose a novel technique for extracting tables from lists. The technique is domain-independent and operates in a fully unsupervised manner. We first use multiple sources of information to split individual lines into multiple fields, and then compare the splits across multiple lines to identify and fix incorrect splits and bad alignments. In particular, we exploit a corpus of HTML tables, also extracted from the Web, to identify likely fields and good alignments. For each extracted table, we compute an extraction score that reflects our confidence in the table's quality. We conducted an extensive experimental study using both real web lists and lists derived from tables on the Web. The experiments demonstrate the ability of our technique to extract tables with high accuracy. In addition, we applied our technique on a large sample of about 100,000 lists crawled from the Web. The analysis of the extracted tables have led us to believe that there are likely to be tens of millions of useful and query-able relational tables extractable from lists on the Web.", "Traditionally, information extraction from web tables has focused on small, more or less homogeneous corpora, often based on assumptions about the use of tags. A multitude of different HTML implementations of web tables make these approaches difficult to scale. In this paper, we approach the problem of domain-independent information extraction from web tables by shifting our attention from the tree-based representation of webpages to a variation of the two-dimensional visual box model used by web browsers to display the information on the screen. 
The thereby obtained topological and style information allows us to fill the gap created by missing domain-specific knowledge about content and table templates. We believe that, in a future step, this approach can become the basis for a new way of large-scale knowledge acquisition from the current \"Visual Web\"." ] }
Instead of limiting the structure types to be recognized, one can exploit domain knowledge to train more specific models. Approaches such as @cite_19 @cite_39 exploit domain-specific properties for record detection and attribute labeling. However, besides the difficulty of choosing the features to be considered in the learning algorithm for each domain, changing the domain usually requires at least partially retraining the models, if not an algorithmic redesign.
{ "cite_N": [ "@cite_19", "@cite_39" ], "mid": [ "2051141368", "1973483159" ], "abstract": [ "Automatic news extraction from news pages is important in many Web applications such as news aggregation. However, the existing news extraction methods based on template-level wrapper induction have three serious limitations. First, the existing methods cannot correctly extract pages belonging to an unseen template. Second, it is costly to maintain up-to-date wrappers for a large amount of news websites, because any change of a template may invalidate the corresponding wrapper. Last, the existing methods can merely extract unformatted plain texts, and thus are not user friendly. In this paper, we tackle the problem of template-independent Web news extraction in a user-friendly way. We formalize Web news extraction as a machine learning problem and learn a template-independent wrapper using a very small number of labeled news pages from a single site. Novel features dedicated to news titles and bodies are developed. Correlations between news titles and news bodies are exploited. Our template-independent wrapper can extract news pages from different sites regardless of templates. Moreover, our approach can extract not only texts, but also images and animates within the news bodies and the extracted news articles are in the same visual style as in the original pages. In our experiments, a wrapper learned from 40 pages from a single news site achieved an accuracy of 98.1 on 3,973 news pages from 12 news sites.", "Recent work has shown the feasibility and promise of template-independent Web data extraction. However, existing approaches use decoupled strategies - attempting to do data record detection and attribute labeling in two separate phases. In this paper, we show that separately extracting data records and attributes is highly ineffective and propose a probabilistic model to perform these two tasks simultaneously. 
In our approach, record detection can benefit from the availability of semantics required in attribute labeling and, at the same time, the accuracy of attribute labeling can be improved when data records are labeled in a collective manner. The proposed model is called Hierarchical Conditional Random Fields. It can efficiently integrate all useful features by learning their importance, and it can also incorporate hierarchical interactions which are very important for Web data extraction. We empirically compare the proposed model with existing decoupled approaches for product information extraction, and the results show significant improvements in both record detection and attribute labeling." ] }
1210.5984
2405968456
The extraction of multi-attribute objects from the deep web is the bridge between the unstructured web and structured data. Existing approaches either induce wrappers from a set of human-annotated pages or leverage repeated structures on the page without supervision. What the former lack in automation, the latter lack in accuracy. Thus accurate, automatic multi-attribute object extraction has remained an open challenge. AMBER overcomes both limitations through mutual supervision between the repeated structure and automatically produced annotations. Previous approaches based on automatic annotations have suffered from low quality due to the inherent noise in the annotations and have attempted to compensate by exploring multiple candidate wrappers. In contrast, AMBER compensates for this noise by integrating repeated structure analysis with annotation-based induction: The repeated structure limits the search space for wrapper induction, and conversely, annotations allow the repeated structure analysis to distinguish noise from relevant data. Both, low recall and low precision in the annotations are mitigated to achieve almost human quality (more than 98 percent) multi-attribute object extraction. To achieve this accuracy, AMBER needs to be trained once for an entire domain. AMBER bootstraps its training from a small, possibly noisy set of attribute instances and a few unannotated sites of the domain.
Besides AMBER, we are only aware of three other approaches @cite_28 @cite_0 @cite_34 that exploit the mutual benefit of unsupervised extraction and induction from automatic annotations. All these approaches are a form of self-supervised learning, a concept well known in the machine learning community that has already been successfully applied in the information extraction setting @cite_9 .
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_34", "@cite_0" ], "mid": [ "2080132606", "2071305045", "2079594573", "2111278149" ], "abstract": [ "We present a generic framework to make wrapper induction algorithms tolerant to noise in the training data. This enables us to learn wrappers in a completely unsupervised manner from automatically and cheaply obtained noisy training data, e.g., using dictionaries and regular expressions. By removing the site-level supervision that wrapper-based techniques require, we are able to perform information extraction at web-scale, with accuracy unattained with existing unsupervised extraction techniques. Our system is used in production at Yahoo! and powers live applications.", "Web extraction systems attempt to use the immense amount of unlabeled text in the Web in order to create large lists of entities and relations. Unlike traditional Information Extraction methods, the Web extraction systems do not label every mention of the target entity or relation, instead focusing on extracting as many different instances as possible while keeping the precision of the resulting list reasonably high. SRES is a self-supervised Web relation extraction system that learns powerful extraction patterns from unlabeled text, using short descriptions of the target relations and their attributes. SRES automatically generates the training data needed for its pattern-learning component. The performance of SRES is further enhanced by classifying its output instances using the properties of the instances and the patterns. The features we use for classification and the trained classification model are independent from the target relation, which we demonstrate in a series of experiments. 
We also compare the performance of SRES to the performance of the state-of-the-art KnowItAll system, and to the performance of its pattern learning component, which learns simpler pattern language than SRES.", "We present an original approach to the automatic induction of wrappers for sources of the hidden Web that does not need any human supervision. Our approach only needs domain knowledge expressed as a set of concept names and concept instances. There are two parts in extracting valuable data from hidden-Web sources: understanding the structure of a given HTML form and relating its fields to concepts of the domain, and understanding how resulting records are represented in an HTML result page. For the former problem, we use a combination of heuristics and of probing with domain instances; for the latter, we use a supervised machine learning technique adapted to tree-like information on an automatic, imperfect, and imprecise, annotation using the domain knowledge. We show experiments that demonstrate the validity and potential of the approach.", "We present in this paper a novel approach for extracting structured data from the Web, whose goal is to harvest real-world items from template-based HTML pages (the structured Web). It illustrates a two-phase querying of the Web, in which an intentional description of the data that is targeted is first provided, in a flexible and widely applicable manner. The extraction process leverages then both the input description and the source structure. Our approach is domain-independent, in the sense that it applies to any relation, either flat or nested, describing real-world items. Extensive experiments on five different domains and comparison with the main state of the art extraction systems from literature illustrate its flexibility and precision. 
We advocate via our technique that automatic extraction and integration of complex structured data can be done fast and effectively, when the redundancy of the Web meets knowledge over the to-be-extracted data." ] }
1210.5984
2405968456
The extraction of multi-attribute objects from the deep web is the bridge between the unstructured web and structured data. Existing approaches either induce wrappers from a set of human-annotated pages or leverage repeated structures on the page without supervision. What the former lack in automation, the latter lack in accuracy. Thus accurate, automatic multi-attribute object extraction has remained an open challenge. AMBER overcomes both limitations through mutual supervision between the repeated structure and automatically produced annotations. Previous approaches based on automatic annotations have suffered from low quality due to the inherent noise in the annotations and have attempted to compensate by exploring multiple candidate wrappers. In contrast, AMBER compensates for this noise by integrating repeated structure analysis with annotation-based induction: The repeated structure limits the search space for wrapper induction, and conversely, annotations allow the repeated structure analysis to distinguish noise from relevant data. Both, low recall and low precision in the annotations are mitigated to achieve almost human quality (more than 98 percent) multi-attribute object extraction. To achieve this accuracy, AMBER needs to be trained once for an entire domain. AMBER bootstraps its training from a small, possibly noisy set of attribute instances and a few unannotated sites of the domain.
In @cite_34 , web pages are independently annotated using background knowledge from the domain and analyzed for repeated structures with conditional random fields (CRFs). The analysis of repeated structures identifies the record structure by searching for evenly distributed annotations that validate (and eventually repair) the learned structure. Conceptually, @cite_34 differs from AMBER as it initially infers a repeating page structure with the CRFs independently of the annotations. AMBER, in contrast, analyses only those portions of the page that are more likely to contain useful and regular data. Focusing the analysis of repeated structures on smaller areas is critical for learning an accurate wrapper, since complex pages might contain several regular structures that are not relevant for the extraction task at hand. This is also evident from the reported accuracy of the method proposed in @cite_34 , which ranges between @math and @math , whereas AMBER is able to extract over 95 percent of the attributes without any error.
{ "cite_N": [ "@cite_34" ], "mid": [ "2079594573" ], "abstract": [ "We present an original approach to the automatic induction of wrappers for sources of the hidden Web that does not need any human supervision. Our approach only needs domain knowledge expressed as a set of concept names and concept instances. There are two parts in extracting valuable data from hidden-Web sources: understanding the structure of a given HTML form and relating its fields to concepts of the domain, and understanding how resulting records are represented in an HTML result page. For the former problem, we use a combination of heuristics and of probing with domain instances; for the latter, we use a supervised machine learning technique adapted to tree-like information on an automatic, imperfect, and imprecise, annotation using the domain knowledge. We show experiments that demonstrate the validity and potential of the approach." ] }
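The mutual-supervision idea discussed above, where annotations validate the repeated structure and the structure in turn filters annotation noise, can be illustrated with a deliberately simplified support-count filter: an annotated field offset is trusted only if the same offset is annotated across enough of the repeated records. Everything below (function name, offset representation, threshold) is an illustrative assumption, not AMBER's actual algorithm.

```python
from collections import Counter

def denoise_annotations(records, annotations, min_support=0.5):
    """Keep an annotated offset only if that offset is annotated in at
    least a `min_support` fraction of the repeated records."""
    support = Counter(offset for _, offset in annotations)
    threshold = min_support * len(records)
    kept = {off for off, count in support.items() if count >= threshold}
    return [(rec, off) for rec, off in annotations if off in kept]

# Four repeated records; offset 1 is annotated in every record (a real
# attribute), offset 5 is annotated only once (annotation noise).
records = [0, 1, 2, 3]
noisy = [(0, 1), (1, 1), (2, 1), (3, 1), (2, 5)]
clean = denoise_annotations(records, noisy)
```

With the default threshold, the lone annotation at offset 5 is discarded while the consistently repeated offset 1 survives, mirroring how regularity in the page structure can compensate for low-precision annotations.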
1210.6052
2951743068
Social applications mine user social graphs to improve performance in search, provide recommendations, allow resource sharing and increase data privacy. When such applications are implemented on a peer-to-peer (P2P) architecture, the social graph is distributed on the P2P system: the traversal of the social graph translates into a socially-informed routing in the peer-to-peer layer. In this work we introduce the model of a projection graph that is the result of decentralizing a social graph onto a peer-to-peer network. We focus on three social network metrics: degree, node betweenness and edge betweenness centrality and analytically formulate the relation between metrics in the social graph and in the projection graph. Through experimental evaluation on real networks, we demonstrate that when mapping user communities of sizes up to 50-150 users on each peer, the association between the properties of the social graph and the projection graph is high, and thus the properties of the (dynamic) projection graph can be inferred from the properties of the (slower changing) social graph. Furthermore, we demonstrate with two application scenarios on large-scale social networks the usability of the projection graph in designing social search applications and unstructured P2P overlays.
The management of social data in a P2P architecture has been addressed in systems such as PeerSoN @cite_1 , Vis-à-Vis @cite_13 , Safebook @cite_31 , LifeSocial.KOM @cite_0 and Prometheus @cite_43 . In some cases (PeerSoN, Vis-à-Vis, Safebook, LifeSocial.KOM), the information of each user is isolated from that of other users, and peers access it individually. Thus, the @math is fragmented into 1-hop neighborhoods, one for each user, and distributed across all peers, with potentially multiple fragments stored on the same peer. In contrast, in Prometheus @cite_43 , a peer can mine the collection of social data entrusted to it by a group of (possibly socially connected) users. In all these systems, regardless of the way peers are organized in the P2P architecture (e.g., in a structured or unstructured overlay), the @math model can be applied for studying and improving system and application routing.
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_43", "@cite_31", "@cite_13" ], "mid": [ "1984599464", "2141937798", "1798429731", "2127437778", "2080741759" ], "abstract": [ "To address privacy concerns over Online Social Networks (OSNs), we propose a distributed, peer-to-peer approach coupled with encryption. Moreover, extending this distributed approach by direct data exchange between user devices removes the strict Internet-connectivity requirements of web-based OSNs. In order to verify the feasibility of this approach, we designed a two-tiered architecture and protocols that recreate the core features of OSNs in a decentralized way. This paper focuses on the description of the prototype built for the P2P infrastructure for social networks, as a first step without the encryption part, and shares early experiences from the prototype and insights gained since first outlining the challenges and possibilities of decentralized alternatives to OSNs.", "The phenomenon of online social networks reaches millions of users in the Internet nowadays. In these, users present themselves, their interests and their social links which they use to interact with other users. We present in this paper LifeSocial.KOM, a p2p-based platform for secure online social networks which provides the functionality of common online social networks in a totally distributed and secure manner. It is plugin-based, thus extendible in its functionality, providing secure communication and access-controlled storage as well as monitored quality of service, addressing the needs of both, users and system providers. The platform operates solely on the resources of the users, eliminating the concentration of crucial operational costs for one provider. 
In a testbed evaluation, we show the feasibility of the approach and point out the potential of the p2p paradigm in the field of online social networks.", "Recent Internet applications, such as online social networks and user-generated content sharing, produce an unprecedented amount of social information, which is further augmented by location or collocation data collected from mobile phones. Unfortunately, this wealth of social information is fragmented across many different proprietary applications. Combined, it could provide a more accurate representation of the social world, and it could enable a whole new set of socially-aware applications. We introduce Prometheus, a peer-to-peer service that collects and manages social information from multiple sources and implements a set of social inference functions while enforcing user-defined access control policies. Prometheus is socially-aware: it allows users to select peers that manage their social information based on social trust and exploits naturally-formed social groups for improved performance. We tested our Prometheus prototype on PlanetLab and built a mobile social application to test the performance of its social inference functions under realtime constraints. We showed that the social-based mapping of users onto peers improves the service response time and high service availability is achieved with low overhead.", "Online social network applications severely suffer from various security and privacy exposures. This article suggests a new approach to tackle these security and privacy problems with a special emphasis on the privacy of users with respect to the application provider in addition to defense against intruders or malicious users. 
In order to ensure users' privacy in the face of potential privacy violations by the provider, the suggested approach adopts a decentralized architecture relying on cooperation among a number of independent parties that are also the users of the online social network application. The second strong point of the suggested approach is to capitalize on the trust relationships that are part of social networks in real life in order to cope with the problem of building trusted and privacy- preserving mechanisms as part of the online application. The combination of these design principles is Safebook, a decentralized and privacy- preserving online social network application. Based on the two design principles, decentralization and exploiting real-life trust, various mechanisms for privacy and security are integrated into Safebook in order to provide data storage and data management functions that preserve users' privacy, data integrity, and availability. Preliminary evaluations of Safebook show that a realistic compromise between privacy and performance is feasible.", "Online Social Networks (OSNs) have become enormously popular. However, two aspects of many current OSNs have important implications with regards to privacy: their centralized nature and their acquisition of rights to users' data. Recent work has proposed decentralized OSNs as more privacy-preserving alternatives to the prevailing OSN model. We present three schemes for decentralized OSNs. In all three, each user stores his own personal data in his own machine, which we term a Virtual Individual Server (VIS). VISs self-organize into peer-to-peer overlay networks, one overlay per social group with which the VIS owner wishes to share information. 
The schemes differ in where VISs and data reside: (a) on a virtualized utility computing infrastructure in the cloud, (b) on desktop machines augmented with socially-informed data replication, and (c) on desktop machines during normal operation, with failover to a standby virtual machine in the cloud when the primary VIS becomes unavailable. We focus on tradeoffs between these schemes in the areas of privacy, cost, and availability." ] }
1210.6052
2951743068
Social applications mine user social graphs to improve performance in search, provide recommendations, allow resource sharing and increase data privacy. When such applications are implemented on a peer-to-peer (P2P) architecture, the social graph is distributed on the P2P system: the traversal of the social graph translates into a socially-informed routing in the peer-to-peer layer. In this work we introduce the model of a projection graph that is the result of decentralizing a social graph onto a peer-to-peer network. We focus on three social network metrics: degree, node betweenness and edge betweenness centrality and analytically formulate the relation between metrics in the social graph and in the projection graph. Through experimental evaluation on real networks, we demonstrate that when mapping user communities of sizes up to 50-150 users on each peer, the association between the properties of the social graph and the projection graph is high, and thus the properties of the (dynamic) projection graph can be inferred from the properties of the (slower changing) social graph. Furthermore, we demonstrate with two application scenarios on large-scale social networks the usability of the projection graph in designing social search applications and unstructured P2P overlays.
In other studies, such as @cite_3 , peers are organized into social P2P networks based on the similar preferences, interests or knowledge of their users, to improve search by utilizing peers that are trusted or relevant to the search. Similarly, in @cite_6 a social-based overlay for unstructured P2P networks is outlined that enables peers to find and establish ties with other peers whose owners have a common interest in specific types of content, thus improving search and reducing overlay construction overhead. In @cite_28 , P2P social networks self-organize based on the concept of distributed neuron-like agents and search stimulus between peers, to facilitate improved resource sharing and search. In such systems, the peers form edges over similar preferences of their owners or search requests (i.e., @math edges). Thus, they implicitly use the @math model to organize peers into a P2P social network.
{ "cite_N": [ "@cite_28", "@cite_6", "@cite_3" ], "mid": [ "2168839150", "2158881236", "" ], "abstract": [ "Peer-to-peer (P2P) systems provide a new solution to distributed information and resource sharing because of its outstanding properties in decentralization, dynamics, flexibility, autonomy, and cooperation, summarized as DDFAC in this paper. After a detailed analysis of the current P2P literature, this paper suggests to better exploit peer social relationships and peer autonomy to achieve efficient P2P structure design. Accordingly, this paper proposes Self-organizing peer-to-peer social networks (SoPPSoNs) to self-organize distributed peers in a decentralized way, in which neuron-like agents following extended Hebbian rules found in the brain activity represent peers to discover useful peer connections. The self-organized networks capture social associations of peers in resource sharing, and hence are called P2P social networks. SoPPSoNs have improved search speed and success rate as peer social networks are correctly formed. This has been verified through tests on real data collected from the Gnutella system. Analysis on the Gnutella data has verified that social associations of peers in reality are directed, asymmetric and weighted, validating the design of SoPPSoN. The tests presented in this paper have also evaluated the scalability of SoPPSoN, its performance under varied initial network connectivity and the effects of different learning rules.", "The widespread use of peer-to-peer (P2P) systems has made multimedia content sharing more efficient. Users in a P2P network can query and download objects based on their preference for specific types of multimedia content. However, most P2P systems only construct the overlay architecture according to physical network constraints and do not take user preferences into account. In this paper, we investigate a social-based overlay that can cluster peers that have similar preferences. 
To construct a semantic social-based overlay, we model a quantifiable measure of similarity between peers so that those with a higher degree of similarity can be connected by shorter paths. Hence, peers can locate objects of interest from their overlay neighbors, i.e., peers who have common interests. In addition, we propose an overlay adaptation algorithm that allows the overlay to adapt to P2P churn and preference changes in a distributed manner. We use simulations and a real database called Audioscrobbler, which tracks users' listening habits, to evaluate the proposed social-based overlay. The results show that social-based overlay adaptation enables users to locate content of interest with a higher success ratio and with less message overhead.", "" ] }
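The preference-similarity clustering described above could, for instance, rank candidate overlay neighbors by the cosine similarity of their owners' content-preference vectors. The sketch below is only illustrative: the peer names, preference weights, and the choice of cosine similarity are assumptions, not the measure used by the cited systems.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse preference vectors (dicts)."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_neighbors(peer, candidates, prefs, k=2):
    """Connect a peer to the k candidates with the most similar owners
    (an illustrative overlay-adaptation step)."""
    ranked = sorted(candidates,
                    key=lambda c: cosine(prefs[peer], prefs[c]),
                    reverse=True)
    return ranked[:k]

# Hypothetical content preferences of four peers' owners.
prefs = {
    "p1": {"rock": 1.0, "jazz": 0.5},
    "p2": {"rock": 0.9, "jazz": 0.4},
    "p3": {"classical": 1.0},
    "p4": {"jazz": 1.0},
}
neighbors = pick_neighbors("p1", ["p2", "p3", "p4"], prefs)
```

Here "p1" ends up linked to the peers whose owners share its tastes, so content of interest can be located over shorter overlay paths, which is the intuition behind the social-based overlays above.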
1210.6052
2951743068
Social applications mine user social graphs to improve performance in search, provide recommendations, allow resource sharing and increase data privacy. When such applications are implemented on a peer-to-peer (P2P) architecture, the social graph is distributed on the P2P system: the traversal of the social graph translates into a socially-informed routing in the peer-to-peer layer. In this work we introduce the model of a projection graph that is the result of decentralizing a social graph onto a peer-to-peer network. We focus on three social network metrics: degree, node betweenness and edge betweenness centrality and analytically formulate the relation between metrics in the social graph and in the projection graph. Through experimental evaluation on real networks, we demonstrate that when mapping user communities of sizes up to 50-150 users on each peer, the association between the properties of the social graph and the projection graph is high, and thus the properties of the (dynamic) projection graph can be inferred from the properties of the (slower changing) social graph. Furthermore, we demonstrate with two application scenarios on large-scale social networks the usability of the projection graph in designing social search applications and unstructured P2P overlays.
Relevant to our work is the notion of the group-reduced graph @cite_19 , where a group of users is replaced by a single "super" vertex (similar to the peer in the @math model). However, in the @math model: 1) all users must be mapped to groups (peers), while the group-reduced graph has both a super vertex and regular users as nodes; 2) a peer is consequently connected only to other peers (and not users); and 3) PG edges are weighted, while there is no concept of edge weight in the group-reduced model. Moreover, the authors of the group-reduced graph model express a reservation about the applicability of their model: from a sociological point of view, it is difficult to justify the removal of @math edges between users within a social group. The @math , however, materializes in the technical space, and thus the relationships between users are irrelevant within the peer onto which they are mapped.
{ "cite_N": [ "@cite_19" ], "mid": [ "2089370556" ], "abstract": [ "This paper extends the standard network centrality measures of degree, closeness and betweenness to apply to groups and classes as well as individuals. The group centrality measures will enable researchers to answer such questions as ‘how central is the engineering department in the informal influence network of this company?’ or ‘among middle managers in a given organization, which are more central, the men or the women?’ With these measures we can also solve the inverse problem: given the network of ties among organization members, how can we form a team that is maximally central? The measures are illustrated using two classic network data sets. We also formalize a measure of group centrality efficiency, which indicates the extent to which a group's centrality is principally due to a small subset of its members." ] }
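The projection-graph construction discussed above (intra-peer edges vanish, inter-peer edges merge into weighted peer-to-peer edges) can be sketched in a few lines. The edge-count weighting, the helper names, and the toy mapping below are illustrative assumptions, not the paper's exact definition.

```python
from collections import defaultdict

def projection_graph(social_edges, user_to_peer):
    """Project a social graph onto peers: edges between users on the same
    peer are dropped; edges across peers merge into weighted peer edges."""
    weights = defaultdict(int)
    for u, v in social_edges:
        pu, pv = user_to_peer[u], user_to_peer[v]
        if pu != pv:  # relationships inside a single peer are irrelevant
            weights[frozenset((pu, pv))] += 1
    return dict(weights)

def peer_degree(pg, peer):
    """Weighted degree of a peer in the projection graph."""
    return sum(w for edge, w in pg.items() if peer in edge)

# Toy social graph: five users mapped onto three peers.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("a", "e")]
peers = {"a": 1, "b": 1, "c": 2, "d": 3, "e": 3}
pg = projection_graph(edges, peers)
```

Social-graph metrics such as degree (and, with a graph library, betweenness centrality) can then be computed directly on the much smaller peer-level graph.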
1210.5588
1988090728
Through a self-dual mapping of the geometry AdS5 ×S5, fermionic T-duality provides a beautiful geometric interpretation of hidden symmetries for scattering amplitudes in super-Yang–Mills. Starting with Green–Schwarz sigma-models, we consolidate developments in this area into this small review. In particular, we discuss the translation of fermionic T-duality into the supergravity fields via pure spinor formalism and show that a general class of fermionic transformations can be identified directly in the supergravity. In addition to discussing fermionic T-duality for the geometry AdS4 × ℂP3, dual to ABJM theory, we review work on other self-dual geometries. Finally, we present a short round-up of studies with a formal interest in fermionic T-duality.
Unrelated to whether geometries exhibit a self-duality transformation incorporating fermionic T-duality, a small body of work studying formal aspects of fermionic T-duality has appeared in the literature @cite_52 @cite_40 @cite_59 @cite_12 @cite_47 @cite_55 @cite_42 . We now present a brief summary of these papers.
{ "cite_N": [ "@cite_55", "@cite_42", "@cite_52", "@cite_40", "@cite_59", "@cite_47", "@cite_12" ], "mid": [ "2060090352", "2140369185", "2092323754", "2074335749", "2026672909", "2091792558", "2157149227" ], "abstract": [ "In this article we establish the relationship between fermionic T-duality and momenta noncommuativity. This is extension of known relation between bosonic Tduality and coordinate noncommutativity. The case of open string propagating in background of the type IIB superstring theory has been considered. We perform T-duality with respect to the fermionic variables instead to the bosonic ones. We also choose Dirichlet boundary conditions at the string endpoints, which lead to the momenta noncommutativity, instead Neumann ones which lead to the coordinates noncommutativity. Finally, we establish the main result of the article that momenta noncommutativity parameters are just fermionic T-dual fields.", "In this article we investigate the relation between consequences of Dirichlet boundary conditions (momenta noncommutativity and parameters of the effective theory) and background fields of fermionic T-dual theory. We impose Dirichlet boundary conditions on the endpoints of the open string propagating in background of type IIB superstring theory. We showed that on the solution of the boundary conditions the momenta become noncommutative, while the coordinates commute. Fermionic Tduality is also introduced and its relation to noncommutativity is considered. We use compact notation so that type IIB superstring formally gets the form of the bosonic one with Grassman variables. Then momenta noncommutativity parameters are fermionic T-dual fields. The effective theory, the initial theory on the solution of boundary conditions, is bilinear in the effective coordinates, odd under world-sheet parity transformation. 
The effective metric is equal to the initial one and terms with the effective Kalb-Ramond field vanish.", "In this note we study the preservation of the classical pure spinor BRST constraints under super T-duality transformations. We also determine the invariance of the one-loop conformal invariance and of the local gauge and Lorentz anomalies under the super T-dualities.", "Abstract We study the dualities for sigma models with fermions and bosons. We found that the generalization of the SO ( m , m ) duality for D = 2 sigma models and the Sp ( 2 n ) duality for D = 4 sigma models is the orthosymplectic duality OSp ( m , m | 2 n ) . We study the implications of this and we derive the most general D = 2 sigma model, coupled to fermionic and bosonic one-forms, with such dualities. To achieve this we generalize Gaillard–Zumino analysis to orthosymplectic dualities, which requires to define embedding of the superisometry group of the target space into the duality group. We finally discuss the recently proposed fermionic dualities as a by-product of our construction.", "We establish that the recently discovered fermionic T-duality can be viewed as a canonical transformation in phase space. This requires a careful treatment of constrained Hamiltonian systems. Additionally, we show how the canonical transformation approach for bosonic T-duality can be extended to include Ramond–Ramond backgrounds in the pure spinor formalism.", "In this paper we investigate the relationship between the so-called fermionic T-duality and the Morita equivalence of noncommutative supertori. We first get an action satisfying the BRST invariance under nonvanishing constant R-R and NS-NS backgrounds in the hybrid formalism. We investigate the effect of bosonic T-duality transformation together with fermionic T-duality transformation in this background and look for the resultant symmetry of transformations. 
We find that the duality transformations correspond to Morita equivalence of noncommutative supertori. In particular, we obtain the symmetry group ( SO ( 2,2, V _ Z ^0 ) ) in two dimensions, where ( V _ Z ^0 ) denotes Grassmann even number whose body part belongs to ( Z ).", "We study two aspects of fermionic T-duality: the duality in purely fermionic sigma models exploring the possible obstructions and the extension of the T-duality beyond classical approximation. We consider fermionic sigma models as coset models of supergroups divided by their maximally bosonic subgroup OSp(m|n) SO(m) × Sp(n). Using the non-abelian T-duality and a non-conventional gauge fixing we derive their fermionic T-duals. In the second part of the paper, we prove the conformal invariance of these models at one and two loops using the Background Field Method and we check the Ward Identities." ] }
1210.5588
1988090728
Through a self-dual mapping of the geometry AdS5 × S5, fermionic T-duality provides a beautiful geometric interpretation of hidden symmetries for scattering amplitudes in super-Yang–Mills. Starting with Green–Schwarz sigma-models, we consolidate developments in this area into this small review. In particular, we discuss the translation of fermionic T-duality into the supergravity fields via the pure spinor formalism and show that a general class of fermionic transformations can be identified directly in the supergravity. In addition to discussing fermionic T-duality for the geometry AdS4 × ℂP3, dual to ABJM theory, we review work on other self-dual geometries. Finally, we present a short round-up of studies with a formal interest in fermionic T-duality.
In @cite_52 both bosonic and fermionic T-duality are studied in the context of the pure spinor heterotic superstring. @cite_40 attempts to treat fermionic T-duality in a background-independent manner by defining a supersymmetric sigma-model which is globally invariant with respect to a super-duality group. In @cite_59 it is shown that fermionic T-duality, like its bosonic counterpart, can be viewed as a canonical transformation in phase space. @cite_12 explores extensions of fermionic T-duality beyond the classical approximation. In @cite_47 a connection between fermionic T-duality and the Morita equivalence @cite_58 @cite_16 of noncommutative supertori is established. Noncommutativity in momenta can also be shown to arise as a consequence of fermionic T-duality @cite_55 . Finally, aspects of Dirichlet boundary conditions in the context of fermionic T-duality are touched upon in @cite_42 .
{ "cite_N": [ "@cite_55", "@cite_42", "@cite_52", "@cite_16", "@cite_40", "@cite_59", "@cite_47", "@cite_58", "@cite_12" ], "mid": [ "2060090352", "2140369185", "2092323754", "1990102072", "2074335749", "2026672909", "2091792558", "2953089372", "2157149227" ], "abstract": [ "In this article we establish the relationship between fermionic T-duality and momenta noncommuativity. This is extension of known relation between bosonic Tduality and coordinate noncommutativity. The case of open string propagating in background of the type IIB superstring theory has been considered. We perform T-duality with respect to the fermionic variables instead to the bosonic ones. We also choose Dirichlet boundary conditions at the string endpoints, which lead to the momenta noncommutativity, instead Neumann ones which lead to the coordinates noncommutativity. Finally, we establish the main result of the article that momenta noncommutativity parameters are just fermionic T-dual fields.", "In this article we investigate the relation between consequences of Dirichlet boundary conditions (momenta noncommutativity and parameters of the effective theory) and background fields of fermionic T-dual theory. We impose Dirichlet boundary conditions on the endpoints of the open string propagating in background of type IIB superstring theory. We showed that on the solution of the boundary conditions the momenta become noncommutative, while the coordinates commute. Fermionic Tduality is also introduced and its relation to noncommutativity is considered. We use compact notation so that type IIB superstring formally gets the form of the bosonic one with Grassman variables. Then momenta noncommutativity parameters are fermionic T-dual fields. The effective theory, the initial theory on the solution of boundary conditions, is bilinear in the effective coordinates, odd under world-sheet parity transformation. 
The effective metric is equal to the initial one and terms with the effective Kalb-Ramond field vanish.", "In this note we study the preservation of the classical pure spinor BRST constraints under super T-duality transformations. We also determine the invariance of the one-loop conformal invariance and of the local gauge and Lorentz anomalies under the super T-dualities.", "It was shown by Connes, Douglas, Schwarz [hep-th 9711162] that one can compactify M(atrix) theory on a non-commutative torus To. We prove that compactifications on Morita equivalent tori are in some sense physically equivalent. This statement can be considered as a generalization of non-classical SL(2,Z)N duality conjectured by Connes, Douglas and Schwarz for compactifications on two-dimensional non-commutative tori.", "Abstract We study the dualities for sigma models with fermions and bosons. We found that the generalization of the SO ( m , m ) duality for D = 2 sigma models and the Sp ( 2 n ) duality for D = 4 sigma models is the orthosymplectic duality OSp ( m , m | 2 n ) . We study the implications of this and we derive the most general D = 2 sigma model, coupled to fermionic and bosonic one-forms, with such dualities. To achieve this we generalize Gaillard–Zumino analysis to orthosymplectic dualities, which requires to define embedding of the superisometry group of the target space into the duality group. We finally discuss the recently proposed fermionic dualities as a by-product of our construction.", "We establish that the recently discovered fermionic T-duality can be viewed as a canonical transformation in phase space. This requires a careful treatment of constrained Hamiltonian systems. 
Additionally, we show how the canonical transformation approach for bosonic T-duality can be extended to include Ramond–Ramond backgrounds in the pure spinor formalism.", "In this paper we investigate the relationship between the so-called fermionic T-duality and the Morita equivalence of noncommutative supertori. We first get an action satisfying the BRST invariance under nonvanishing constant R-R and NS-NS backgrounds in the hybrid formalism. We investigate the effect of bosonic T-duality transformation together with fermionic T-duality transformation in this background and look for the resultant symmetry of transformations. We find that the duality transformations correspond to Morita equivalence of noncommutative supertori. In particular, we obtain the symmetry group ( SO ( 2,2, V _ Z ^0 ) ) in two dimensions, where ( V _ Z ^0 ) denotes Grassmann even number whose body part belongs to ( Z ).", "One can describe an @math -dimensional noncommutative torus by means of an antisymmetric @math -matrix @math . We construct an action of the group @math on the space of antisymmetric matrices and show that, generically, matrices belonging to the same orbit of this group give Morita equivalent tori. Some applications to physics are sketched.", "We study two aspects of fermionic T-duality: the duality in purely fermionic sigma models exploring the possible obstructions and the extension of the T-duality beyond classical approximation. We consider fermionic sigma models as coset models of supergroups divided by their maximally bosonic subgroup OSp(m|n) SO(m) × Sp(n). Using the non-abelian T-duality and a non-conventional gauge fixing we derive their fermionic T-duals. In the second part of the paper, we prove the conformal invariance of these models at one and two loops using the Background Field Method and we check the Ward Identities." ] }
1210.5393
1526430400
Mobility causes network structures to change. In PSNs, where the underlying network structure changes rapidly, we are interested in studying how information dissemination can be enhanced in a sparse, disconnected network whose nodes lack global knowledge about the network. We use beamforming to study the enhancement of the information dissemination process. To identify potential beamformers and the nodes to which beams should be directed, we use the concept of stability. We first predict the stability of a node in the dynamic network using the truncated Levy walk nature of the jump lengths of human mobility and then use this measure to identify beamforming nodes and the nodes to which the beams are directed. We design our algorithm so that it does not require any global knowledge about the network and works in a distributed manner. We also show the effect of various parameters, such as the number of sources, number of packets, mobility parameters, antenna parameters, type of stability used, and density of the network, on information dissemination in the network. We validate our findings against three validation models: no beamforming, beamforming using a different stability measure, and a model in which no stability measure is used but the same number of nodes beamform and the beamforming nodes are selected at random. Our simulation results show that information dissemination can be enhanced using our algorithm compared to the other models.
Many mobility models have been proposed that capture the characteristics of human mobility. A comprehensive survey of mobility properties associated with humans can be found in @cite_11 @cite_6 , while a clear comparison of different human mobility models can also be found in @cite_11 . Most traditional models of human mobility use only its spatial properties. However, it has been shown that human mobility also has temporal characteristics and follows a truncated power law distribution. A truncated power law is characterized by a probability distribution that has the properties of a power law with an exponential cutoff, eq. , where the proportionality constant is equal to @math , @math is the power law decay exponent, and @math is the cutoff value. A truncated power law distribution means that the distribution starts as a power law and ends as an exponential curve. The exponential cutoff is related to the bounded radius of gyration @math of human trajectories. We next provide a brief overview of antenna models and beamforming in section .
{ "cite_N": [ "@cite_6", "@cite_11" ], "mid": [ "1680503020", "2060334710" ], "abstract": [ "The Mobile Ad Hoc Network (MANET) has emerged as the next frontier for wireless communications networking in both the military and commercial arena. Handbook of Mobile Ad Hoc Networks for Mobility Models introduces 40 different major mobility models along with numerous associate mobility models to be used in a variety of MANET networking environments in the ground, air, space, and or under water mobile vehicles and or handheld devices. These vehicles include cars, armors, ships, under-sea vehicles, manned and unmanned airborne vehicles, spacecrafts and more. This handbook also describes how each mobility pattern affects the MANET performance from physical to application layer; such as throughput capacity, delay, jitter, packet loss and packet delivery ratio, longevity of route, route overhead, reliability, and survivability. Case studies, examples, and exercises are provided throughout the book. Handbook of Mobile Ad Hoc Networks for Mobility Models is for advanced-level students and researchers concentrating on electrical engineering and computer science within wireless technology. Industry professionals working in the areas of mobile ad hoc networks, communications engineering, military establishments engaged in communications engineering, equipment manufacturers who are designing radios, mobile wireless routers, wireless local area networks, and mobile ad hoc network equipment will find this book useful as well.", "Mobile ad hoc networks enable communications between clouds of mobile devices without the need for a preexisting infrastructure. One of their most interesting evolutions are opportunistic networks, whose goal is to also enable communication in disconnected environments, where the general absence of an end-to-end path between the sender and the receiver impairs communication when legacy MANET networking protocols are used. 
The key idea of OppNets is that the mobility of nodes helps the delivery of messages, because it may connect, asynchronously in time, otherwise disconnected subnetworks. This is especially true for networks whose nodes are mobile devices (e.g., smartphones and tablets) carried by human users, which is the typical OppNets scenario. In such a network where the movements of the communicating devices mirror those of their owners, finding a route between two disconnected devices implies uncovering habits in human movements and patterns in their connectivity (frequencies of meetings, average duration of a contact, etc.), and exploiting them to predict future encounters. Therefore, there is a challenge in studying human mobility, specifically in its application to OppNets research. In this article we review the state of the art in the field of human mobility analysis and present a survey of mobility models. We start by reviewing the most considerable findings regarding the nature of human movements, which we classify along the spatial, temporal, and social dimensions of mobility. We discuss the shortcomings of the existing knowledge about human movements and extend it with the notion of predictability and patterns. We then survey existing approaches to mobility modeling and fit them into a taxonomy that provides the basis for a discussion on open problems and further directions for research on modeling human mobility." ] }
1210.5474
1786904711
Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine, which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulate the entangling of factors of variation. Inference in the model can be seen as disentangling these generative factors. Unlike previous attempts at disentangling latent factors, the proposed model is trained using no supervised information regarding the latent factors. We apply our model to the task of facial expression classification.
The model proposed here was strongly influenced by previous attempts to disentangle factors of variation in data using latent variable models. One of the earlier efforts in this direction also used higher-order interactions of latent variables, specifically bilinear @cite_3 @cite_14 and multilinear @cite_2 models. One critical difference from these previous attempts to disentangle factors of variation is that our method attempts to learn to disentangle from entirely unsupervised information. In this way, one can interpret our approach as an attempt to extend the subspace feature pooling approach to the problem of disentangling factors of variation.
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_2" ], "mid": [ "2098207052", "2170653751", "2100779717" ], "abstract": [ "Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. We describe an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model. We show that from an arbitrary set of natural images, the algorithm produces oriented basis filters that can simultaneously represent features in an image and their transformations. The learned generative model can be used to translate features to different locations, thereby reducing the need to learn the same feature at multiple locations, a limitation of previous approaches to sparse coding and ICA. Our results suggest that by explicitly modeling the interaction between local image features and their transformations, the sparse bilinear approach can provide a basis for achieving transformation-invariant vision.", "Perceptual systems routinely separate “content” from “style,” classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. 
We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.", "Independent components analysis (ICA) maximizes the statistical independence of the representational components of a training image ensemble, but it cannot distinguish between the different factors, or modes, inherent to image formation, including scene structure, illumination, and imaging. We introduce a nonlinear, multifactor model that generalizes ICA. Our multilinear ICA (MICA) model of image ensembles learns the statistically independent components of multiple factors. Whereas ICA employs linear (matrix) algebra, MICA exploits multilinear (tensor) algebra. We furthermore introduce a multilinear projection algorithm which projects an unlabeled test image into the N constituent mode spaces to simultaneously infer its mode labels. In the context of facial image ensembles, where the mode labels are person, viewpoint, illumination, expression, etc., we demonstrate that the statistical regularities learned by MICA capture information that, in conjunction with our multilinear projection algorithm, improves automatic face recognition." ] }
1210.5474
1786904711
Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine, which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulate the entangling of factors of variation. Inference in the model can be seen as disentangling these generative factors. Unlike previous attempts at disentangling latent factors, the proposed model is trained using no supervised information regarding the latent factors. We apply our model to the task of facial expression classification.
also propose to disentangle factors of variation by learning to extract features associated with pose parameters, where the changes in pose parameters (but not the feature values) are known at training time. The proposed model is also closely related to recent work @cite_0 , where higher-order Boltzmann Machines are used as models of spatial transformations in images. While there are a number of differences between this model and ours, the most significant is our use of multiplicative interactions between latent variables. While they included higher-order interactions within the Boltzmann energy function, these were used exclusively between observed variables, dramatically simplifying the inference and learning procedures. Another major point of departure is that instead of relying on low-rank approximations to the weight tensor, our approach employs highly structured and sparse connections between latent variables (e.g. @math does not interact with @math for @math ), reminiscent of recent work on structured sparse coding @cite_15 and structured @math -norms @cite_21 . As discussed above, our use of a sparse connection structure allows us to isolate groups of interacting latent variables. Keeping the interactions local in this way is a key component of our ability to learn successfully using only unsupervised data.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_21" ], "mid": [ "2136163184", "2161977692", "2138265962" ], "abstract": [ "To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.", "This work describes a conceptually simple method for structured sparse coding and dictionary design. Supposing a dictionary with K atoms, we introduce a structure as a set of penalties or interactions between every pair of atoms. We describe modifications of standard sparse coding algorithms for inference in this setting, and describe experiments showing that these algorithms are efficient. We show that interesting dictionaries can be learned for interactions that encode tree structures or locally connected structures. 
Finally, we show that our framework allows us to learn the values of the interactions from the data, rather than having them pre-specified.", "Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. While naturally cast as a combinatorial optimization problem, variable or feature selection admits a convex relaxation through the regularization by the @math -norm. In this paper, we consider situations where we are not only interested in sparsity, but where some structural prior knowledge is available as well. We show that the @math -norm can then be extended to structured norms built on either disjoint or overlapping groups of variables, leading to a flexible framework that can deal with various structures. We present applications to unsupervised learning, for structured sparse principal component analysis and hierarchical dictionary learning, and to supervised learning in the context of non-linear variable selection." ] }
1210.4601
1487158520
Boosting methods combine a set of moderately accurate weak learners to form a highly accurate predictor. Despite the practical importance of multi-class boosting, it has received far less attention than its binary counterpart. In this work, we propose a fully-corrective multi-class boosting formulation which directly solves the multi-class problem without dividing it into multiple binary classification problems. In contrast, most previous multi-class boosting algorithms decompose a multi-class boosting problem into multiple binary boosting problems. By explicitly deriving the Lagrange dual of the primal optimization problem, we are able to construct a column generation-based fully-corrective approach to boosting which directly optimizes multi-class classification performance. The new approach not only updates all weak learners' coefficients at every iteration, but does so in a manner flexible enough to accommodate various loss functions and regularizations. For example, it enables us to introduce structural sparsity through mixed-norm regularization to promote group sparsity and feature sharing. Boosting with shared features is particularly beneficial in complex prediction problems where features can be expensive to compute. Our experiments on various data sets demonstrate that our direct multi-class boosting generalizes as well as, or better than, a range of competing multi-class boosting methods. The end result is a highly effective and compact ensemble classifier which can be trained in a distributed fashion.
Our work here can also be seen as an extension of the general binary fully-corrective boosting framework of @cite_16 to the multi-class case. As in , we design a feature-sharing boosting method using a direct formulation, but for multi-class problems and using a more sophisticated group sparsity regularization. Note that the general boosting framework of @cite_16 is not directly applicable in our problem setting.
{ "cite_N": [ "@cite_16" ], "mid": [ "2125607229" ], "abstract": [ "We study boosting algorithms from a new perspective. We show that the Lagrange dual problems of l1-norm-regularized AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems. By looking at the dual problems of these boosting algorithms, we show that the success of boosting algorithms can be understood in terms of maintaining a better margin distribution by maximizing margins and at the same time controlling the margin variance. We also theoretically prove that approximately, l1-norm-regularized AdaBoost maximizes the average margin, instead of the minimum margin. The duality formulation also enables us to develop column-generation-based optimization algorithms, which are totally corrective. We show that they exhibit almost identical classification results to that of standard stagewise additive boosting algorithms but with much faster convergence rates. Therefore, fewer weak classifiers are needed to build the ensemble using our proposed optimization technique." ] }
1210.5135
1726835694
The motivation for this paper is to apply Bayesian structure learning using Model Averaging to large-scale networks. Currently, the Bayesian model averaging algorithm is applicable only to networks with tens of variables, constrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), that makes it possible to handle networks of arbitrary size by following the principle of divide-and-conquer. The LSBN method comprises three steps. First, LSBN performs the partition using a second-order partition strategy, which achieves more robust results. LSBN then conducts sampling and structure learning within each overlapping community after the community is isolated from other variables by its Markov blanket. Finally, LSBN employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four other state-of-the-art large-scale network structure learning algorithms, namely ARACNE, PC, Greedy Search and MMHC, LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and f-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structures by Model Averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. In addition, the complete information about the overlapping communities is obtained as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction networks or gene regulatory networks, as well as in social networks.
The third major approach follows a score-and-search strategy. In general, score-and-search algorithms search through the structure space guided by a scoring function. One of the most basic score-and-search algorithms is Greedy Search @cite_4 . Since the size of the structure space grows super-exponentially, the search inevitably gets trapped in local maxima, even though there are many ways to escape, such as random restarts, simulated annealing, or searching in the space of equivalence classes of DAGs, called PDAGs.
{ "cite_N": [ "@cite_4" ], "mid": [ "114335372" ], "abstract": [ "Learning the most probable a posteriori Bayesian network from data has been shown to be an NP-Hard problem and typical state-of-the-art algorithms are exponential in the worst case. However, an important open problem in the field is to identify the least restrictive set of assumptions and corresponding algorithms under which learning the optimal network becomes polynomial. In this paper, we present a technique for learning the skeleton of a Bayesian network, called Polynomial Max-Min Skeleton (PMMS), and compare it with Three Phase Dependency Analysis, another state-ofthe-art polynomial algorithm. This analysis considers both the theoretical and empirical differences between the two algorithms, and demonstrates PMMS’s advantages in both respects. When extended with a greedy hill-climbing Bayesianscoring search to orient the edges, the novel algorithm proved more time efficient, scalable, and accurate in quality of reconstruction than most state-of-the-art Bayesian network learning algorithms. The results show promise of the existence of polynomial algorithms that are provably correct under minimal distributional assumptions." ] }
1210.5135
1726835694
The motivation for this paper is to apply Bayesian structure learning using Model Averaging to large-scale networks. Currently, the Bayesian model averaging algorithm is applicable only to networks with tens of variables, constrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), that makes it possible to handle networks of arbitrary size by following the principle of divide-and-conquer. The LSBN method comprises three steps. First, LSBN performs the partition using a second-order partition strategy, which achieves more robust results. LSBN then conducts sampling and structure learning within each overlapping community after the community is isolated from other variables by its Markov blanket. Finally, LSBN employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four other state-of-the-art large-scale network structure learning algorithms, namely ARACNE, PC, Greedy Search and MMHC, LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and f-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structures by Model Averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. In addition, the complete information about the overlapping communities is obtained as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction networks or gene regulatory networks, as well as in social networks.
The fourth major approach is a hybrid one. Hybrid approaches integrate constraint-based and score-and-search algorithms. MMHC @cite_32 (Max-Min Hill-Climbing) shows superiority over other algorithms by combining local learning, reconstructing the skeleton of a Bayesian network with a constraint-based approach, and performing a greedy hill-climbing search for edge orientation.
{ "cite_N": [ "@cite_32" ], "mid": [ "2165190832" ], "abstract": [ "We present a new algorithm for Bayesian network structure learning, called Max-Min Hill-Climbing (MMHC). The algorithm combines ideas from local learning, constraint-based, and search-and-score techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. In our extensive empirical evaluation MMHC outperforms on average and in terms of various metrics several prototypical and state-of-the-art algorithms, namely the PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search. These are the first empirical results simultaneously comparing most of the major Bayesian network algorithms against each other. MMHC offers certain theoretical advantages, specifically over the Sparse Candidate algorithm, corroborated by our experiments. MMHC and detailed results of our study are publicly available at http: www.dsl-lab.org supplements mmhc_paper mmhc_index.html." ] }
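As a loose illustration of this hybrid recipe (constraint-based skeleton first, score-guided orientation second), the sketch below greedily orients the edges of a given skeleton under an acyclicity check. The graph encoding, the `hill_climb`/`is_acyclic` names and the toy score are illustrative assumptions; MMHC's actual Bayesian scoring is not reproduced here.

```python
def is_acyclic(nodes, edges):
    # Kahn's algorithm: True iff the directed graph contains no cycle
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for u, v in edges:
            if u == n:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    return seen == len(nodes)

def hill_climb(nodes, skeleton, score):
    """Greedily orient skeleton edges, keeping only score-improving,
    acyclic moves (adding or reversing one directed edge at a time)."""
    dag = set()
    improved = True
    while improved:
        improved = False
        best_gain, best_dag = 0, None
        for a, b in skeleton:
            for edge in ((a, b), (b, a)):
                if edge in dag:
                    continue
                cand = (dag - {(edge[1], edge[0])}) | {edge}
                if not is_acyclic(nodes, cand):
                    continue
                gain = score(cand) - score(dag)
                if gain > best_gain:
                    best_gain, best_dag = gain, cand
        if best_dag is not None:
            dag, improved = best_dag, True
    return dag
```

With a toy score that rewards alphabetically ordered edges, `hill_climb(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], lambda d: sum(1 for u, v in d if u < v))` orients both skeleton edges forward.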
1210.5135
1726835694
The motivation for this paper is to apply Bayesian structure learning using model averaging to large-scale networks. Currently, the Bayesian model averaging algorithm is applicable only to networks with tens of variables, restrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), which makes it possible to handle networks of arbitrary size by following the principle of divide-and-conquer. The LSBN method comprises three steps. First, LSBN performs the partition by using a second-order partition strategy, which achieves more robust results. LSBN then conducts sampling and structure learning within each overlapping community, after the community has been isolated from the other variables by its Markov blanket. Finally, LSBN employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four other state-of-the-art large-scale network structure learning algorithms, namely ARACNE, PC, Greedy Search and MMHC, LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and F-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structures by model averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. Besides, the complete information about the overlapping communities comes as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction networks or gene regulatory networks, as well as in social networks.
One traditional method regards overlapping community detection as an optimization problem: each community is identified as a subgraph that locally optimizes a quality function @math , so detecting overlapping communities reduces to finding all locally optimized subgraphs @cite_23 . Furthermore, the optimization can be augmented by combining it with spectral mapping and fuzzy clustering @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_23" ], "mid": [ "2044719661", "2161903623" ], "abstract": [ "Identification of (overlapping) communities clusters in a complex network is a general problem in data mining of network data sets. In this paper, we devise a novel algorithm to identify overlapping communities in complex networks by the combination of a new modularity function based on generalizing NG's Q function, an approximation mapping of network nodes into Euclidean space and fuzzy c-means clustering. Experimental results indicate that the new algorithm is efficient at detecting both good clusterings and the appropriate number of clusters.", "We describe models and efficient algorithms for detecting groups (communities) functioning in communication networks which attempt to hide their functionality – hidden groups. Our results reveal the properties of the background network activity that make detection of the hidden group easy, as well as those that make it difficult." ] }
1210.5135
1726835694
The motivation for this paper is to apply Bayesian structure learning using model averaging to large-scale networks. Currently, the Bayesian model averaging algorithm is applicable only to networks with tens of variables, restrained by its super-exponential complexity. We present a novel framework, called LSBN (Large-Scale Bayesian Network), which makes it possible to handle networks of arbitrary size by following the principle of divide-and-conquer. The LSBN method comprises three steps. First, LSBN performs the partition by using a second-order partition strategy, which achieves more robust results. LSBN then conducts sampling and structure learning within each overlapping community, after the community has been isolated from the other variables by its Markov blanket. Finally, LSBN employs an efficient algorithm to merge the structures of the overlapping communities into a whole. In comparison with four other state-of-the-art large-scale network structure learning algorithms, namely ARACNE, PC, Greedy Search and MMHC, LSBN shows comparable results on five common benchmark datasets, evaluated by precision, recall and F-score. Moreover, LSBN makes it possible to learn large-scale Bayesian structures by model averaging, which used to be intractable. In summary, LSBN provides a scalable and parallel framework for the reconstruction of network structures. Besides, the complete information about the overlapping communities comes as a byproduct, which could be used to mine meaningful clusters in biological networks, such as protein-protein interaction networks or gene regulatory networks, as well as in social networks.
Partitioning approaches transform the original graph into a larger graph without overlapping nodes before conducting a traditional partition. Overlapping nodes are identified and split into multiple copies of themselves beforehand @cite_24 . The identification of candidate overlapping nodes is based on @cite_31 , and the splitting process continues as long as this measure remains sufficiently high.
{ "cite_N": [ "@cite_24", "@cite_31" ], "mid": [ "2037096232", "1971421925" ], "abstract": [ "We propose an algorithm for finding overlapping community structure in very large networks. The algorithm is based on the label propagation technique of Raghavan, Albert and Kumara, but is able to detect communities that overlap. Like the original algorithm, vertices have labels that propagate between neighbouring vertices so that members of a community reach a consensus on their community membership. Our main contribution is to extend the label and propagation step to include information about more than one community: each vertex can now belong to up to v communities, where v is the parameter of the algorithm. Our algorithm can also handle weighted and bipartite networks. Tests on an independently designed set of benchmarks, and on real networks, show the algorithm to be highly effective in recovering overlapping communities. It is also very fast and can process very large and dense networks in a short time.", "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases." ] }
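A minimal sketch of the splitting idea described above: each node that belongs to several communities is replaced by one copy per community, yielding a graph that a standard non-overlapping partitioner can handle. The community memberships are taken as input here, and the `node#community` naming scheme is a hypothetical choice for illustration.

```python
def split_overlapping(edges, memberships):
    """Duplicate overlapping nodes so the resulting graph has no overlap.

    `memberships` maps each node to the set of communities it belongs to;
    an edge is replicated once per community its endpoints share."""
    def copy_name(node, community):
        # nodes in a single community keep their name; overlapping nodes
        # get one copy per community, tagged with the community id
        if len(memberships[node]) == 1:
            return node
        return f"{node}#{community}"

    new_edges = set()
    for u, v in edges:
        for c in memberships[u] & memberships[v]:
            new_edges.add((copy_name(u, c), copy_name(v, c)))
    return new_edges
```

For example, with node b in communities 1 and 2, the edge ('a', 'b') becomes ('a', 'b#1') and ('b', 'c') becomes ('b#2', 'c').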
1210.4276
2949256903
This paper introduces a novel, well-founded, betweenness measure, called the Bag-of-Paths (BoP) betweenness, as well as its extension, the BoP group betweenness, to tackle semi-supervised classification problems on weighted directed graphs. The objective of semi-supervised classification is to assign a label to unlabeled nodes using the whole topology of the graph and the labeled nodes at our disposal. The BoP betweenness relies on a bag-of-paths framework assigning a Boltzmann distribution on the set of all possible paths through the network such that long (high-cost) paths have a low probability of being picked from the bag, while short (low-cost) paths have a high probability of being picked. Within that context, the BoP betweenness of node j is defined as the sum of the a posteriori probabilities that node j lies in-between two arbitrary nodes i, k, when picking a path starting in i and ending in k. Intuitively, a node typically receives a high betweenness if it has a large probability of appearing on paths connecting two arbitrary nodes of the network. This quantity can be computed in closed form by inverting a n x n matrix where n is the number of nodes. For the group betweenness, the paths are constrained to start and end in nodes within the same class, therefore defining a group betweenness for each class. Unlabeled nodes are then classified according to the class showing the highest group betweenness. Experiments on various real-world data sets show that the BoP group betweenness outperforms all the tested state-of-the-art methods. The benefit of the BoP betweenness is particularly noticeable when only a few labeled nodes are available.
Some authors also considered bounded (or truncated) walks @cite_13 @cite_5 @cite_33 and obtained promising results on large graphs. This approach could also be considered in our framework in order to tackle large networks; this will be investigated in further work.
{ "cite_N": [ "@cite_5", "@cite_13", "@cite_33" ], "mid": [ "1730631237", "2169847772", "" ], "abstract": [ "Recently there has been much interest in graph-based learning, with applications in collaborative filtering for recommender networks, link prediction for social networks and fraud detection. These networks can consist of millions of entities, and so it is very important to develop highly efficient techniques. We are especially interested in accelerating random walk approaches to compute some very interesting proximity measures of these kinds of graphs. These measures have been shown to do well empirically (Liben-Nowell & Kleinberg, 2003; Brand, 2005). We introduce a truncated variation on a well-known measure, namely commute times arising from random walks on graphs. We present a very novel algorithm to compute all interesting pairs of approximate nearest neighbors in truncated commute times, without computing it between all pairs. We show results on both simulated and real graphs of size up to 100,000 entities, which indicate near-linear scaling in computation time.", "This work addresses graph-based semi-supervised classification and betweenness computation in large, sparse, networks (several millions of nodes). The objective of semi-supervised classification is to assign a label to unlabeled nodes using the whole topology of the graph and the labeling at our disposal. Two approaches are developed to avoid explicit computation of pairwise proximity between the nodes of the graph, which would be impractical for graphs containing millions of nodes. The first approach directly computes, for each class, the sum of the similarities between the nodes to classify and the labeled nodes of the class, as suggested initially in [1,2]. Along this approach, two algorithms exploiting different state-of-the-art kernels on a graph are developed. The same strategy can also be used in order to compute a betweenness measure. The second approach works on a trellis structure built from biased random walks on the graph, extending an idea introduced in [3]. These random walks allow to define a biased bounded betweenness for the nodes of interest, defined separately for each class. All the proposed algorithms have a linear computing time in the number of edges while providing good results, and hence are applicable to large sparse networks. They are empirically validated on medium-size standard data sets and are shown to be competitive with state-of-the-art techniques. Finally, we processed a novel data set, which is made available for benchmarking, for multi-class classification in a large network: the U.S. patents citation network containing 3M nodes (of six different classes) and 38M edges. The three proposed algorithms achieve competitive results (around 85 classification rate) on this large network-they classify the unlabeled nodes within a few minutes on a standard workstation.", "" ] }
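The appeal of bounded walks is that k-step visiting probabilities can be computed by simple forward propagation, with cost linear in the number of edges per step and no matrix inversion. The sketch below is a generic illustration of that idea on an unweighted adjacency list, not the specific truncated commute-time algorithm of the cited work; the function name is an assumption.

```python
def truncated_walk_probs(adj, start, steps):
    """Distribution of a uniform random walk after exactly `steps` steps."""
    probs = {n: 0.0 for n in adj}
    probs[start] = 1.0
    for _ in range(steps):
        nxt = {n: 0.0 for n in adj}
        for node, p in probs.items():
            if p == 0.0 or not adj[node]:
                continue  # no mass here, or a dangling node
            share = p / len(adj[node])  # uniform transition probability
            for neighbour in adj[node]:
                nxt[neighbour] += share
        probs = nxt
    return probs
```

On the path graph a - b - c, two steps from a put probability 0.5 on a and 0.5 on c.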
1210.4276
2949256903
This paper introduces a novel, well-founded, betweenness measure, called the Bag-of-Paths (BoP) betweenness, as well as its extension, the BoP group betweenness, to tackle semi-supervised classification problems on weighted directed graphs. The objective of semi-supervised classification is to assign a label to unlabeled nodes using the whole topology of the graph and the labeled nodes at our disposal. The BoP betweenness relies on a bag-of-paths framework assigning a Boltzmann distribution on the set of all possible paths through the network such that long (high-cost) paths have a low probability of being picked from the bag, while short (low-cost) paths have a high probability of being picked. Within that context, the BoP betweenness of node j is defined as the sum of the a posteriori probabilities that node j lies in-between two arbitrary nodes i, k, when picking a path starting in i and ending in k. Intuitively, a node typically receives a high betweenness if it has a large probability of appearing on paths connecting two arbitrary nodes of the network. This quantity can be computed in closed form by inverting a n x n matrix where n is the number of nodes. For the group betweenness, the paths are constrained to start and end in nodes within the same class, therefore defining a group betweenness for each class. Unlabeled nodes are then classified according to the class showing the highest group betweenness. Experiments on various real-world data sets show that the BoP group betweenness outperforms all the tested state-of-the-art methods. The benefit of the BoP betweenness is particularly noticeable when only a few labeled nodes are available.
The authors of @cite_28 suggested a method that avoids inverting a @math matrix to compute the random walk with restart measure. They reduce the computing time by partitioning the input graph into smaller communities; a sparse approximation of the random walk with restart is then obtained by applying a low-rank approximation. This approach suffers from the fact that it introduces a hyperparameter @math (the number of communities) that depends on the network, and it remains intractable for large graphs with millions of nodes. On the other hand, the computing time is reduced by this same factor @math . This is another path to investigate in further work.
{ "cite_N": [ "@cite_28" ], "mid": [ "2133299088" ], "abstract": [ "How closely related are two nodes in a graph? How to compute this score quickly, on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the \"connection subgraphs\", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block- wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman- Morrison lemma for matrix inversion. Experimental results on the Corel image and the DBLP dabasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150x speed up with 90 + quality preservation." ] }
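For intuition, random walk with restart can also be approximated without any matrix inversion by plain power iteration; this is a different (and more slowly converging) trade-off than the partitioning plus low-rank scheme of @cite_28, shown here only as a sketch. The graph encoding, function name and parameter values are illustrative assumptions.

```python
def rwr_scores(adj, seed, restart=0.15, iters=200):
    """Random-walk-with-restart relevance of every node w.r.t. `seed`,
    computed by power iteration instead of the closed-form matrix inverse."""
    r = {n: 1.0 if n == seed else 0.0 for n in adj}
    for _ in range(iters):
        # restart mass goes back to the seed; the rest follows the walk
        nxt = {n: restart if n == seed else 0.0 for n in adj}
        for node, mass in r.items():
            if not adj[node]:
                continue
            share = (1.0 - restart) * mass / len(adj[node])
            for neighbour in adj[node]:
                nxt[neighbour] += share
        r = nxt
    return r
```

On a symmetric graph the scores sum to one and the seed node ranks highest, as expected of a relevance measure.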
1210.4263
1723322193
Threads and events are two common abstractions for writing concurrent programs. Because threads are often more convenient, but events more efficient, it is natural to want to translate the former into the latter. However, whereas there are many different event-driven styles, existing translators often apply ad-hoc rules which do not reflect this diversity. We analyse various control-flow and data-flow encodings in real-world event-driven code, and we observe that it is possible to generate any of these styles automatically from threaded code, by applying certain carefully chosen classical program transformations. In particular, we implement two of these transformations, lambda lifting and environments, in CPC, an extension of the C language for writing concurrent systems. Finally, we find out that, although rarely used in real-world programs because it is tedious to perform manually, lambda lifting yields better performance than environments in most of our benchmarks.
The translation of threads into events has been rediscovered many times @cite_2 @cite_7 @cite_17 . In this section, we review existing solutions, and observe that each of them generates only one particular kind of event-driven style. As we shall see in sec:generating , we believe that these implementations are in fact a few classical transformation techniques, studied extensively in the context of functional languages, and adapted to imperative languages, sometimes unknowingly, by programmers trying to solve the issue of writing events in a threaded style.
{ "cite_N": [ "@cite_17", "@cite_7", "@cite_2" ], "mid": [ "", "1487134436", "2161566505" ], "abstract": [ "", "Tame is a new event-based system for managing concurrency in network applications. Code written with Tame abstractions does not suffer from the \"stack-ripping\" problem associated with other event libraries. Like threaded code, tamed code uses standard control flow, automatically-managed local variables, and modular interfaces between callers and callees. Tame's implementation consists of C++ libraries and a source-to-source translator; no platform-specific support or compiler modifications are required, and Tame induces little runtime overhead. Experience with Tame in real-world systems, including a popular commercial Web site, suggests it is easy to adopt and deploy.", "Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles." ] }
1210.4263
1723322193
Threads and events are two common abstractions for writing concurrent programs. Because threads are often more convenient, but events more efficient, it is natural to want to translate the former into the latter. However, whereas there are many different event-driven styles, existing translators often apply ad-hoc rules which do not reflect this diversity. We analyse various control-flow and data-flow encodings in real-world event-driven code, and we observe that it is possible to generate any of these styles automatically from threaded code, by applying certain carefully chosen classical program transformations. In particular, we implement two of these transformations, lambda lifting and environments, in CPC, an extension of the C language for writing concurrent systems. Finally, we find out that, although rarely used in real-world programs because it is tedious to perform manually, lambda lifting yields better performance than environments in most of our benchmarks.
Duff introduces a technique, known as Duff's device @cite_15 , to express general loop unrolling directly in C, using the switch statement. Much later, this technique has been employed multiple times to express state machines and event-driven programs in a threaded style: protothreads @cite_2 and FairThreads' automata @cite_9 . These libraries help keep a clearer flow of control, but they provide no automatic handling of data flow: the programmer is expected to save local variables manually in his own data structures, just as in event-driven style.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_2" ], "mid": [ "2039801803", "", "2161566505" ], "abstract": [ "FairThreads introduces fair threads which are executed in a cooperative way when linked to a scheduler, and in a preemptive way otherwise. Constructs exist for programming the dynamic linking unlinking of threads during execution. Users can profit from the cooperative scheduling when threads are linked. For example, data only accessed by the threads linked to the same scheduler does not need to be protected by locks. Users can also profit from the preemptive scheduling provided by the operating system (OS) when threads are unlinked, for example to deal with blocking I Os. In the cooperative context, for the threads linked to the same scheduler, FairThreads make it possible to use broadcast events. Broadcasting is a powerful, abstract, and modular means of communication. Basically, event broadcasting is made possible by the specific way threads are scheduled by the scheduler to which they are linked (the ‘fair’ strategy). FairThreads give a way to deal with some limitations of the OS. Automata are special threads, coded as state machines, which do not need the allocation of a native thread and which have efficient execution. Automata also give a means to deal with the limited number of native threads available when large numbers of concurrent tasks are needed, for example in simulations. Copyright © 2005 John Wiley & Sons, Ltd.", "", "Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles." ] }
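The protothread idea (thread-like control flow on top of an event-driven scheduler) can be mimicked in Python with generators: the generator object stores the resume point and the local variables, playing the role that the switch-based state machine plays in C. A toy round-robin scheduler, all names illustrative:

```python
from collections import deque

trace = []

def worker(name, n):
    # reads like threaded code: an ordinary loop with a live local variable;
    # each `yield` is a cooperative switch point, and the generator object
    # itself stores the resume state that Duff's-device tricks encode by hand
    for i in range(n):
        trace.append(f"{name}{i}")
        yield

def run(tasks):
    # minimal round-robin event loop
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)       # resume the task until its next yield
            ready.append(task)
        except StopIteration:
            pass             # task finished, drop it

run([worker("a", 2), worker("b", 2)])
# trace is now ['a0', 'b0', 'a1', 'b1']
```

The scheduler interleaves the two workers without any manual state machine, which is exactly the convenience the cited libraries recover in C.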
1210.4263
1723322193
Threads and events are two common abstractions for writing concurrent programs. Because threads are often more convenient, but events more efficient, it is natural to want to translate the former into the latter. However, whereas there are many different event-driven styles, existing translators often apply ad-hoc rules which do not reflect this diversity. We analyse various control-flow and data-flow encodings in real-world event-driven code, and we observe that it is possible to generate any of these styles automatically from threaded code, by applying certain carefully chosen classical program transformations. In particular, we implement two of these transformations, lambda lifting and environments, in CPC, an extension of the C language for writing concurrent systems. Finally, we find out that, although rarely used in real-world programs because it is tedious to perform manually, lambda lifting yields better performance than environments in most of our benchmarks.
Tame @cite_7 is a C++ language extension and library which exposes events to the programmer but does not impose event-driven style: it generates state machines to avoid the stack-ripping issue and to retain a thread-like feeling. Similarly to Weave, the programmer needs to annotate the local variables that must be saved across context switches.
{ "cite_N": [ "@cite_7" ], "mid": [ "1487134436" ], "abstract": [ "Tame is a new event-based system for managing concurrency in network applications. Code written with Tame abstractions does not suffer from the \"stack-ripping\" problem associated with other event libraries. Like threaded code, tamed code uses standard control flow, automatically-managed local variables, and modular interfaces between callers and callees. Tame's implementation consists of C++ libraries and a source-to-source translator; no platform-specific support or compiler modifications are required, and Tame induces little runtime overhead. Experience with Tame in real-world systems, including a popular commercial Web site, suggests it is easy to adopt and deploy." ] }
1210.4263
1723322193
Threads and events are two common abstractions for writing concurrent programs. Because threads are often more convenient, but events more efficient, it is natural to want to translate the former into the latter. However, whereas there are many different event-driven styles, existing translators often apply ad-hoc rules which do not reflect this diversity. We analyse various control-flow and data-flow encodings in real-world event-driven code, and we observe that it is possible to generate any of these styles automatically from threaded code, by applying certain carefully chosen classical program transformations. In particular, we implement two of these transformations, lambda lifting and environments, in CPC, an extension of the C language for writing concurrent systems. Finally, we find out that, although rarely used in real-world programs because it is tedious to perform manually, lambda lifting yields better performance than environments in most of our benchmarks.
TaskJava @cite_19 implements the same idea as Tame, in Java, but preserves local variables automatically, storing them in a state record. Kilim @cite_18 is a message-passing framework for Java providing actor-based, lightweight threads. It is also implemented by a partial CPS conversion performed on annotated functions but, contrary to TaskJava, it works at the JVM bytecode level.
{ "cite_N": [ "@cite_19", "@cite_18" ], "mid": [ "2054564983", "1581908531" ], "abstract": [ "The event-driven programming style is pervasive as an efficient method for interacting with the environment. Unfortunately, the event-driven style severely complicates program maintenance and understanding, as it requires each logical flow of control to be fragmented across multiple independent callbacks. We propose tasks as a new programming model for organizing event-driven programs. Tasks are a variant of cooperative multi-threading and allow each logical control flow to be modularized in the traditional manner, including usage of standard control mechanisms like procedures and exceptions. At the same time, by using method annotations, task-based programs can be automatically and modularly translated into efficient event-based code, using a form of continuation passing style (CPS) translation. A linkable scheduler architecture permits tasks to be used in many different contexts. We have instantiated our model as a backward-compatible extension to Java, called TaskJava. We illustrate the benefits of our language through a formalization in an extension to Featherweight Java, and through a case study based on an open-source web server.", "This paper describes Kilim, a framework that employs a combination of techniques to help create robust, massively concurrent systems in mainstream languages such as Java: (i) ultra-lightweight, cooperatively-scheduled threads (actors), (ii) a message-passing framework (no shared memory, no locks) and (iii) isolation-aware messaging. Isolation is achieved by controlling the shape and ownership of mutable messages --- they must not have internal aliases and can only be owned by a single actor at a time. We demonstrate a static analysis built around isolation type qualifiers to enforce these constraints. Kilim comfortably scales to handle hundreds of thousands of actors and messages on modest hardware. It is fast as well --- task-switching is 1000x faster than Java threads and 60x faster than other lightweight tasking frameworks, and message-passing is 3x faster than Erlang (currently the gold standard for concurrency-oriented programming)." ] }
1210.4263
1723322193
Threads and events are two common abstractions for writing concurrent programs. Because threads are often more convenient, but events more efficient, it is natural to want to translate the former into the latter. However, whereas there are many different event-driven styles, existing translators often apply ad-hoc rules which do not reflect this diversity. We analyse various control-flow and data-flow encodings in real-world event-driven code, and we observe that it is possible to generate any of these styles automatically from threaded code, by applying certain carefully chosen classical program transformations. In particular, we implement two of these transformations, lambda lifting and environments, in CPC, an extension of the C language for writing concurrent systems. Finally, we find out that, although rarely used in real-world programs because it is tedious to perform manually, lambda lifting yields better performance than environments in most of our benchmarks.
@cite_8 is a conservative extension of Javascript for writing asynchronous RPCs, compiled to plain Javascript using a form of ad-hoc splitting and CPS conversion. Interestingly enough, the authors note that, in spite of Javascript's support for nested functions, they need to perform "function denesting" for performance reasons; they store free variables in environments ("closure objects") rather than using lambda lifting.
{ "cite_N": [ "@cite_8" ], "mid": [ "2100106097" ], "abstract": [ "The current approach to developing rich, interactive web applications relies on asynchronous RPCs (Remote Procedure Calls) to fetch new data to be displayed by the client. We argue that for the majority of web applications, this RPC-based model is not the correct abstraction: it forces programmers to use an awkward continuation-passing style of programming and to expend too much effort manually transferring data. We propose a new programming model, MapJAX, to remedy these problems. MapJAX provides the abstraction of data structures shared between the browser and the server, based on the familiar primitives of objects, locks, and threads. MapJAX also provides additional features (parallel for loops and prefetching) that help developers minimize response times in their applications. Map-JAX thus allows developers to focus on what they do best-writing compelling applications-rather than worrying about systems issues of data transfer and callback management. We describe the design and implementation of the MapJAX framework and show its use in three prototypical web applications: a mapping application, an email client, and a search-autocomplete application. We evaluate the performance of these applications under realistic Internet latency and bandwidth constraints and find that the unoptimized MapJAX versions perform comparably to the standard AJAX versions, while MapJAX performance optimizations can dramatically improve performance, by close to a factor of 2 relative to non-MapJAX code in some cases." ] }
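The two data-flow encodings discussed above can be shown side by side on a tiny CPS example. Here `fetch_cps` stands in for an asynchronous primitive and all names are hypothetical: in the environment style the continuation's free variables are captured in a closure, while lambda lifting hoists the continuation to top level and passes its former free variables as explicit parameters.

```python
from functools import partial

def fetch_cps(value, k):
    # stand-in for an asynchronous operation: it delivers its result to the
    # continuation k instead of returning it
    k(value)

# Environment style: `scale` and `k` live in the closure's environment.
def add_then_scale_env(x, scale, k):
    fetch_cps(x, lambda v: k((v + 1) * scale))

# Lambda-lifted style: the continuation is a top-level function and its
# former free variables are passed explicitly (bound here with partial).
def _scale_cont(scale, k, v):
    k((v + 1) * scale)

def add_then_scale_lifted(x, scale, k):
    fetch_cps(x, partial(_scale_cont, scale, k))

out = []
add_then_scale_env(4, 10, out.append)
add_then_scale_lifted(4, 10, out.append)
# out == [50, 50]: both encodings compute the same result
```

The difference is purely in where the data flow lives: in a heap-allocated closure environment, or in the lifted function's parameter list.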
1210.4263
1723322193
Threads and events are two common abstractions for writing concurrent programs. Because threads are often more convenient, but events more efficient, it is natural to want to translate the former into the latter. However, whereas there are many different event-driven styles, existing translators often apply ad-hoc rules which do not reflect this diversity. We analyse various control-flow and data-flow encodings in real-world event-driven code, and we observe that it is possible to generate any of these styles automatically from threaded code, by applying certain carefully chosen classical program transformations. In particular, we implement two of these transformations, lambda lifting and environments, in CPC, an extension of the C language for writing concurrent systems. Finally, we find out that, although rarely used in real-world programs because it is tedious to perform manually, lambda lifting yields better performance than environments in most of our benchmarks.
@cite_10 is a set of language constructs for composable asynchronous I/O in C and C++. Its authors introduce do..finish and async operators to write asynchronous requests in a synchronous style, and give an operational semantics. The language constructs are somewhat similar to those of Tame, but the implementation is very different, using LLVM code blocks or macros based on GCC's nested functions rather than source-to-source transformations.
{ "cite_N": [ "@cite_10" ], "mid": [ "2115429665" ], "abstract": [ "This paper introduces AC, a set of language constructs for composable asynchronous IO in native languages such as C C++. Unlike traditional synchronous IO interfaces, AC lets a thread issue multiple IO requests so that they can be serviced concurrently, and so that long-latency operations can be overlapped with computation. Unlike traditional asynchronous IO interfaces, AC retains a sequential style of programming without requiring code to use multiple threads, and without requiring code to be \"stack-ripped\" into chains of callbacks. AC provides an \"async\" statement to identify opportunities for IO operations to be issued concurrently, a \"do..finish\" block that waits until any enclosed \"async\" work is complete, and a \"cancel\" statement that requests cancellation of unfinished IO within an enclosing \"do..finish\". We give an operational semantics for a core language. We describe and evaluate implementations that are integrated with message passing on the Barrelfish research OS, and integrated with asynchronous file and network IO on Microsoft Windows. We show that AC offers comparable performance to existing C C++ interfaces for asynchronous IO, while providing a simpler programming model." ] }
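The trade-off between environments and lambda lifting discussed above can be sketched as follows (a minimal illustration in Python, used here as neutral notation; the function names are hypothetical and not taken from CPC, MapJAX or AC):

```python
# Nested version: `step` is a closure, so its free variable `n`
# must live in some environment ("closure object") at run time.
def counter_nested(n):
    def step(acc):            # free variable: n
        return acc + n
    total = 0
    for _ in range(3):
        total = step(total)
    return total

# Lambda-lifted version: `step_lifted` is hoisted to the top level
# and the former free variable becomes an explicit extra parameter,
# so no environment needs to be allocated.
def step_lifted(acc, n):
    return acc + n

def counter_lifted(n):
    total = 0
    for _ in range(3):
        total = step_lifted(total, n)
    return total
```

Both variants compute the same result; a compiler chooses between them based on allocation and call overheads, which is exactly the kind of trade-off the benchmarks mentioned above measure.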
1210.3635
2950339729
We give geometric descriptions of the category C_k(n,d) of rational polynomial representations of GL_n over a field k of degree d for d less than or equal to n, the Schur functor and Schur-Weyl duality. The descriptions and proofs use a modular version of Springer theory and relationships between the equivariant geometry of the affine Grassmannian and the nilpotent cone for the general linear groups. Motivated by this description, we propose generalizations for an arbitrary connected complex reductive group of the category C_k(n,d) and the Schur functor.
Another closely-related picture arises in joint work with Achar. Motivated by the equivalence between @math and @math , we propose in @cite_2 a geometric analogue of Ringel duality for the equivariant derived category @math . The definition is very similar to that of the geometric Schur functor in this paper. In particular, we define a functor [ i^* i_*[ - ] : D_G( \mathcal{N} ;k) \to D_G( \mathcal{N} ;k) ] and show that it is an autoequivalence.
{ "cite_N": [ "@cite_2" ], "mid": [ "1503666173" ], "abstract": [ "Given the nilpotent cone of a complex reductive Lie algebra, we consider its equivariant constructible derived category of sheaves with coefficients in an arbitrary field. This category and its subcategory of perverse sheaves play an important role in Springer theory and the theory of character sheaves. We show that the composition of the Fourier--Sato transform on the Lie algebra followed by restriction to the nilpotent cone restricts to an autoequivalence of the derived category of the nilpotent cone. In the case of @math , we show that this autoequivalence can be regarded as a geometric version of Ringel duality for the Schur algebra." ] }
1210.3312
1784635377
This paper describes Artex, another algorithm for Automatic Text Summarization. In order to rank sentences, a simple inner product is calculated between each sentence, a document vector (text topic) and a lexical vector (vocabulary used by a sentence). Summaries are then generated by assembling the highest-ranked sentences. No rule-based linguistic post-processing is necessary in order to obtain summaries. Tests over several datasets (coming from Document Understanding Conferences (DUC), Text Analysis Conferences (TAC), evaluation campaigns, etc.) in French, English and Spanish have shown that the summarizer achieves interesting results.
Research in Automatic Text Summarization was introduced by H.P. Luhn in 1958 @cite_2 . In the strategy proposed by Luhn, the sentences are scored for their component word values as determined by tf*idf-like weights. Scored sentences are then ranked and selected from the top until some summary length threshold is reached. Finally, the summary is generated by assembling the selected sentences in the original source order. Although fairly simple, this extractive methodology is still used in current approaches. Later on, @cite_6 extended this work by adding simple heuristic features such as the position of sentences in the text or key phrases that indicate the importance of the sentences. As the range of possible features for source characterization widened, choosing appropriate features, feature weights and feature combinations has become a central issue.
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2166347079", "1974339500" ], "abstract": [ "This paper describes new methods of automatically extracting documents for screening purposes, i.e. the computer selection of sentences having the greatest potential for conveying to the reader the substance of the document. While previous work has focused on one component of sentence significance, namely, the presence of high-frequency content words (key words), the methods described here also treat three additional components: pragmatic words (cue words); title and heading words; and structural indicators (sentence location). The research has resulted in an operating system and a research methodology. The extracting system is parameterized to control and vary the influence of the above four components. The research methodology includes procedures for the compilation of the required dictionaries, the setting of the control parameters, and the comparative evaluation of the automatic extracts with manually produced extracts. The results indicate that the three newly proposed components dominate the frequency component in the production of better extracts.", "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form is scanned by an IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \"auto-abstract.\"" ] }
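The Luhn-style extractive pipeline described above (frequency-based sentence scoring, ranking, and reassembly in source order) can be sketched as follows; this is a minimal illustration with a toy stop-list, not the exact scoring used by Luhn's original system or by Artex:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "is", "of", "and", "in", "to"}  # toy stop-list

def luhn_summary(text, n_sentences=1):
    """Score sentences by the frequency weights of their content words,
    keep the top-ranked ones, and reassemble them in source order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        toks = [w for w in re.findall(r"[a-z]+", sentence.lower())
                if w not in STOPWORDS]
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    # rank by score, keep the top n, then restore original source order
    ranked = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in ranked)
```

Normalizing each sentence score by its token count, as done here, keeps long sentences from winning merely by containing more words.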
1210.3312
1784635377
This paper describes Artex, another algorithm for Automatic Text Summarization. In order to rank sentences, a simple inner product is calculated between each sentence, a document vector (text topic) and a lexical vector (vocabulary used by a sentence). Summaries are then generated by assembling the highest-ranked sentences. No rule-based linguistic post-processing is necessary in order to obtain summaries. Tests over several datasets (coming from Document Understanding Conferences (DUC), Text Analysis Conferences (TAC), evaluation campaigns, etc.) in French, English and Spanish have shown that the summarizer achieves interesting results.
A simple rule-based method is used for sentence splitting: documents are chunked at period, exclamation and question marks. Words are lowercased and cleared of sloppy punctuation. Words with fewer than 2 occurrences ( @math ) are eliminated (a Hapax legomenon appears only once in a document). Words that do not carry meaning, such as functional or very common words, are removed; small stop-lists (depending on the language) are used in this step. The remaining words are replaced by their canonical form using lemmatization, stemming, ultra-stemming or none of them (raw text). Four methods of normalization were applied after filtering: Lemmatization by simple dictionaries of morphological families; these dictionaries have 1.32M, 208K and 316K word entries in Spanish, English and French, respectively. Porter's stemming, available at Snowball (web site http://snowball.tartarus.org/texts/stemmersoverview.html ) for English, Spanish and French, among other languages. Ultra-stemming, which seems to be very efficient and produces a compact matrix representation @cite_17 ; ultra-stemming considers only the first @math letters of each word. For example, in the case of ultra-stemming to the first letter ( @math ), inflected verbs like "sing", "song", "sings", "singing"... or proper names "smith", "snowboard", "sex",... are replaced by the letter "s".
{ "cite_N": [ "@cite_17" ], "mid": [ "1579835159" ], "abstract": [ "In Automatic Text Summarization, preprocessing is an important phase to reduce the space of textual representation. Classically, stemming and lemmatization have been widely used for normalizing words. However, even using normalization on large texts, the curse of dimensionality can disturb the performance of summarizers. This paper describes a new method for normalization of words to further reduce the space of representation. We propose to reduce each word to its initial letters, as a form of Ultra-stemming. The results show that Ultra-stemming not only preserve the content of summaries produced by this representation, but often the performances of the systems can be dramatically improved. Summaries on trilingual corpora were evaluated automatically with Fresa. Results confirm an increase in the performance, regardless of summarizer system used." ] }
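The filtering and ultra-stemming steps described above can be sketched as follows (a minimal illustration; the stop-list and function names are made up for the example):

```python
from collections import Counter

STOPLIST = {"the", "of", "and"}  # tiny illustrative stop-list

def ultra_stem(words, n=1):
    """Ultra-stemming: reduce each word to its first n letters."""
    return [w.lower()[:n] for w in words]

def preprocess(tokens, min_count=2, n=1):
    """Drop stop-words and hapax legomena, then apply ultra-stemming."""
    counts = Counter(t.lower() for t in tokens)
    kept = [t.lower() for t in tokens
            if t.lower() not in STOPLIST and counts[t.lower()] >= min_count]
    return ultra_stem(kept, n)
```

With @math = 1, "sing", "sings" and "smith" all collapse to "s", which is what makes the resulting term-by-sentence matrix so compact.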
1210.3241
2144956557
The paper introduces a framework for representation and acquisition of knowledge emerging from large samples of textual data. We utilise a tensor-based, distributional representation of simple statements extracted from text, and show how one can use the representation to infer emergent knowledge patterns from the textual data in an unsupervised manner. Examples of the patterns we investigate in the paper are implicit term relationships or conjunctive IF-THEN rules. To evaluate the practical relevance of our approach, we apply it to annotation of life science articles with terms from MeSH (a controlled biomedical vocabulary and thesaurus).
Regarding the application of our framework to annotation of biomedical articles, a body of more or less recent works exists, like @cite_8 , @cite_9 , @cite_5 , @cite_14 or @cite_0 (the second, third and fifth of these approaches are either used or considered for use as a support service for the professional annotators of the articles on PubMed, a biomedical literature repository). The state-of-the-art methods, however, often require at least an indirect input from human users before they can produce annotations of new articles automatically. For instance, @cite_9 and @cite_0 require a large corpus of previously annotated articles for learning and ranking possible annotations of new resources. Other methods like @cite_5 require rather sophisticated tuning (e.g., experimenting with parameter settings or with the processing pipeline composition) for optimum performance on new data. This is not the case for our approach, which can work in a purely unsupervised manner off the shelf.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_0", "@cite_5" ], "mid": [ "2114370860", "1550518777", "2160171761", "2152143870", "2097553584" ], "abstract": [ "Motivation: We report on the development of a generic text categorization system designed to automatically assign biomedical categories to any input text. Unlike usual automatic text categorization systems, which rely on data-intensive models extracted from large sets of training data, our categorizer is largely data-independent. Methods: In order to evaluate the robustness of our approach we test the system on two different biomedical terminologies: the Medical Subject Headings (MeSH) and the Gene Ontology (GO). Our lightweight categorizer, based on two ranking modules, combines a pattern matcher and a vector space retrieval engine, and uses both stems and linguistically-motivated indexing units. Results and Conclusion: Results show the effectiveness of phrase indexing for both GO and MeSH categorization, but we observe the categorization power of the tool depends on the controlled vocabulary: precision at high ranks ranges from above 90 for MeSH to <20 for GO, establishing a new baseline for categorizers based on retrieval methods. Contact: Patrick.Ruch@sim.hcuge.ch", "For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine.", "The Medical Text Indexer (MTI) is a program for producing MeSH indexing recommendations. It is the major product of NLM’s Indexing Initiative and has been used in both semi-automated and fully automated indexing environments at the Library since mid 2002. We report here on an experiment conducted with MEDLINE indexers to evaluate MTI’s performance and to generate ideas for its improvement as a tool for user-assisted indexing. We also discuss some filtering techniques developed to improve MTI’s accuracy for use primarily in automatically producing the indexing for several abstracts collections.", "Background Due to the high cost of manual curation of key aspects from the scientific literature, automated methods for assisting this process are greatly desired. Here, we report a novel approach to facilitate MeSH indexing, a challenging task of assigning MeSH terms to MEDLINE citations for their archiving and retrieval. @PARASPLIT Methods Unlike previous methods for automatic MeSH term assignment, we reformulate the indexing task as a ranking problem such that relevant MeSH headings are ranked higher than those irrelevant ones. Specifically, for each document we retrieve 20 neighbor documents, obtain a list of MeSH main headings from neighbors, and rank the MeSH main headings using ListNet–a learning-to-rank algorithm. We trained our algorithm on 200 documents and tested on a previously used benchmark set of 200 documents and a larger dataset of 1000 documents. @PARASPLIT Results Tested on the benchmark dataset, our method achieved a precision of 0.390, recall of 0.712, and mean average precision (MAP) of 0.626. In comparison to the state of the art, we observe statistically significant improvements as large as 39 in MAP (p-value <0.001). Similar significant improvements were also obtained on the larger document set. @PARASPLIT Conclusion Experimental results show that our approach makes the most accurate MeSH predictions to date, which suggests its great potential in making a practical impact on MeSH indexing. Furthermore, as discussed the proposed learning framework is robust and can be adapted to many other similar tasks beyond MeSH indexing in the biomedical domain. All data sets are available at: .", "The volume of biomedical literature has experienced explosive growth in recent years. This is reflected in the corresponding increase in the size of MEDLINE^(R), the largest bibliographic database of biomedical citations. Indexers at the US National Library of Medicine (NLM) need efficient tools to help them accommodate the ensuing workload. After reviewing issues in the automatic assignment of Medical Subject Headings (MeSH^(R) terms) to biomedical text, we focus more specifically on the new subheading attachment feature for NLM's Medical Text Indexer (MTI). Natural Language Processing, statistical, and machine learning methods of producing automatic MeSH main heading subheading pair recommendations were assessed independently and combined. The best combination achieves 48 precision and 30 recall. After validation by NLM indexers, a suitable combination of the methods presented in this paper was integrated into MTI as a subheading attachment feature producing MeSH indexing recommendations compliant with current state-of-the-art indexing practice." ] }
1210.3241
2144956557
The paper introduces a framework for representation and acquisition of knowledge emerging from large samples of textual data. We utilise a tensor-based, distributional representation of simple statements extracted from text, and show how one can use the representation to infer emergent knowledge patterns from the textual data in an unsupervised manner. Examples of the patterns we investigate in the paper are implicit term relationships or conjunctive IF-THEN rules. To evaluate the practical relevance of our approach, we apply it to annotation of life science articles with terms from MeSH (a controlled biomedical vocabulary and thesaurus).
By comparing the row vectors in corpus tensor matricisations, one essentially compares the meaning of the corresponding label terms, as it is emerging from the underlying data. For exploring the matricised perspectives, one can use linear algebra methods that have been proven to work by countless successful applications to vector space analysis in the last couple of decades @cite_6 @cite_11 @cite_7 . Large feature spaces can be reliably reduced to a more manageable and less noisy number of dimensions by techniques like singular value decomposition or random indexing (see http://en.wikipedia.org/wiki/Dimension_reduction ). After the (optional) dimensionality reduction, the perspective vectors can be compared in a well-founded manner by measures like cosine similarity (see http://en.wikipedia.org/wiki/Cosine_similarity ), as illustrated in Example .
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_11" ], "mid": [ "", "2165612380", "2147152072" ], "abstract": [ "", "In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstating the usefulness of the model.", "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising." ] }
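The vector comparison step mentioned above can be sketched with a plain cosine-similarity function (a minimal pure-Python illustration; real systems would typically apply it after SVD or random-indexing dimensionality reduction):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

Vectors pointing in the same direction score 1.0 regardless of magnitude, which is why cosine similarity is preferred over raw dot products when perspective vectors differ in length.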
1210.3234
2143592605
In this paper, we explore the risks of friends in social networks caused by their friendship patterns, by using real life social network data and starting from a previously defined risk model. Particularly, we observe that risks of friendships can be mined by analyzing users' attitude towards friends of friends. This allows us to give new insights into friendship and risk dynamics on social networks.
Privacy risks associated with friends' actions in information disclosure have been studied in @cite_7 , but the authors work with direct actions of friends (e.g., re-sharing a user's photos) rather than their friendship patterns. Recent privacy research has focused on creating global models of risk or privacy rather than finding the best privacy settings, so that ideal privacy settings can be mined automatically and presented to the user more easily. In @cite_8 , the authors prepared a risk model for social network users in order to regulate personal data disclosure. Similarly, @cite_14 modeled privacy by considering how sensitive personal data is disclosed in interactions. Although users assign global privacy or risk scores to other social network users, friend roles in information disclosure are ignored in these works.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8" ], "mid": [ "2107855415", "1602027763", "2069714447" ], "abstract": [ "A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities. In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.", "As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended remain private. By aggregating the information exposed in this manner, we demonstrate how a user's private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook's privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced on group content.", "Several efforts have been made for more privacy aware Online Social Networks (OSNs) to protect personal data against various privacy threats. However, despite the relevance of these proposals, we believe there is still the lack of a conceptual model on top of which privacy tools have to be designed. Central to this model should be the concept of risk. Therefore, in this paper, we propose a risk measure for OSNs. The aim is to associate a risk level with social network users in order to provide other users with a measure of how much it might be risky, in terms of disclosure of private information, to have interactions with them. We compute risk levels based on similarity and benefit measures, by also taking into account the user risk attitudes. In particular, we adopt an active learning approach for risk estimation, where user risk attitude is learned from few required user interactions. The risk estimation process discussed in this paper has been developed into a Facebook application and tested on real data. The experiments show the effectiveness of our proposal." ] }
1210.3234
2143592605
In this paper, we explore the risks of friends in social networks caused by their friendship patterns, by using real life social network data and starting from a previously defined risk model. Particularly, we observe that risks of friendships can be mined by analyzing users' attitude towards friends of friends. This allows us to give new insights into friendship and risk dynamics on social networks.
An advantage of global models is that once they are learned, privacy settings can be transferred and applied to other users. In one such shared-privacy approach, @cite_20 proposes the use of privacy-setting suites which are specified by friends or trusted experts. However, the authors do not use a global risk or privacy model, and users must know which suites to use without knowing the risk of the social network users surrounding them.
{ "cite_N": [ "@cite_20" ], "mid": [ "1981743035" ], "abstract": [ "Creating privacy controls for social networks that are both expressive and usable is a major challenge. Lack of user understanding of privacy settings can lead to unwanted disclosure of private information and, in some cases, to material harm. We propose a new paradigm which allows users to easily choose “suites” of privacy settings which have been specified by friends or trusted experts, only modifying them if they wish. Given that most users currently stick with their default, operator-chosen settings, such a system could dramatically increase the privacy protection that most users experience with minimal time investment." ] }
1210.2195
2952030618
While past research in answer-set programming (ASP) mainly focused on theory, ASP solver technology, and applications, the present work situates itself in the context of a quite recent research trend: development support for ASP. In particular, we propose to augment answer-set programs with additional meta-information formulated in a dedicated annotation language, called LANA. This language allows the grouping of rules into coherent blocks and to specify language signatures, types, pre- and postconditions, as well as unit tests for such blocks. While these annotations are invisible to an ASP solver, as they take the form of program comments, they can be interpreted by tools for documentation, testing, and verification purposes, as well as to eliminate sources of common programming errors by realising syntax checking or code completion features. To demonstrate its versatility, we introduce two such tools, viz. (i) ASPDOC, for generating an HTML documentation for a program based on the annotated information, and (ii) ASPUNIT, for running and monitoring unit tests on program blocks. LANA is also exploited in the SeaLion system, an integrated development environment for ASP based on Eclipse. To appear in Theory and Practice of Logic Programming
In general, developing and debugging programs in a declarative language is quite different from software engineering in a more traditional procedural or object-oriented programming language. With larger programs for real-world applications being written, it is vital to support the programmer with the right tools. In recent years, some work has been done to provide the ASP programmer with dedicated tools. The integrated development environments @cite_22 and @cite_21 provide, among other features, syntax colouring and syntax checking for ASP programs and run as an Eclipse front-end to solvers. IDEs for the solver and its extensions are discussed by peritecive07 and aspide . Debugging in ASP is supported by @cite_20 , which makes use of ASP to explain and handle unexpected outcomes like missing atoms in an answer set or the absence of an answer set. aspviz and kara provide mechanisms to visualise answer sets of a given program to support code debugging.
{ "cite_N": [ "@cite_21", "@cite_22", "@cite_20" ], "mid": [ "", "33233687", "2624790684" ], "abstract": [ "", "It has been recognised that better programming tools are required to support the logic programming paradigm of Answer Set Programming (ASP), especially when larger scale applications need to be developed. In order to meet this demand, the aspects of programming in ASP that require better support need to be investigated, and suitable tools to support them identified and implemented. In this paper we detail an exploratory development approach to implementing an Integrated Development Environment (IDE) for ASP, the AnsProlog* Programming Environment (APE). APE is implemented as a plug-in for the Eclipse platform. Given that an IDE is itself composed of a set of programming tools, this approach is used to identify a set of tool requirements for ASP, together with suggestions for improvements to existing tools and programming practices.", "Answer-set programming (ASP) is a logic programming paradigm for declarative problem solving which gained increasing importance during the last decade. However, so far hardly any tools exist supporting software engineers in developing answer-set programs, and there are no standard methodologies for handling unexpected outcomes of a program. Thus, writing answer-set programs is sometimes quite intricate, especially when large programs for real-world applications are required. In order to increase the usability of ASP, the development of appropriate debugging strategies is therefore vital. In this paper, we describe the system spock, a debugging support tool for answer-set programs making use of ASP itself. The implemented techniques maintain the declarative nature of ASP within the debugging process and are independent from the actual computation of answer sets." ] }
1210.2162
1668901377
How many labeled examples are needed to estimate a classifier's performance on a new dataset? We study the case where data is plentiful, but labels are expensive. We show that by making a few reasonable assumptions on the structure of the data, it is possible to estimate performance curves, with confidence bounds, using a small number of ground truth labels. Our approach, which we call Semisupervised Performance Evaluation (SPE), is based on a generative model for the classifier's confidence scores. In addition to estimating the performance of classifiers on new datasets, SPE can be used to recalibrate a classifier by re-estimating the class-conditional confidence distributions.
Previous approaches for estimating classifier performance with few labels fall into two categories: stratified sampling and active estimation using importance sampling. Bennett and Carvalho @cite_20 suggested that the accuracy of classifiers can be estimated cost-effectively by dividing the data into disjoint strata based on the item scores, and proposed an online algorithm for sampling from the strata. This work has since been generalized to other classifier performance metrics, such as precision and recall @cite_10 . Others proposed instead to use importance sampling to focus labeling effort on data items with high classifier uncertainty, and applied it to standard loss functions @cite_15 and F-measures @cite_8 . While both of these approaches assume that the classifier threshold @math is fixed (see s:model ) and that a single scalar performance measure is desired, SPE can be applied to the relationship between different performance measures in the form of performance curves.
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_20", "@cite_8" ], "mid": [ "2128511958", "2036735563", "2116917894", "2108736173" ], "abstract": [ "We address the problem of estimating the Fα-measure of a given model as accurately as possible on a fixed labeling budget. This problem occurs whenever an estimate cannot be obtained from held-out training data; for instance, when data that have been used to train the model are held back for reasons of privacy or do not reflect the test distribution. In this case, new test instances have to be drawn and labeled at a cost. An active estimation procedure selects instances according to an instrumental sampling distribution. An analysis of the sources of estimation error leads to an optimal sampling distribution that minimizes estimator variance. We explore conditions under which active estimates of Fα-measures are more accurate than estimates based on instances sampled from the test distribution.", "Machine learning often relies on costly labeled data, and this impedes its application to new classification and information extraction problems. This has motivated the development of methods for leveraging abundant prior knowledge about these problems, including methods for lightly supervised learning using model expectation constraints. Building on this work, we envision an interactive training paradigm in which practitioners perform evaluation, analyze errors, and provide and refine expectation constraints in a closed loop. In this paper, we focus on several key subproblems in this paradigm that can be cast as selecting a representative sample of the unlabeled data for the practitioner to inspect. To address these problems, we propose stratified sampling methods that use model expectations as a proxy for latent output variables. In classification and sequence labeling experiments, these sampling strategies reduce accuracy evaluation effort by as much as 53%, provide more reliable estimates of @math for rare labels, and aid in the specification and refinement of constraints.", "Deploying a classifier to large-scale systems such as the web requires careful feature design and performance evaluation. Evaluation is particularly challenging because these large collections frequently change. In this paper we adapt stratified sampling techniques to evaluate the precision of classifiers deployed in large-scale systems. We investigate different types of stratification strategies, and then we derive a new online sampling algorithm that incrementally approximates the theoretical optimal disproportionate sampling strategy. In experiments, the proposed algorithm significantly outperforms both simple random sampling as well as other types of stratified sampling, with an average reduction of about 20% in labeling effort to reach the same confidence and interval-bounds on precision", "We address the problem of evaluating the risk of a given model accurately at minimal labeling costs. This problem occurs in situations in which risk estimates cannot be obtained from held-out training data, because the training data are unavailable or do not reflect the desired test distribution. We study active risk estimation processes in which instances are actively selected by a sampling process from a pool of unlabeled test instances and their labels are queried. We derive the sampling distribution that minimizes the estimation error of the active risk estimator when used to select instances from the pool. An analysis of the distribution that governs the estimator leads to confidence intervals. We empirically study conditions under which the active risk estimate is more accurate than a standard risk estimate that draws equally many instances from the test distribution." ] }
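The stratified-sampling estimator described in this record can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (equal-frequency score bins and a fixed per-stratum labeling budget), not the online algorithm of Bennett and Carvalho:

```python
import random

def stratified_accuracy(scores, correct, n_strata=4, budget_per_stratum=50, rng=None):
    """Estimate classifier accuracy by labeling a few items within score-based strata.

    scores:  classifier confidence scores, one per item
    correct: 1 if the classifier's prediction is correct for that item, else 0
             (in practice only the sampled items would actually be labeled)
    """
    rng = rng or random.Random(0)
    n = len(scores)
    # Partition items into strata of (roughly) equal size by sorted score.
    order = sorted(range(n), key=lambda i: scores[i])
    strata = [order[h * n // n_strata:(h + 1) * n // n_strata] for h in range(n_strata)]
    estimate = 0.0
    for stratum in strata:
        if not stratum:
            continue
        # Label a small random sample from this stratum.
        sample = rng.sample(stratum, min(budget_per_stratum, len(stratum)))
        acc_h = sum(correct[i] for i in sample) / len(sample)
        estimate += (len(stratum) / n) * acc_h   # weight by stratum size
    return estimate
```

Because classifier errors cluster by score, the per-stratum accuracies have low variance, so the weighted combination needs fewer labels than uniform sampling for the same confidence.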
1210.2429
2105637827
Android and Facebook provide third-party applications with access to users’ private data and the ability to perform potentially sensitive operations (e.g., post to a user’s wall or place phone calls). As a security measure, these platforms restrict applications’ privileges with permission systems: users must approve the permissions requested by applications before the applications can make privacy- or security-relevant API calls. However, recent studies have shown that users often do not understand permission requests and lack a notion of typicality of requests. As a first step towards simplifying
The Facebook Platform supports third-party integration with Facebook. Facebook lists applications in an "Apps and Games" market alongside information about the applications, including the numbers of installs, the average ratings, and the names of friends who use the same applications. Through the Facebook Platform, applications can read users' profile information, post to users' news feeds, read and send messages, control users' advertising preferences, etc. Access to these resources is limited by a permission system, and developers must request the appropriate permissions for their applications to function. Applications can request permissions at any time, but most permission requests are displayed during installation as a condition of installation. A prior study surveyed Facebook applications and found that their permission usage is similar to that of Android applications: a small number of permissions are heavily used, and popular applications request more permissions @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2086648145" ], "abstract": [ "Third-party applications (apps) drive the attractiveness of web and mobile application platforms. Many of these platforms adopt a decentralized control strategy, relying on explicit user consent for granting permissions that the apps request. Users have to rely primarily on community ratings as the signals to identify the potentially harmful and inappropriate apps even though community ratings typically reflect opinions about perceived functionality or performance rather than about risks. With the arrival of HTML5 web apps, such user-consent permission systems will become more widespread. We study the effectiveness of user-consent permission systems through a large scale data collection of Facebook apps, Chrome extensions and Android apps. Our analysis confirms that the current forms of community ratings used in app markets today are not reliable indicators of privacy risks of an app. We find some evidence indicating attempts to mislead or entice users into granting permissions: free applications and applications with mature content request more permissions than is typical; 'look-alike' applications which have names similar to popular applications also request more permissions than is typical. We also find that across all three platforms popular applications request more permissions than average." ] }
1210.2610
2117955223
Lambda calculus is the basis of functional programming and higher order proof assistants. However, little is known about combinatorial properties of lambda terms, in particular, about their asymptotic distribution and random generation. This paper tries to answer questions like: How many terms of a given size are there? What is a "typical" structure of a simply typable term? Despite their ostensible simplicity, these questions still remain unanswered, whereas solutions to such problems are essential for testing compilers and optimizing programs whose expected efficiency depends on the size of terms. Our approach toward the afore-mentioned problems may be later extended to any language with bound variables, i.e., with scopes and declarations. This paper presents two complementary approaches: one, theoretical, uses complex analysis and generating functions, the other, experimental, is based on a generator of lambda-terms. Thanks to de Bruijn indices, we provide three families of formulas for the number of closed lambda terms of a given size and we give four relations between these numbers which have interesting combinatorial interpretations. As a by-product of the counting formulas, we design an algorithm for generating lambda terms. Performed tests provide us with experimental data, like the average depth of bound variables and the average number of head lambdas. We also create random generators for various sorts of terms. Thereafter, we conduct experiments that answer questions like: What is the ratio of simply typable terms among all terms? (Very small!) How are simply typable lambda terms distributed among all lambda terms? (A typable term almost always starts with an abstraction.) In this paper, abstractions and applications have size 1 and variables have size 0.
Since we cited, as an application, the random generation of terms for the construction of samples for debugging functional programming compilers and the connection with languages with bound variables, it is sensible to mention Csmith @cite_3 , which is the most recent and the most efficient bug-finding tool for C compilers. It is based on random program generation and uses filters for generating programs enforcing semantic restrictions, like ours when generating simply typable terms. However, the generation is not based on unranking, so Csmith lacks the ability to construct test cases of a specific size on demand. Csmith can, however, generate large programs, which proves useful, since the greatest number of distinct crash errors is found by programs containing 8K-16K tokens; one may nevertheless wonder whether this is a consequence of the non-uniformity of the distribution.
{ "cite_N": [ "@cite_3" ], "mid": [ "2098456636" ], "abstract": [ "Compilers should be correct. To improve the quality of C compilers, we created Csmith, a randomized test-case generation tool, and spent three years using it to find compiler bugs. During this period we reported more than 325 previously unknown bugs to compiler developers. Every compiler we tested was found to crash and also to silently generate wrong code when presented with valid input. In this paper we present our compiler-testing tool and the results of our bug-hunting study. Our first contribution is to advance the state of the art in compiler testing. Unlike previous tools, Csmith generates programs that cover a large subset of C while avoiding the undefined and unspecified behaviors that would destroy its ability to automatically find wrong-code bugs. Our second contribution is a collection of qualitative and quantitative results about the bugs we have found in open-source C compilers." ] }
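The size convention stated in this record (abstractions and applications of size 1, variables of size 0, de Bruijn indices) leads to a simple recurrence for counting closed terms; the memoized sketch below is one straightforward way to realize it, not the paper's own algorithm:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def terms(n, k):
    """Number of lambda terms of size n with k de Bruijn indices in scope.

    Size convention from the record: variables cost 0,
    abstractions and applications cost 1.
    """
    if n == 0:
        return k                      # a size-0 term is one of the k variables
    count = terms(n - 1, k + 1)       # abstraction: body in an extended scope
    for i in range(n):                # application: subterm sizes sum to n - 1
        count += terms(i, k) * terms(n - 1 - i, k)
    return count

def closed_terms(n):
    """Closed terms have no free indices."""
    return terms(n, 0)
```

Hand enumeration confirms the small cases (size 1: only λ0; size 2: λλ0, λλ1, λ(0 0)); an unranking generator would invert the same recurrence, mapping an index below `closed_terms(n)` to a concrete term.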
1210.1863
2154648409
When approximating a space curve, it is natural to consider whether the knot type of the original curve is preserved in the approximant. This preservation is of strong contemporary interest in computer graphics and visualization. We establish a criterion to preserve knot type under approximation that relies upon pointwise convergence and convergence in total curvature.
The preservation of topology in computer graphics and visualization has previously been articulated in two primary applications @cite_16 : preservation of isotopic equivalence by approximations; and preservation of isotopic equivalence during dynamic changes, such as protein unfolding.
{ "cite_N": [ "@cite_16" ], "mid": [ "2189418457" ], "abstract": [ "Ambient isotopic approximations are fundamental for correct representation of the embedding of geometric objects in R 3, with a detailed geometric construction given here. Using that geometry, an algorithm is presented for efficient update of these isotopic approximations for dynamic visualization with a molecular simulation." ] }
1210.1863
2154648409
When approximating a space curve, it is natural to consider whether the knot type of the original curve is preserved in the approximant. This preservation is of strong contemporary interest in computer graphics and visualization. We establish a criterion to preserve knot type under approximation that relies upon pointwise convergence and convergence in total curvature.
Recent progress was made for the class of Bézier curves, by providing stopping criteria for subdivision algorithms to ensure ambient isotopic equivalence for Bézier curves of any degree @math @cite_8 , extending the previous work of @cite_21 , which had been restricted to degree less than @math . This extension is based on theorems and sophisticated techniques on knot structures.
{ "cite_N": [ "@cite_21", "@cite_8" ], "mid": [ "2140016427", "48487988" ], "abstract": [ "Non-self-intersection is both a topological and a geometric property. It is known that non-self-intersecting regular Bezier curves have non-self-intersecting control polygons, after sufficiently many uniform subdivisions. Here a sufficient condition is given within ℝ3 for a non-self-intersecting, regular C 2 cubic Bezier curve to be ambient isotopic to its control polygon formed after sufficiently many subdivisions. The benefit of using the control polygon as an approximant for scientific visualization is presented in this paper.", "It is of increasing contemporary interest to preserve ambient isotopy during geometric modeling. Bezier curves are pervasive in computer aided geometric design, as one of the fundamental computational representations for geometric modeling. For Bezier curves, subdivision algorithms create control polygons as piecewise linear (PL) approximations that converge under Hausdorff distance. A natural question is whether subdivision produces topologically reliable PL approximations. Here we focus upon ambient isotopy and prove that sufficiently many subdivisions produce a control polygon ambient isotopic to the Bezier curve. We also derive closed-form formulas to compute the number of subdivision iterations to ensure ambient isotopic equivalence in the resulting approximation. This work relies upon explicitly constructing homeomorphism and ambient isotopy, which provides more algorithmic efficiency than only showing the existence of these equivalence relations." ] }
1210.1863
2154648409
When approximating a space curve, it is natural to consider whether the knot type of the original curve is preserved in the approximant. This preservation is of strong contemporary interest in computer graphics and visualization. We establish a criterion to preserve knot type under approximation that relies upon pointwise convergence and convergence in total curvature.
There exist results in the literature showing ambient isotopy from a different point of view @cite_17 @cite_18 . Specifically, there is an upper bound on distance and an upper bound on angles between corresponding points of two curves: if the corresponding distances and angles are within these upper bounds, then the curves are ambient isotopic. Milnor @cite_15 defined the total curvature for a @math curve using inscribed PL curves. The extension of the definition to piecewise @math curves is trivial. Consequently, Fenchel's Theorem can be applied to piecewise @math curves, as we need here.
{ "cite_N": [ "@cite_18", "@cite_15", "@cite_17" ], "mid": [ "", "2325575424", "1792595856" ], "abstract": [ "", "2π, equality holding only for plane convex curves. K. Borsuk, in 1947, extended this result to n dimensional space, and, in the same paper, conjectured that the total curvature of a knot in three dimensional space must exceed 4π. A proof of this conjecture is presented below. In proving this proposition, use will be made of a definition, suggested by R. H. Fox, of total curvature which is applicable to any closed curve. This general definition is validated by showing that the generalized total curvature K(C) is", "Generalizing Milnor’s result that an FTC (finite total curvature) knot has an isotopic inscribed polygon, we show that any two nearby knotted FTC graphs are isotopic by a small isotopy. We also show how to obtain sharper constants when the starting curve is smooth. We apply our main theorem to prove a limiting result for essential subarcs of a knot." ] }
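For a PL curve, the total curvature discussed in this record is just the sum of the turning (exterior) angles at the vertices; a minimal sketch for a closed polygon follows. Fenchel's theorem then says the total is at least 2π, with equality only for plane convex curves:

```python
import math

def total_curvature(vertices):
    """Total curvature of a closed PL curve: sum of turning angles at its vertices.

    vertices: list of (x, y, z) points in order, without repeating the start point.
    """
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def angle(u, v):
        # Angle between consecutive edge vectors, clamped for numerical safety.
        dot = sum(ui * vi for ui, vi in zip(u, v))
        nu = math.sqrt(sum(ui * ui for ui in u))
        nv = math.sqrt(sum(vi * vi for vi in v))
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    n = len(vertices)
    edges = [sub(vertices[(i + 1) % n], vertices[i]) for i in range(n)]
    return sum(angle(edges[i], edges[(i + 1) % n]) for i in range(n))
```

For inscribed polygons of a smooth curve, these sums converge (monotonically, by Milnor's argument) to the curve's total curvature, which is what makes the PL definition usable in approximation criteria.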
1210.1863
2154648409
When approximating a space curve, it is natural to consider whether the knot type of the original curve is preserved in the approximant. This preservation is of strong contemporary interest in computer graphics and visualization. We establish a criterion to preserve knot type under approximation that relies upon pointwise convergence and convergence in total curvature.
Milnor @cite_15 also proved the result restricted to inscribed curves, a version similar to the theorem presented here. That result was recently generalized to finite total curvature knots @cite_17 . The application to graphs was also established recently @cite_2 . Our proof here provides upper bounds on distance and total curvature, which leads to the formulation of algorithms.
{ "cite_N": [ "@cite_15", "@cite_2", "@cite_17" ], "mid": [ "2325575424", "2099367821", "1792595856" ], "abstract": [ "2π, equality holding only for plane convex curves. K. Borsuk, in 1947, extended this result to n dimensional space, and, in the same paper, conjectured that the total curvature of a knot in three dimensional space must exceed 4π. A proof of this conjecture is presented below. In proving this proposition, use will be made of a definition, suggested by R. H. Fox, of total curvature which is applicable to any closed curve. This general definition is validated by showing that the generalized total curvature K(C) is", "We define a new notion of total curvature, called net total curvature, for finite graphs embedded in R n , and investigate its properties. Two guiding principles are given by Milnor's way of measuring the local crookedness of a Jordan curve via a Crofton-type formula, and by considering the double cover of a given graph as an Eulerian circuit. The strength of combining these ideas in defining the curvature functional is (1) it allows us to interpret the singular non-euclidean behavior at the vertices of the graph as a superposition of vertices of a 1-dimensional manifold, and thus (2) one can compute the total curvature for a wide range of graphs by contrasting local and global properties of the graph utilizing the integral geometric representation of the curvature. A collection of results on upper and lower bounds of the total curvature on isotopy and homeomorphism classes of embeddings is presented, which in turn demonstrates the effectiveness of net total curvature as a new functional measuring complexity of spatial graphs in differential-geometric terms.", "Generalizing Milnor’s result that an FTC (finite total curvature) knot has an isotopic inscribed polygon, we show that any two nearby knotted FTC graphs are isotopic by a small isotopy. We also show how to obtain sharper constants when the starting curve is smooth. We apply our main theorem to prove a limiting result for essential subarcs of a knot." ] }
1210.0864
1763164889
Let @math be a class of probability distributions over the discrete domain @math . We show that if @math satisfies a rather general condition -- essentially, that each distribution in @math can be well-approximated by a variable-width histogram with few bins -- then there is a highly efficient (both in terms of running time and sample complexity) algorithm that can learn any mixture of @math unknown distributions from @math . We analyze several natural types of distributions over @math , including log-concave, monotone hazard rate and unimodal distributions, and show that they have the required structural property of being well-approximated by a histogram with few bins. Applying our general algorithm, we obtain near-optimally efficient algorithms for all these mixture learning problems.
MHR distributions: As noted above, MHR distributions appear frequently and play an important role in reliability theory and in economics (to the extent that the MHR condition is considered a standard assumption in these settings). Surprisingly, the problem of learning an unknown MHR distribution or mixture of such distributions has not been explicitly considered in the statistics literature. We note that several authors have considered the problem of estimating the hazard rate of an MHR distribution in different contexts, see e.g. @cite_48 @cite_13 @cite_14 @cite_50 .
{ "cite_N": [ "@cite_48", "@cite_14", "@cite_13", "@cite_50" ], "mid": [ "1982909230", "2188923044", "", "2303223340" ], "abstract": [ "Nonparametric estimators of the distribution function F and of its hazard function are constructed in the class of all IFR distributions.", "Consider non-parametric estimation of a decreasing density function f under the random (right) censorship model. Alternatively, consider estimation of a monotone increasing (or decreasing) hazard rate λ based on randomly right censored data. We show that the non-parametric maximum likelihood estimator of the density f (introduced by Laslett, 1982) is asymptotically equivalent to the estimator obtained by differentiating the least concave majorant of the Kaplan-Meier estimator, the non-parametric maximum likelihood estimator of the distribution function F in the larger model without any monotonicity assumption. A similar result is shown to hold for the non-parametric maximum likelihood estimator of an increasing hazard rate λ: the non-parametric maximum likelihood estimator of λ (introduced in the uncensored case by Prakasa Rao, 1970) is asymptotically equivalent to the estimator obtained by differentiation of the greatest convex minorant of the Nelson-Aalen estimator, the non-parametric maximum likelihood estimator of the cumulative hazard function Λ in the larger model without any monotonicity assumption. In proving these asymptotic equivalences, we also establish the asymptotic distributions of the different estimators at a fixed point at which the monotonicity assumption is strictly satisfied.", "", "We propose a new method for pointwise estimation of monotone, unimodal and U-shaped failure rates, under a right-censoring mechanism, using non-parametric likelihood ratios. The asymptotic distribution of the likelihood ratio is pivotal, though non-standard, and can therefore be used to construct asymptotic confidence intervals for the failure rate at a point of interest, via inversion. Major advantages of the new method lie in the facts that it completely avoids estimation of nuisance parameters, or the choice of a bandwidth tuning parameter, and is extremely easy to implement. The new method is shown to perform competitively in simulations, and is illustrated on a data set involving time to diagnosis of schizophrenia in the Jerusalem Perinatal Cohort Schizophrenia Study." ] }
1210.0864
1763164889
Let @math be a class of probability distributions over the discrete domain @math . We show that if @math satisfies a rather general condition -- essentially, that each distribution in @math can be well-approximated by a variable-width histogram with few bins -- then there is a highly efficient (both in terms of running time and sample complexity) algorithm that can learn any mixture of @math unknown distributions from @math . We analyze several natural types of distributions over @math , including log-concave, monotone hazard rate and unimodal distributions, and show that they have the required structural property of being well-approximated by a histogram with few bins. Applying our general algorithm, we obtain near-optimally efficient algorithms for all these mixture learning problems.
Unimodal distributions: The problem of learning a single unimodal distribution is well-understood: Birgé @cite_10 gave an efficient algorithm for learning continuous unimodal distributions (whose density is absolutely bounded); his algorithm, when translated to the discrete domain @math , requires @math samples. This sample size is also known to be optimal (up to constant factors) @cite_43 . In recent work, @cite_16 gave an efficient algorithm to learn @math -modal distributions over @math . We remark that their result does not imply ours, as even a mixture of two unimodal distributions over @math may have @math modes. We are not aware of prior work on efficiently learning mixtures of unimodal distributions.
{ "cite_N": [ "@cite_43", "@cite_16", "@cite_10" ], "mid": [ "2014255551", "2952993179", "2126204693" ], "abstract": [ "We consider the class of all unimodal densities defined on an interval of length L and bounded by H; we study the minimax risk over this class when estimating from n i.i.d. observations, the loss being measured by the L1 distance between the estimator and the true density.", "A @math -modal probability distribution over the discrete domain @math is one whose histogram has at most @math \"peaks\" and \"valleys.\" Such distributions are natural generalizations of monotone ( @math ) and unimodal ( @math ) probability distributions, which have been intensively studied in probability theory and statistics. In this paper we consider the problem of learning (i.e., performing density estimation of) an unknown @math -modal distribution with respect to the @math distance. The learning algorithm is given access to independent samples drawn from an unknown @math -modal distribution @math , and it must output a hypothesis distribution @math such that with high probability the total variation distance between @math and @math is at most @math . Our main goal is to obtain algorithms for this problem that use (close to) an information-theoretically optimal number of samples. We give an efficient algorithm for this problem that runs in time @math . For @math , the number of samples used by our algorithm is very close (within an @math factor) to being information-theoretically optimal. Prior to this work computationally efficient algorithms were known only for the cases @math Birge:87b,Birge:97 . A novel feature of our approach is that our learning algorithm crucially uses a new algorithm for property testing as a key subroutine. The learning algorithm uses the property tester to efficiently decompose the @math -modal distribution into @math (near-)monotone distributions, which are easier to learn.", "The Grenander estimator of a decreasing density, which is defined as the derivative of the concave envelope of the empirical c.d.f., is known to be a very good estimator of an unknown decreasing density on the half-line R+ when this density is not assumed to be smooth. It is indeed the maximum likelihood estimator and one can get precise upper bounds for its risk when the loss is measured by the L1-distance between densities. Moreover, if one restricts oneself to the compact subsets of decreasing densities bounded by H with support on [0, L] the risk of this estimator is within a fixed factor of the minimax risk. The same is true if one deals with the maximum likelihood estimator for unimodal densities with known mode. When the mode is unknown, the maximum likelihood estimator does not exist any more. We shall provide a general purpose estimator (together with a computational algorithm) for estimating nonsmooth unimodal densities. Its risk is the same as the risk of the Grenander estimator based on the knowledge of the true mode plus some lower order term. It can also cope with small departures from unimodality." ] }
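The structural property this record relies on — that the target density is well approximated by a variable-width histogram with few bins — can be illustrated with a simple equal-mass binning sketch. The bin-choice rule here (quantile cuts) is an assumption for illustration, not the paper's algorithm:

```python
def histogram_estimate(samples, n_bins):
    """Fit a variable-width histogram whose bins hold (roughly) equal sample mass.

    Returns a list of (left, right, density) triples covering the sample range,
    so narrow bins appear where the data is dense and wide bins where it is sparse.
    """
    xs = sorted(samples)
    n = len(xs)
    bins = []
    for b in range(n_bins):
        lo, hi = b * n // n_bins, (b + 1) * n // n_bins
        # Bin edges come from sample quantiles; adjacent bins share an edge.
        left = xs[lo] if b == 0 else bins[-1][1]
        right = xs[hi - 1] if b == n_bins - 1 else xs[hi]
        mass = (hi - lo) / n
        width = right - left
        bins.append((left, right, mass / width if width > 0 else 0.0))
    return bins
```

Few bins suffice exactly when the underlying density is close to piecewise constant on a small number of intervals, which is the condition the record's classes (log-concave, MHR, unimodal) are shown to satisfy.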
1210.1104
2220377616
In order to anticipate dangerous events, like a collision, an agent needs to make long-term predictions. However, those are challenging due to uncertainties in internal and external variables and environment dynamics. A sensorimotor model is acquired online by the mobile robot using a state-of-the-art method that learns the optical flow distribution in images, both in space and time. The learnt model is used to anticipate the optical flow up to a given time horizon and to predict an imminent collision by using reinforcement learning. We demonstrate that multi-modal predictions reduce to simpler distributions once actions are taken into account.
In our work we are very interested in providing long-term predictions. One option is to learn a model based on a differential equation of how sensor values change @cite_1 . We can then anticipate sensory states at arbitrary times by simulating such a system, although accuracy decreases quickly depending on model complexity. Unfortunately, this approach cannot be reused directly to predict collisions and cannot handle multi-modality unless a mixture is used.
{ "cite_N": [ "@cite_1" ], "mid": [ "61701622" ], "abstract": [ "This paper presents a predictive model of sensor readings for a mobile robot. The model predicts sensor readings for a given time horizon based on current sensor readings and velocities of wheels assumed for this horizon. Similar models for such anticipation have been proposed in the literature. The novelty of the model presented in the paper comes from the fact that its structure takes into account physical phenomena and is not just a black box, for example a neural network. From this point of view it may be regarded as a semi-phenomenological model. The model is developed for the Khepera robot, but after certain modifications, it may be applied for any robot with distance sensors such as infrared or ultrasonic sensors. Keywords—Mobile robot, sensors, prediction, anticipation. I. INTRODUCTION Models of mobile robots usually take into account only their kinematics and dynamics. Then variables describing the state of the mobile robot are: location coordinates, direction (azimuth) and sometimes velocity. In such models, readings from the distance sensors are usually not taken into account. Such measurements, describing the distance from obstacles, are utilized only during generation of the control signals (velocities of the wheels). Moreover, only present measurements are used without any anticipation of measurements. Such anticipation may improve the control quality. If, for example, during the so-called behavioral control the \"avoid obstacles\" rule is activated when the sensory readings exceed an assumed threshold, then the control signals might be improved based on the anticipation of the sensory readings. In the literature several attempts at anticipation of sensory readings are reported, but they do not serve the control of the robot; rather, they serve the selection of landmarks (3). For example in articles (2) and (5) an artificial feedforward neural network was proposed as a predictor of further measurements of sensors. The proposed predictor was very simple - it contained only one layer and took as inputs only past readings from neighbouring sensors and did not use wheel velocities of the mobile robot. Such a simplified model was good enough for the landmark selection based on the difference between the predicted and real readings but for the control purposes a" ] }
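Anticipating sensor values by forward-simulating a differential-equation model, as discussed in the related-work paragraph above, can be sketched with plain Euler integration. The linear decay model below is a hypothetical stand-in for a learned sensor model, used only for illustration:

```python
import math

def rollout(deriv, s0, horizon, dt=1e-3):
    """Predict a sensor value at time `horizon` by simulating ds/dt = deriv(s).

    Long rollouts accumulate both model error and integration error, which is
    why chained one-step predictions degrade as the prediction horizon grows.
    """
    steps = round(horizon / dt)
    s = s0
    for _ in range(steps):
        s += dt * deriv(s)   # one forward Euler step
    return s

# Example: a distance reading decaying toward zero, ds/dt = -s.
predicted = rollout(lambda s: -s, s0=1.0, horizon=1.0)
```

For this toy model the closed-form solution is s(t) = s0 * exp(-t), so the quality of the rollout can be checked directly; with a learned model no such ground truth exists, and the error growth must be estimated empirically.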
1210.1104
2220377616
In order to anticipate dangerous events, like a collision, an agent needs to make long-term predictions. However, those are challenging due to uncertainties in internal and external variables and environment dynamics. A sensorimotor model is acquired online by the mobile robot using a state-of-the-art method that learns the optical flow distribution in images, both in space and time. The learnt model is used to anticipate the optical flow up to a given time horizon and to predict an imminent collision by using reinforcement learning. We demonstrate that multi-modal predictions reduce to simpler distributions once actions are taken into account.
In order to provide the agent with longer-term predictions, some authors proposed chaining forward models, where each one provides a one-step prediction @cite_0 @cite_2 @cite_6 @cite_18 . Their results showed that agents that anticipate the sensory consequences of their actions behave more effectively than reactive agents. However, due to the intrinsic complexity of sensor data, some authors used a Mixture of Experts, where each expert was a Recurrent Neural Network (RNN) @cite_0 . Experiments were conducted in simulated environments with low-dimensional sensor data, so it is not clear how well the approach would scale to more realistic environments. Furthermore, this chaining process leads to an accumulation of prediction errors, so authors proposed a PCA-based filtering scheme @cite_18 or RNNs that also take as input the hidden state of the network from the previous step @cite_6 .
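The chaining of one-step forward models described above can be sketched as follows. This is a minimal illustration with a linear stand-in predictor (the cited works use RNNs or mixtures of experts); all names, shapes, and values are ours, and the point is only that each step feeds the previous prediction back in, which is why errors accumulate over the horizon:

```python
import numpy as np

def predict_one_step(state, action, W):
    """Illustrative one-step forward model: next_state = W @ [state; action]."""
    return W @ np.concatenate([state, action])

def rollout(state, actions, W):
    """Chain one-step predictions over a horizon; each prediction is fed
    back as the next input, so prediction errors accumulate."""
    trajectory = [state]
    for a in actions:
        state = predict_one_step(state, a, W)
        trajectory.append(state)
    return np.stack(trajectory)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 6))   # 4-dim state, 2-dim action (made up)
s0 = np.zeros(4)
actions = [np.ones(2)] * 5               # 5-step horizon
traj = rollout(s0, actions, W)
print(traj.shape)  # (6, 4): the initial state plus 5 predicted states
```

The filtering schemes mentioned above (PCA projection, or feeding back the RNN's hidden state) would sit inside the loop, denoising `state` before the next prediction step.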
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_6", "@cite_2" ], "mid": [ "2043385819", "2037091700", "2043369301", "2022950268" ], "abstract": [ "Abstract This paper describes how agents can learn an internal model of the world structurally by focusing on the problem of behavior-based articulation. We develop an on-line learning scheme—the so-called mixture of recurrent neural net (RNN) experts—in which a set of RNN modules become self-organized as experts on multiple levels, in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching of activated modules in the lower level actually represents the articulation of the sensory-motor flow. In the meantime, a set of RNNs in the higher level competes to learn the sequences of module switching in the lower level, by which articulation at a further, more abstract level can be achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. Our dynamical system analysis clarified the mechanism of the articulation. The possible correspondence between the articulation mechanism and the attention switching mechanism in thalamo-cortical loops is also discussed.", "Several scientists suggested that certain perceptual qualities are based on sensorimotor anticipation: for example, the softness of a sponge is perceived by anticipating the sensations resulting from a grasping movement. For the perception of spatial arrangements, this article demonstrates that this concept can be realized in a mobile robot. The robot first learned to predict how its visual input changes under movement commands. With this ability, two perceptual tasks could be solved: judging the distance to an obstacle in front by 'mentally' simulating a movement toward the obstacle, and recognizing a dead end by simulating either an obstacle-avoidance algorithm or a recursive search for an exit. A simulated movement contained a series of prediction steps. 
In each step, a multilayer perceptron anticipated the next image, which, however, became increasingly noisy. To denoise an image, it was split into patches, and each patch was projected onto a manifold obtained by modelling the density of the distribution of training patches with a mixture of Gaussian functions.", "This paper explores the possibility of providing robots with an 'inner world' based on internal simulation of perception rather than an explicit representational world model. First a series of initial experiments is discussed, in which recurrent neural networks were evolved to control collision-free corridor following behavior in a simulated Khepera robot and predict the next time step's sensory input as accurately as possible. Attempts to let the robot act blindly, i.e. repeatedly using its own prediction instead of the real sensory input, were not particularly successful. This motivated the second series of experiments, on which this paper focuses. A feed-forward network was used which, as above, controlled behavior and predicted sensory input. However, weight evolution was now guided by the sole fitness criterion of successful, 'blindfolded' corridor following behavior, including timely turns, as above using as input only own sensory predictions rather than actual sensory input. The trained robot is in some cases actually able to move blindly in a simple environment for hundreds of time steps, successfully handling several multi-step turns. Somewhat surprisingly, however, it does so based on self-generated input that is not particularly similar to the actual sensory values.", "The basic idea of our anticipatory approach to perception is to avoid the common separation of perception and generation of behavior and to fuse both aspects into a consistent neural process. Our approach tries to explain the phenomenon of perception, in particular, of perception at the level of sensorimotor intelligence, from a behavior-oriented point of view. 
Perception is assumed to be a generative process of anticipating the course of events resulting from alternative sequences of hypothetically executed actions. By means of this sensorimotor anticipation, it is possible to characterize a visual scenery immediately in categories of behavior, i.e. by a set of actions which describe possible methods of interaction with the objects in the environment. Thus, the competence to perceive a complex situation can be understood as the capability to anticipate the course of events caused by different action sequences. Starting from an abstract description of anticipatory perception and the essential biological evidence for internal simulation, we present two biologically motivated computational models that are able to anticipate and evaluate hypothetically sensorimotor sequences. Both models consider functional aspects of those cortical and subcortical systems that are assumed to be involved in the process of sensory prediction and sensorimotor control. Our first approach, the Model for Anticipation based on Sensory IMagination (MASIM), realizes a sequential search in sensorimotor space using a simple model of lateral cerebellum as sensory predictor. We demonstrate the efficiency of this model approach in the light of visually guided local navigation behaviors of a mobile system. The second approach, the Model for Anticipation based on Cortical Representations (MACOR), is actually still at a conceptual level of realization. We postulate that this model allows a completely parallel search at the neocortical level using assemblies of spiking neurons for grouping, separation, and selection of sensorimotor sequences. Both models are intended as general schemes for anticipation based perception at the level of sensorimotor intelligence." ] }
1210.1104
2220377616
In order to anticipate dangerous events, like a collision, an agent needs to make long-term predictions. However, those are challenging due to uncertainties in internal and external variables and environment dynamics. A sensorimotor model is acquired online by the mobile robot using a state-of-the-art method that learns the optical flow distribution in images, both in space and time. The learnt model is used to anticipate the optical flow up to a given time horizon and to predict an imminent collision by using reinforcement learning. We demonstrate that multi-modal predictions reduce to simpler distributions once actions are taken into account.
From the application point of view, many works use forward models to solve certain navigation-related tasks. Forward models have been applied to generate expectations of sensory values, which have been used to correct noisy optical flow fields @cite_16 or to detect useful landmarks for navigation @cite_10 . If the forward model was acquired in an obstacle-free environment, comparing its expectations with novel sensory data has also been applied to detect obstacles @cite_12 . All those expectation-driven mechanisms could benefit from an incremental model such as the one presented in this work to generate such expectations.
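A minimal sketch of such an expectation-driven detector, under the assumption stated above (the forward model was trained in free space, so a large prediction error signals an obstacle). The threshold and sensor vectors are illustrative placeholders, not values from the cited works:

```python
import numpy as np

def detect_obstacle(predicted, observed, threshold=0.5):
    """Flag an obstacle when the sensory 'surprise' (prediction error)
    exceeds an illustrative threshold."""
    surprise = np.linalg.norm(predicted - observed)
    return surprise > threshold

# In free space the observation roughly matches the expectation...
free_space = detect_obstacle(np.array([1.0, 1.0]), np.array([1.05, 0.98]))
# ...while an unexpected obstacle produces a large prediction error.
blocked = detect_obstacle(np.array([1.0, 1.0]), np.array([0.2, 0.1]))
print(free_space, blocked)  # False True
```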
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "1522785459", "1513733916", "1508050220" ], "abstract": [ "In this paper we present a biologically inspired neural architecture for visual perception based on anticipation. The main goal of this work is to demonstrate, that anticipation is a central key to improve the perception performance of technical systems. The presented approach is able to increase the robustness of the perception process against noise or sensory dropouts. We demonstrate these perceptional improvements through our architecture at the level of local navigation behavior of the miniature robot Khepera. We claim that perception is not an end in itself. Instead it is a sensorimotor process integrating the generation of behavior.", "There are many ways to define what constitutes a suitable landmark for mobile robot navigation, and automatically extracting landmarks from an environment as the robot travels is an open research problem. This paper describes an automatic landmark selection algorithm that chooses as landmarks any places where a trained sensory anticipation model makes poor predictions. The model is applied to a route navigation task, and the results are evaluated according to how well landmarks align between different runs on the same route. The quality of landmark matches is compared for several types of sensory anticipation models and also against a non-anticipatory landmark selector. We extend and correct the analysis presented in [6] and also present a more complete picture of the importance of sensory anticipation to the landmark selection process. Finally, we show that the system can navigate reliably in a goal-oriented route-following task, and we compare success rates using only metric distances with using a combination of odometric and landmark category information.", "" ] }
1210.0693
2953368425
We propose a contention-based random-access protocol, designed for wireless networks where the number of users is not a priori known. The protocol operates in rounds divided into equal-duration slots, performing at the same time estimation of the number of users and resolution of their transmissions. The users independently access the wireless link on a slot basis with a predefined probability, resulting in a distribution of user transmissions over slots, based on which the estimation and contention resolution are performed. Specifically, the contention resolution is performed using successive interference cancellation which, coupled with the use of the optimized access probabilities, enables throughputs that are substantially higher than the traditional slotted ALOHA-like protocols. The key feature of the proposed protocol is that the round durations are not a priori set and they are terminated when the estimation contention-resolution performance reach the satisfactory levels.
In @cite_9 it was noted that the execution of SIC within the framed SA framework resembles the execution of iterative belief-propagation (BP) decoding on an erasure channel, enabling the application of the theory and tools of codes-on-graphs. Following this insight, the author analyzed the convergence of SIC using and-or tree arguments @cite_7 and obtained repetition strategies that are optimal in terms of maximizing the throughput of the scheme. It was shown that the optimal repetition strategies follow the same guidelines used for the encoding of left-irregular LDPC codes. In the asymptotic case when the number of users tends to infinity, both (logical) @math and @math tend to 1. Nevertheless, for optimal performance, the number of slots in the frame is determined by the number of transmissions.
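The SIC process can be illustrated as iterative peeling on the user-slot bipartite graph, mirroring BP decoding on an erasure channel: a slot containing a single transmission is resolved, and the resolved user's replicas are cancelled from every other slot, possibly creating new singletons. This is a toy sketch with made-up frame data, not the protocol of the paper:

```python
def sic_peel(slots):
    """slots: list of sets of user ids transmitting in each slot.
    Returns the set of users resolved by iterative peeling (SIC)."""
    slots = [set(s) for s in slots]
    resolved = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:                 # singleton slot: decode its user
                user = next(iter(s))
                resolved.add(user)
                for t in slots:             # cancel the user's replicas
                    t.discard(user)
                progress = True
    return resolved

# Users 1-3 spread repeated transmissions over four slots (illustrative).
frame = [{1, 2}, {2, 3}, {1, 3}, {3}]
print(sorted(sic_peel(frame)))  # [1, 2, 3]

# A 'stopping set' analogous to BP decoding failure: no singleton exists.
print(sic_peel([{1, 2}, {1, 2}]))  # set()
```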
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2127820195", "2029268759" ], "abstract": [ "Contention resolution diversity slotted ALOHA (CRDSA) is a simple but effective improvement of slotted ALOHA. CRDSA relies on MAC bursts repetition and on interference cancellation (IC), achieving a peak throughput T ≅ 0.55, whereas for slotted ALOHA T ≅ 0.37. In this paper we show that the IC process of CRDSA can be conveniently described by a bipartite graph, establishing a bridge between the IC process and the iterative erasure decoding of graph-based codes. Exploiting this analogy, we show how a high throughput can be achieved by selecting variable burst repetition rates according to given probability distributions, leading to irregular graphs. A framework for the probability distribution optimization is provided. Based on that, we propose a novel scheme, named irregular repetition slotted ALOHA, that can achieve a throughput T ≅ 0.97 for large frames and near to T ≅ 0.8 in practical implementations, resulting in a gain of 45 w.r.t. CRDSA. An analysis of the normalized efficiency is introduced, allowing performance comparisons under the constraint of equal average transmission power. Simulation results, including an IC mechanism described in the paper, substantiate the validity of the analysis and confirm the high efficiency of the proposed approach down to a signal-to-noise ratio as a low as Eb N0=2 dB.", "We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formula using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. 
We illustrate our methodology on the three problems listed above" ] }
1210.0660
1896876826
There is an increasing trend for businesses to migrate their systems towards the cloud. Security concerns that arise when outsourcing data and computation to the cloud include data confidentiality and privacy. Given that a tremendous amount of data is being generated everyday from plethora of devices equipped with sensing capabilities, we focus on the problem of access controls over live streams of data based on triggers or sliding windows, which is a distinct and more challenging problem than access control over archival data. Specifically, we investigate secure mechanisms for outsourcing access control enforcement for stream data to the cloud. We devise a system that allows data owners to specify fine-grained policies associated with their data streams, then to encrypt the streams and relay them to the cloud for live processing and storage for future use. The access control policies are enforced by the cloud, without the latter learning about the data, while ensuring that unauthorized access is not feasible. To realize these ends, we employ a novel cryptographic primitive, namely proxy-based attribute-based encryption, which not only provides security but also allows the cloud to perform expensive computations on behalf of the users. Our approach is holistic, in that these controls are integrated with an XML based framework (XACML) for high-level management of policies. Experiments with our prototype demonstrate the feasibility of such mechanisms, and early evaluations suggest graceful scalability with increasing numbers of policies, data streams and users.
Existing works on access control for stream data assume trusted domains and focus on the specification and enforcement of access policies @cite_31 @cite_5 @cite_2 . They differ from our work, which considers outsourcing access control to an untrusted environment. Similarly, previous works that use XACML for fine-grained access control have also focused on trusted domains @cite_23 . We use XACML only for policy management, and rely on encryption schemes for policy enforcement.
{ "cite_N": [ "@cite_5", "@cite_31", "@cite_23", "@cite_2" ], "mid": [ "2069543922", "1580709729", "2033971597", "2106722434" ], "abstract": [ "Access control is an important component of any computational system. However, it is only recently that mechanisms to guard against unauthorized access for streaming data have been proposed. In this paper, we study how to enforce the role-based access control model proposed by us in [5]. We design a set of novel secure operators, that basically filter out tuples attributes from results of the corresponding (non-secure) operators that are not accessible according to the specified access control policies. We further develop an access control mechanism to enforce the access control policies based on these operators. We show that our method is secure according to the specified policies.", "Many data stream processing systems are increasingly being used to support applications that handle sensitive information, such as credit card numbers and locations of soldiers in battleground [1,2,3,6]. These data have to be protected from unauthorized accesses. However, existing access control models and mechanisms cannot be adequately adopted on data streams. In this paper, we propose a novel access control model for data streams based on the Aurora data model [2]. Our access control model is role-based and has the following components. Objects to be protected are essentially views (or rather queries) over data streams. We also define two types of privileges - Read privilege for operations such as Filter, Map, BSort, and a set of aggregate privileges for operations such as Min, Max, Count, Avg and Sum. The model also allows the specification of temporal constraints either to limit access to data during a given time bound or to constraint aggregate operations over the data within a specified time window. 
In the paper, we present the access control model and its formal semantics.", "Sharing data from various sources and of diverse kinds, and fusing them together for sophisticated analytics and mash-up applications are emerging trends, and are prerequisites for realizing grand visions such as that of cyber-physical systems enabled smart cities. Cloud infrastructure can enable such data sharing both because it can scale easily to an arbitrary volume of data and computation needs on demand, as well as because of natural collocation of diverse such data sets within the infrastructure. However, in order to convince data owners that their data are well protected while being shared among cloud users, the cloud platform needs to provide flexible mechanisms for the users to express the constraints (access rules) subject to which the data should be shared, and likewise, enforce them effectively. We study a comprehensive set of practical scenarios where data sharing needs to be enforced by methods such as aggregation, windowed frame, value constrains, etc., and observe that existing basic access control mechanisms do not provide adequate flexibility to support effective data sharing in a secure and controlled manner. In this paper, we thus propose a framework for cloud that extends popular XACML model significantly by integrating flexible access control decisions and data access in a seamless fashion. We have prototyped the framework and deployed it on commercial cloud environment for experimental runs to test the efficacy of our approach and evaluate the performance of the implemented prototype.", "The management of privacy and security in the context of data stream management systems (DSMS) remains largely an unaddressed problem to date. 
Unlike in traditional DBMSs where access control policies are persistently stored on the server and tend to remain stable, in streaming applications the contexts and with them the access control policies on the real-time data may rapidly change. A person entering a casino may want to immediately block others from knowing his current whereabouts. We thus propose a novel \"stream-centric\" approach, where security restrictions are not persistently stored on the DSMS server, but rather streamed together with the data. Here, the access control policies are expressed via security constraints (called security punctuations, or short, sps) and are embedded into data streams. The advantages of the sp model include flexibility, dynamicity and speed of enforcement. DSMSs can adapt to not only data-related but also security-related selectivities, which helps reduce the waste of resources, when few subjects have access to data. We propose a security-aware query algebra and new equivalence rules together with cost estimations to guide the security-aware query plan optimization. We have implemented the sp framework in a real DSMS. Our experimental results show the validity and the performance advantages of our sp model as compared to alternative access control enforcement solutions for DSMSs." ] }
1210.0461
2950755983
We consider the problem of sparse matrix multiplication by the column row method in a distributed setting where the matrix product is not necessarily sparse. We present a surprisingly simple method for "consistent" parallel processing of sparse outer products (column-row vector products) over several processors, in a communication-avoiding setting where each processor has a copy of the input. The method is consistent in the sense that a given output entry is always assigned to the same processor independently of the specific structure of the outer product. We show guarantees on the work done by each processor, and achieve linear speedup down to the point where the cost is dominated by reading the input. Our method gives a way of distributing (or parallelizing) matrix product computations in settings where the main bottlenecks are storing the result matrix, and inter-processor communication. Motivated by observations on real data that often the absolute values of the entries in the product adhere to a power law, we combine our approach with frequent items mining algorithms and show how to obtain a tight approximation of the weight of the heaviest entries in the product matrix. As a case study we present the application of our approach to frequent pair mining in transactional data streams, a problem that can be phrased in terms of sparse @math -integer matrix multiplication by the column-row method. Experimental evaluation of the proposed method on real-life data supports the theoretical findings.
Manku and Motwani @cite_6 first recognized the necessity for efficient algorithms targeted at frequent itemsets in transaction streams and presented a heuristic approach generalizing their StickySampling algorithm. A straightforward approach to mining frequent pairs is to reduce the problem to that of mining frequent items by generating all item pairs in a given transaction. @cite_9 and Campagna and Pagh @cite_7 present randomized algorithms for transaction stream mining. The theoretical bounds on the quality of their estimates, however, heavily depend on the assumption that transactions are either generated independently at random by some process or arrive in a random order. It is already clear from the experiments of @cite_7 that such optimistic assumptions do not hold for many data sets. For both schemes @cite_9 @cite_7 it is easy to find an ordering of essentially any transaction stream that breaks the randomness assumption and makes them perform much worse than the theoretical bounds suggest. We therefore believe that a more conservative model is needed to derive a rigorous theoretical analysis, while exploiting observed properties of real data sets.
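The straightforward reduction mentioned above can be sketched by expanding each transaction into its item pairs and feeding them to a frequent-items summary. We use the deterministic Misra-Gries summary here purely for illustration (the cited works use different randomized schemes); the summary size `k` and the tiny transaction data are ours:

```python
from itertools import combinations

def misra_gries(stream, k):
    """Deterministic frequent-items summary keeping at most k-1 counters.
    Each kept item's count underestimates its true count by at most N/k."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:                               # decrement-all step
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

def frequent_pairs(transactions, k):
    """Reduce frequent-pair mining to frequent-item mining over all pairs."""
    pairs = (frozenset(p) for t in transactions
             for p in combinations(sorted(set(t)), 2))
    return misra_gries(pairs, k)

transactions = [["a", "b", "c"], ["a", "b"], ["a", "b", "d"], ["c", "d"]]
summary = frequent_pairs(transactions, k=3)
print(frozenset({"a", "b"}) in summary)  # True: the most frequent pair survives
```

Note that a transaction of `m` items expands into `m*(m-1)/2` pairs, which is exactly why this reduction is considered naive for long transactions.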
{ "cite_N": [ "@cite_9", "@cite_7", "@cite_6" ], "mid": [ "2048105574", "", "2069980026" ], "abstract": [ "Mining frequent itemsets from transactional data streams is challenging due to the nature of the exponential explosion of itemsets and the limited memory space required for mining frequent itemsets. Given a domain of I unique items, the possible number of itemsets can be up to 2^I-1. When the length of data streams approaches a very large number N, the possibility of an itemset being frequent becomes larger and difficult to track with limited memory. The existing studies on finding frequent items from high speed data streams are false-positive oriented. That is, they control memory consumption in the counting processes by an error parameter @e, and allow items with support below the specified minimum support s but above s-@e to be counted as frequent ones. However, such false-positive oriented approaches cannot be effectively applied to frequent itemsets mining for two reasons. First, false-positive items found increase the number of false-positive frequent itemsets exponentially. Second, minimization of the number of false-positive items found, by using a small @e, will make memory consumption large. Therefore, such approaches may make the problem computationally intractable with bounded memory consumption. In this paper, we developed algorithms that can effectively mine frequent item(set)s from high speed transactional data streams with a bound on memory consumption. Our algorithms are based on the Chernoff bound, in which we use a running error parameter to prune item(set)s and use a reliability parameter to control memory. While our algorithms are false-negative oriented, that is, certain frequent itemsets may not appear in the results, the number of false-negative itemsets can be controlled by a predefined parameter so that the desired recall rate of frequent itemsets can be guaranteed. 
Our extensive experimental studies show that the proposed algorithms have high accuracy, require less memory, and consume less CPU time. They significantly outperform the existing false-positive algorithms.", "", "Research in data stream algorithms has blossomed since late 90s. The talk will trace the history of the Approximate Frequency Counts paper, how it was conceptualized and how it influenced data stream research. The talk will also touch upon a recent development: analysis of personal data streams for improving our quality of lives." ] }
1209.5833
1498362026
We propose a learning method with feature selection for Locality-Sensitive Hashing. Locality-Sensitive Hashing converts feature vectors into bit arrays. These bit arrays can be used to perform similarity searches and personal authentication. The proposed method uses bit arrays longer than those ultimately used for similarity and other searches, and selects the bits to be used by learning. We demonstrated that this method can effectively perform optimization for cases such as fingerprint images, which have a large number of labels and extremely few data sharing the same label, and verified that it is also effective for natural images, handwritten digits, and speech features.
Minimal Loss Hashing (MLH) is a supervised method for learning hyperplanes @cite_19 . MLH aims to minimize a discontinuous function, called the empirical loss function, that has @math as its argument. The empirical loss function has a large value when data pairs with the same labels have large Hamming distances and data pairs with different labels have small Hamming distances. Since the empirical loss function is discontinuous, gradient-based optimization cannot be applied directly. For this reason, MLH considers a differentiable upper bound of the empirical loss function and minimizes this upper bound by the stochastic gradient method. Principal Component Analysis Hashing (PCAH) @cite_6 is an unsupervised method for determining hyperplanes. PCAH performs principal component analysis on the learning data and uses the principal component vectors as the normal vectors of the hyperplanes. A disadvantage of this method is that it cannot produce more bits than the dimension of the feature space.
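PCAH as described above admits a very short sketch: the top principal components serve as hyperplane normals, and each bit is the sign of the projection onto one component. This is our illustrative implementation, not code from the cited paper; the assertion encodes the limitation noted above, that the number of bits cannot exceed the feature dimension:

```python
import numpy as np

def pcah_train(X, n_bits):
    """Fit PCAH: principal directions of the centred data become the
    normal vectors of the hashing hyperplanes."""
    assert n_bits <= X.shape[1], "PCAH cannot produce more bits than dimensions"
    mean = X.mean(axis=0)
    # Principal directions = right singular vectors of the centred data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_bits]

def pcah_hash(X, mean, components):
    """Each bit is the sign of the projection onto one principal component."""
    return ((X - mean) @ components.T > 0).astype(np.uint8)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))             # illustrative 8-dim features
mean, comps = pcah_train(X, n_bits=4)
codes = pcah_hash(X, mean, comps)
print(codes.shape)  # (100, 4): one 4-bit array per feature vector
```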
{ "cite_N": [ "@cite_19", "@cite_6" ], "mid": [ "2221852422", "2157465536" ], "abstract": [ "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "Although it has been studied for several years by computer vision and machine learning communities, image annotation is still far from practical. In this paper, we present AnnoSearch, a novel way to annotate images using search and data mining technologies. Leveraging the Web-scale images, we solve this problem in two-steps: 1) searching for semantically and visually similar images on the Web, 2) and mining annotations from them. Firstly, at least one accurate keyword is required to enable text-based search for a set of semantically similar images. Then content-based search is performed on this set to retrieve visually similar images. At last, annotations are mined from the descriptions (titles, URLs and surrounding texts) of these images. It worth highlighting that to ensure the efficiency, high dimensional visual features are mapped to hash codes which significantly speed up the content-based search process. Our proposed approach enables annotating with unlimited vocabulary, which is impossible for all existing approaches. Experimental results on real web images show the effectiveness and efficiency of the proposed algorithm." ] }
1209.5773
1768713228
Alloy is an increasingly popular lightweight specification language based on relational logic. Alloy models can be automatically verified within a bounded scope using off-the-shelf SAT solvers. Since false assertions can usually be disproved using small counter-examples, this approach suffices for most applications. Unfortunately, it can sometimes lead to a false sense of security, and in critical applications a more traditional unbounded proof may be required. The automatic theorem prover Prover9 has been shown to be particularly effective for proving theorems of relation algebras [7], a quantifier-free (or point-free) axiomatization of a fragment of relational logic. In this paper we propose a translation from Alloy specifications to fork algebras (an extension of relation algebras with the same expressive power as relational logic) which enables their unbounded verification in Prover9. This translation covers not only logic assertions, but also the structural aspects (namely type declarations), and was successfully implemented and applied to several examples.
Using Prover9 to verify relational models was already proposed in @cite_0 . However, since they use only RA, their expressive power is limited to a 3-variable fragment of RL. A significant difference occurs in the way types are represented. While they use to represent sets, and thus types, we use coreflexives. This change is motivated by our belief that coreflexives are more amenable to calculation. They also did not propose a translation from Alloy to RA and assumed the model is already defined in this formalism.
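The notion of a coreflexive (a partial identity relation { (x, x) | x ∈ S } standing for the set S, and thus for a type) can be illustrated with relations modelled as sets of pairs. All names here are ours, not taken from the cited work; the point is that composing with a coreflexive restricts a relation to a type:

```python
def coreflexive(s):
    """The coreflexive relation representing the set s: pairs (x, x)."""
    return {(x, x) for x in s}

def compose(r, q):
    """Relational composition r ; q = { (a, c) | (a, b) in r, (b, c) in q }."""
    return {(a, c) for (a, b) in r for (b2, c) in q if b == b2}

R = {("a", 1), ("b", 2), ("c", 3)}       # an illustrative relation
T = coreflexive({"a", "b"})              # the 'type' containing a and b

# Pre-composing with T restricts R's domain to the type T represents.
print(sorted(compose(T, R)))             # [('a', 1), ('b', 2)]
```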
{ "cite_N": [ "@cite_0" ], "mid": [ "2786407357" ], "abstract": [ "A camera film advancing apparatus is disclosed which is improved in respect of reliability of engagement between the pawl of the film advancing sprocket and the perforation of the film to simplify the operation necessary for film charging." ] }
1209.5800
2951106828
In a recent empirical study we found that evaluating abstractions of Model-Driven Engineering (MDE) is not as straight forward as it might seem. In this paper, we report on the challenges that we as researchers faced when we conducted the aforementioned field study. In our study we found that modeling happens within a complex ecosystem of different people working in different roles. An empirical evaluation should thus mind the ecosystem, that is, focus on both technical and human factors. In the following, we present and discuss five lessons learnt from our recent work.
Heijstek and Chaudron @cite_4 studied an industrial MDE case over two years, where a team of 28 built a business application for the financial sector. Using grounded theory they identified 14 factors which impact the architectural process. They found that MDE shifts responsibility from engineers to modelers, and that the domain-specific models facilitated communication across disciplines and even became a language of business experts. The setup of their case differs from our recent work @cite_0 in that their case had a whole-system view on a closed ecosystem of 28 people, with premium access to both the project lead and the main architect of the system. In comparison, we had a peephole view on a much larger ecosystem of tens of thousands of people collaborating across the main company and its subsidiaries. It will be interesting to compare the findings of these two studies with regard to these different perspectives.
{ "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "2012058332", "1974984063" ], "abstract": [ "In this paper, we investigate model-driven engineering, reporting on an exploratory case-study conducted at a large automotive company. The study consisted of interviews with 20 engineers and managers working in different roles. We found that, in the context of a large organization, contextual forces dominate the cognitive issues of using model-driven technology. The four forces we identified that are likely independent of the particular abstractions chosen as the basis of software development are the need for diffing in software product lines, the needs for problem-specific languages and types, the need for live modeling in exploratory activities, and the need for point-to-point traceability between artifacts. We also identified triggers of accidental complexity, which we refer to as points of friction introduced by languages and tools. Examples of the friction points identified are insufficient support for model diffing, point-to-point traceability, and model changes at runtime.", "While Model-Driven Development (MDD) is an increasingly popular software development approach, its impact on the development process in large-scale, industrial practice is not yet clear. For this study the application of MDD in a large-scale industrial software development project is analyzed over a period of two years. Applying a grounded theory approach we identified 14 factors which impact the architectural process. We found that scope creep is more likely to occur, late changes can imply more extensive rework and that business engineers need to be more aware of the technical impact of their decisions. In addition, the introduced Domain-Specific Language (DSL) provides a new common idiom that can be used by more team members and will ease communication among team members and with clients. Also, modelers need to be much more explicit and complete in their descriptions. 
Parallel development of a code generator and defining a proper meta-model require additional time investments. Lastly, the more central role of software architecture design documentation requires more structured, detailed and complete architectural information and consequently, more frequent reviews." ] }
1209.5800
2951106828
In a recent empirical study we found that evaluating abstractions of Model-Driven Engineering (MDE) is not as straightforward as it might seem. In this paper, we report on the challenges that we as researchers faced when we conducted the aforementioned field study. In our study we found that modeling happens within a complex ecosystem of different people working in different roles. An empirical evaluation should thus mind the ecosystem, that is, focus on both technical and human factors. In the following, we present and discuss five lessons learnt from our recent work.
The proceedings of RAO 2006 @cite_2 , a workshop on the role of abstraction, provide interesting insight into both the role and the study of abstraction, in the context of MDE and of software engineering in general.
{ "cite_N": [ "@cite_2" ], "mid": [ "2150164276" ], "abstract": [ "This workshop explores the concept of abstraction in software engineering at the individual, team and organization level. The aim is to explore the role of abstraction in dealing with complexity in the software engineering process, to discuss how the use of different levels of abstraction may facilitate performance of different activities, and to examine whether or not abstraction skills can be taught." ] }
1209.4829
2231439187
We determine the exact freezing threshold, r^f, for a family of models of random boolean constraint satisfaction problems, including NAE-SAT and hypergraph 2-colouring, when the constraint size is sufficiently large. If the constraint-density of a random CSP, F, in our family is greater than r^f then for almost every solution of F, a linear number of variables are frozen, meaning that their colours cannot be changed by a sequence of alterations in which we change o(n) variables at a time, always switching to another solution. If the constraint-density is less than r^f, then almost every solution has o(n) frozen variables. Freezing is a key part of the clustering phenomenon that is hypothesized by non-rigorous techniques from statistical physics. The understanding of clustering has led to the development of advanced heuristics such as Survey Propagation. It has been suggested that the freezing threshold is a precise algorithmic barrier: that for densities below r^f the random CSPs can be solved using very simple algorithms, while for densities above r^f one requires more sophisticated techniques in order to deal with frozen clusters.
Achlioptas and Ricci-Tersenghi @cite_17 were the first to rigorously prove that freezing occurs in a random CSP. They studied random @math -SAT and showed that for @math , for a wide range of edge-densities below the satisfiability threshold and for every satisfying assignment @math , the vast majority of variables are 1-frozen w.r.t @math . They did so by stripping down to the *-core, which inspired us to do the same here. One difference between their approach and ours is that the variables of the *-core are 1-frozen by definition, whereas much of the work in this paper is devoted to proving that, for our models, they are in fact @math -frozen. We expect that our techniques should be able to prove that the 1-frozen variables established in @cite_17 are, indeed, @math -frozen.
{ "cite_N": [ "@cite_17" ], "mid": [ "1980276492" ], "abstract": [ "For a number of random constraint satisfaction problems, such as random k-SAT and random graph hypergraph coloring, there are very good estimates of the largest constraint density for which solutions exist. Yet, all known polynomial-time algorithms for these problems fail to find solutions even at much lower densities. To understand the origin of this gap we study how the structure of the space of solutions evolves in such problems as constraints are added. In particular, we prove that much before solutions disappear, they organize into an exponential number of clusters, each of which is relatively small and far apart from all other clusters. Moreover, inside each cluster most variables are frozen, i.e., take only one value. The existence of such frozen variables gives a satisfying intuitive explanation for the failure of the polynomial-time algorithms analyzed so far. At the same time, our results establish rigorously one of the two main hypotheses underlying Survey Propagation, a heuristic introduced by physicists in recent years that appears to perform extraordinarily well on random constraint satisfaction problems." ] }
1209.4829
2231439187
We determine the exact freezing threshold, r^f, for a family of models of random boolean constraint satisfaction problems, including NAE-SAT and hypergraph 2-colouring, when the constraint size is sufficiently large. If the constraint-density of a random CSP, F, in our family is greater than r^f then for almost every solution of F, a linear number of variables are frozen, meaning that their colours cannot be changed by a sequence of alterations in which we change o(n) variables at a time, always switching to another solution. If the constraint-density is less than r^f, then almost every solution has o(n) frozen variables. Freezing is a key part of the clustering phenomenon that is hypothesized by non-rigorous techniques from statistical physics. The understanding of clustering has led to the development of advanced heuristics such as Survey Propagation. It has been suggested that the freezing threshold is a precise algorithmic barrier: that for densities below r^f the random CSPs can be solved using very simple algorithms, while for densities above r^f one requires more sophisticated techniques in order to deal with frozen clusters.
@cite_39 proves the asymptotic (in @math ) density for the appearance of what they call rigid variables in @math -COL, @math -NAE-SAT and hypergraph 2-colouring (and proves that this is an upper bound for @math -SAT). The definition of rigid is somewhat weaker than frozen, but a simple modification extends their proof to show the same for frozen vertices. So @cite_39 provided the asymptotic, in @math , location of the freezing threshold for those models. @cite_25 provided the exact location of the threshold for @math -COL, when @math is sufficiently large.
{ "cite_N": [ "@cite_25", "@cite_39" ], "mid": [ "1991809010", "2115831572" ], "abstract": [ "We rigorously determine the exact freezing threshold, rkf, for k-colourings of a random graph. We prove that for random graphs with density above rkf, almost every colouring is such that a linear number of variables are frozen, meaning that their colours cannot be changed by a sequence of alterations whereby we change the colours of o(n) vertices at a time, always obtaining another proper colouring. When the density is below rkf, then almost every colouring has at most o(n) frozen variables. This confirms hypotheses made using the non-rigorous cavity method. It has been hypothesized that the freezing threshold is the cause of the \"algorithmic barrier\", the long observed phenomenon that when the edge-density of a random graph exceeds hf k ln k(1+ok(1)), no algorithms are known to find k-colourings, despite the fact that this density is only half the k-colourability threshold. We also show that rkf is the threshold of a strong form of reconstruction for k-colourings of the Galton-Watson tree, and of the graphical model.", "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 - epsiv)chi colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. 
We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of 1,...,k n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ges 2chi, but like an error-correcting code for k les (2 - epsiv)chi. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs." ] }
1209.4829
2231439187
We determine the exact freezing threshold, r^f, for a family of models of random boolean constraint satisfaction problems, including NAE-SAT and hypergraph 2-colouring, when the constraint size is sufficiently large. If the constraint-density of a random CSP, F, in our family is greater than r^f then for almost every solution of F, a linear number of variables are frozen, meaning that their colours cannot be changed by a sequence of alterations in which we change o(n) variables at a time, always switching to another solution. If the constraint-density is less than r^f, then almost every solution has o(n) frozen variables. Freezing is a key part of the clustering phenomenon that is hypothesized by non-rigorous techniques from statistical physics. The understanding of clustering has led to the development of advanced heuristics such as Survey Propagation. It has been suggested that the freezing threshold is a precise algorithmic barrier: that for densities below r^f the random CSPs can be solved using very simple algorithms, while for densities above r^f one requires more sophisticated techniques in order to deal with frozen clusters.
@cite_46 @cite_39 @cite_11 establish the existence of what they call cluster-regions for @math -SAT, @math -COL, @math -NAE-SAT and hypergraph 2-colouring. @cite_39 proves that by the time the density exceeds @math times the hypothesized clustering threshold, the solution space w.h.p. shatters into an exponential number of @math -separated cluster-regions, each containing an exponential number of solutions. While these cluster-regions are not shown to be well-connected, the well-connected property does not seem to be crucial to the difficulties that clusters pose for algorithms. So this was a very big step towards explaining why an algorithmic barrier seems to arise asymptotically (in @math ) close to the clustering threshold.
{ "cite_N": [ "@cite_46", "@cite_11", "@cite_39" ], "mid": [ "", "1818081266", "2115831572" ], "abstract": [ "", "Random instances of constraint satisfaction problems (CSPs) appear to be hard for all known algorithms when the number of constraints per variable lies in a certain interval. Contributing to the general understanding of the structure of the solution space of a CSP in the satisfiable regime, we formulate a set of technical conditions on a large family of random CSPs and prove bounds on three most interesting thresholds for the density of such an ensemble: namely, the satisfiability threshold, the threshold for clustering of the solution space, and the threshold for an appropriate reconstruction problem on the CSPs. The bounds become asymptoticlally tight as the number of degrees of freedom in each clause diverges. The families are general enough to include commonly studied problems such as random instances of Not-All-Equal SAT, k-XOR formulae, hypergraph 2-coloring, and graph k-coloring. An important new ingredient is a condition involving the Fourier expansion of clauses, which characterizes the class of ...", "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 - epsiv)chi colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. 
We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of 1,...,k n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ges 2chi, but like an error-correcting code for k les (2 - epsiv)chi. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs." ] }
1209.4829
2231439187
We determine the exact freezing threshold, r^f, for a family of models of random boolean constraint satisfaction problems, including NAE-SAT and hypergraph 2-colouring, when the constraint size is sufficiently large. If the constraint-density of a random CSP, F, in our family is greater than r^f then for almost every solution of F, a linear number of variables are frozen, meaning that their colours cannot be changed by a sequence of alterations in which we change o(n) variables at a time, always switching to another solution. If the constraint-density is less than r^f, then almost every solution has o(n) frozen variables. Freezing is a key part of the clustering phenomenon that is hypothesized by non-rigorous techniques from statistical physics. The understanding of clustering has led to the development of advanced heuristics such as Survey Propagation. It has been suggested that the freezing threshold is a precise algorithmic barrier: that for densities below r^f the random CSPs can be solved using very simple algorithms, while for densities above r^f one requires more sophisticated techniques in order to deal with frozen clusters.
@cite_40 @cite_4 provided the first asymptotically tight lower bounds on the satisfiability threshold of @math -NAE-SAT and hypergraph 2-colouring, achieving a bound that is roughly equal to the condensation threshold. @cite_38 provides an even stronger bound for hypergraph 2-colouring, extending above the condensation threshold. @cite_29 provides a remarkably strong bound for @math -NAE-SAT: the difference between their upper and lower bounds decreases exponentially with @math .
{ "cite_N": [ "@cite_38", "@cite_40", "@cite_29", "@cite_4" ], "mid": [ "1610636547", "", "2132891419", "2038932251" ], "abstract": [ "For many random constraint satisfaction problems such as random satisfiability or random graph or hypergraph coloring, the best current estimates of the threshold for the existence of solutions are based on the first and the second moment method. However, in most cases these techniques do not yield matching upper and lower bounds. Sophisticated but non-rigorous arguments from statistical mechanics have ascribed this discrepancy to the existence of a phase transition called condensation that occurs shortly before the actual threshold for the existence of solutions and that affects the combinatorial nature of the problem (Krzakala, Montanari, Ricci-Tersenghi, Semerjian, Zdeborova: PNAS 2007). In this paper we prove for the first time that a condensation transition exists in a natural random CSP, namely in random hypergraph 2-coloring. Perhaps surprisingly, we find that the second moment method applied to the number of 2-colorings breaks down strictly before the condensation transition. Our proof also yields slightly improved bounds on the threshold for random hypergraph 2-colorability.", "", "The best current estimates of the thresholds for the existence of solutions in random constraint satisfaction problems ('CSPs') mostly derive from the first and the second moment method. Yet apart from a very few exceptional cases these methods do not quite yield matching upper and lower bounds. According to deep but non-rigorous arguments from statistical mechanics, this discrepancy is due to a change in the geometry of the set of solutions called condensation that occurs shortly before the actual threshold for the existence of solutions (Krzakala, Montanari, Ricci-Tersenghi, Semerjian, Zdeborova: PNAS 2007). 
To cope with condensation, physicists have developed a sophisticated but non-rigorous formalism called Survey Propagation (Me-zard, Parisi, Zecchina: Science 2002). This formalism yields precise conjectures on the threshold values of many random CSPs. Here we develop a new Survey Propagation inspired second moment method for the random k-NAESAT problem, which is one of the standard benchmark problems in the theory of random CSPs. This new technique allows us to overcome the barrier posed by condensation rigorously. We prove that the threshold for the existence of solutions in random k-NAESAT is 2k-1ln2-(ln 2 2+1 4)+ek, where |ek| ≤ 2-(1-ok(1))k, thereby verifying the statistical mechanics conjecture for this problem.", "It is known that random k-CNF formulas have a so-called satisfiability threshold at a density (namely, clause-variable ratio) of roughly 2kln2: at densities slightly below this threshold almost all k-CNF formulas are satisfiable, whereas slightly above this threshold almost no k-CNF formula is satisfiable. In the current work we consider satisfiable random formulas and inspect another parameter—the diameter of the solution space (that is, the maximal Hamming distance between a pair of satisfying assignments). It was previously shown that for all densities up to a density slightly below the satisfiability threshold the diameter is almost surely at least roughly n 2 (and n at much lower densities). At densities very much higher than the satisfiability threshold, the diameter is almost surely zero (a very dense satisfiable formula is expected to have only one satisfying assignment). In this paper we show that for all densities above a density that is slightly above the satisfiability threshold (more precisel..." ] }
1209.5571
2951963581
We design temporal description logics suitable for reasoning about temporal conceptual data models and investigate their computational complexity. Our formalisms are based on DL-Lite logics with three types of concept inclusions (ranging from atomic concept inclusions and disjointness to the full Booleans), as well as cardinality constraints and role inclusions. In the temporal dimension, they capture future and past temporal operators on concepts, flexible and rigid roles, the operators always' and some time' on roles, data assertions for particular moments of time and global concept inclusions. The logics are interpreted over the Cartesian products of object domains and the flow of time (Z,<), satisfying the constant domain assumption. We prove that the most expressive of our temporal description logics (which can capture lifespan cardinalities and either qualitative or quantitative evolution constraints) turn out to be undecidable. However, by omitting some of the temporal operators on concepts roles or by restricting the form of concept inclusions we obtain logics whose complexity ranges between PSpace and NLogSpace. These positive results were obtained by reduction to various clausal fragments of propositional temporal logic, which opens a way to employ propositional or first-order temporal provers for reasoning about temporal data models.
Numerous temporal DLs have been constructed and investigated since Schild's seminal paper Schild93 . One of the lessons of the 20-year history of the discipline is that logics interpreted over two- (or more) dimensional structures are very complex and sensitive to subtle interactions between constructs operating in different dimensions. The first TDLs suggested for representing TCMs were based on the expressive DLs and @cite_23 . However, it turned out that already a single rigid role and the operator @math (or @math ) on -concepts led to undecidability @cite_78 . In fact, to construct an undecidable TDL, one only needs a rigid role and three concept constructs: @math , @math and @math , that is, a temporalised @math @cite_72 . There have been several attempts to tame the bad computational behaviour of TDLs by imposing various restrictions on the DL and temporal components as well as their interaction.
{ "cite_N": [ "@cite_78", "@cite_72", "@cite_23" ], "mid": [ "", "2126458651", "1548951066" ], "abstract": [ "", "It is known that for temporal languages, such as first-order LTL, reasoning about constant (time-independent) relations is almost always undecidable. This applies to temporal description logics as well: constant binary relations together with general concept subsumptions in combinations of LTL and the basic description logic ALC cause undecidability. In this paper, we explore temporal extensions of two recently introduced families of 'weak' description logics known as DL-Lite and EL. Our results are twofold: temporalisations of even rather expressive variants of DL-Lite turn out to be decidable, while the temporalisation of EL with general concept subsumptions and constant relations is undecidable.", "Recent efforts in the Conceptual Modelling community have been devoted to properly capturing time-varying information. Various temporally enhanced Entity-Relationship (ER) models have been proposed that are intended to model the temporal aspects of database conceptual schemas. This work gives a logical formalisation of the various properties that characterise and extend different temporal ER models which are found in literature. The formalisation we propose is based on Description Logics (DL), which have been proved useful for a logical reconstruction of the most popular conceptual data modelling formalisms. The proposed DL has the ability to express both enhanced temporal ER schemas and integrity constraints in the form of complex inclusion dependencies. Reasoning in the devised logic is decidable, thus allowing for automated deductions over the whole conceptual representation, which includes both the ER schema and the integrity constraints over it." ] }
1209.5571
2951963581
We design temporal description logics suitable for reasoning about temporal conceptual data models and investigate their computational complexity. Our formalisms are based on DL-Lite logics with three types of concept inclusions (ranging from atomic concept inclusions and disjointness to the full Booleans), as well as cardinality constraints and role inclusions. In the temporal dimension, they capture future and past temporal operators on concepts, flexible and rigid roles, the operators always' and some time' on roles, data assertions for particular moments of time and global concept inclusions. The logics are interpreted over the Cartesian products of object domains and the flow of time (Z,<), satisfying the constant domain assumption. We prove that the most expressive of our temporal description logics (which can capture lifespan cardinalities and either qualitative or quantitative evolution constraints) turn out to be undecidable. However, by omitting some of the temporal operators on concepts roles or by restricting the form of concept inclusions we obtain logics whose complexity ranges between PSpace and NLogSpace. These positive results were obtained by reduction to various clausal fragments of propositional temporal logic, which opens a way to employ propositional or first-order temporal provers for reasoning about temporal data models.
The results in the first three rows of Table are established by using embeddings into the propositional temporal logic . To cope with the sub-Boolean core and Krom logics, we introduce, in , a number of new fragments of by restricting the type of clauses in Separated Normal Form @cite_67 and the available temporal operators. The obtained complexity classification in Table helps understand the results in the first three rows of Table .
{ "cite_N": [ "@cite_67" ], "mid": [ "1523903952" ], "abstract": [ "In this paper, a resolution method for propositional temporal logic is presented. Temporal formulae, incorporating both past-time and future-time temporal operators, are converted to Separated Normal Form (SNF), then both non-temporal and temporal resolution rules are applied. The resolution method is based on classical resolution, but incorporates a temporal resolution rule that can be implemented efficiently using a graph-theoretic approach." ] }
1209.5490
2114708302
Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently the layout typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use latent semantic indexing (LSI) to map software artifacts to a vector space, and then use multidimensional scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.
UML diagrams generally employ arbitrary layout. Gudenberg et al. have proposed an evolutionary approach to laying out UML diagrams in which a fitness function is used to optimize various metrics (such as the number of edge crossings) @cite_13 . Although the resulting layout does not reflect a distance metric, in principle the technique could be adapted to do so. Achieving a consistent layout is not a goal in this work.
{ "cite_N": [ "@cite_13" ], "mid": [ "2087841383" ], "abstract": [ "An evolutionary algorithm that layouts UML class diagrams is developed and described. It evolves the layout by mutating the positions of class symbols, inheritance relations, and associations. The process is controled by a fitness function that is computed from several well-known and some new layout metrics." ] }
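The cited evolutionary layout scheme can be sketched in miniature. The following is a hedged illustration, not the cited implementation: a (1+1) evolutionary loop mutates one node position at a time, with the number of straight-line edge crossings as the fitness function to minimize.

```python
import random

def crossings(pos, edges):
    """Count pairwise crossings of straight-line edges (the fitness metric)."""
    def ccw(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    def cross(e, f):
        a, b = pos[e[0]], pos[e[1]]
        c, d = pos[f[0]], pos[f[1]]
        # Proper intersection test: segments cross iff each straddles the other.
        return ccw(a, b, c) * ccw(a, b, d) < 0 and ccw(c, d, a) * ccw(c, d, b) < 0
    return sum(cross(e, f) for i, e in enumerate(edges) for f in edges[i + 1:])

def evolve_layout(nodes, edges, steps=500, seed=0):
    """(1+1) evolution: mutate one node's position, keep it if fitness does not worsen."""
    rng = random.Random(seed)
    pos = {n: (rng.random(), rng.random()) for n in nodes}
    best = crossings(pos, edges)
    for _ in range(steps):
        n = rng.choice(nodes)
        old = pos[n]
        pos[n] = (rng.random(), rng.random())   # mutation: re-place one class symbol
        fit = crossings(pos, edges)
        if fit <= best:
            best = fit                          # accept the mutation
        else:
            pos[n] = old                        # reject: restore the old position
    return pos, best
```

For a small planar graph such as K4, this loop usually reaches a zero-crossing placement within a few hundred steps; a real implementation would combine several such metrics into one fitness function, as the cited work does.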
1209.5490
2114708302
Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently the layout typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use latent semantic indexing (LSI) to map software artifacts to a vector space, and then use multidimensional scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.
Andriyevska et al. have conducted user studies to assess the effect that different UML layout schemes have on software comprehension @cite_25 . They report that the layout scheme that groups architecturally related classes together yields the best results. They conclude that it is more important that a layout scheme convey a meaningful grouping of entities than that it be aesthetically appealing.
{ "cite_N": [ "@cite_25" ], "mid": [ "2011584781" ], "abstract": [ "The paper presents and assesses a layout scheme for UML class diagrams that takes into account the architectural importance of a class in terms of its stereotype (e.g., boundary, control, entity). The design and running of a user study is described. The results of the study supports the hypothesis that layout based on architectural importance is more helpful in class diagram comprehension compared to layouts focusing primarily on aesthetics and or abstract graph guidelines" ] }
1209.5490
2114708302
Software visualizations can provide a concise overview of a complex software system. Unfortunately, since software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence most visualizations tend to use a layout in which position and distance have no meaning, and consequently the layout typically diverges from one visualization to another. We propose a consistent layout for software maps in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use latent semantic indexing (LSI) to map software artifacts to a vector space, and then use multidimensional scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, since the vocabulary of software artifacts tends to be stable over time.
Byelas and Telea highlight related elements in a UML diagram using a custom "area of interest" algorithm that connects all related elements with a blob of the same color, taking special care to minimize the number of crossings @cite_6 . The impact of an arbitrary layout on their approach is not discussed.
{ "cite_N": [ "@cite_6" ], "mid": [ "1981863308" ], "abstract": [ "Understanding complex software systems requires getting insight in how system properties, such as performance, trust, reliability, or structural attributes, correspond to the system architecture. Such properties can be seen as defining several 'areas of interest' over the system architecture. We visualize areas of interest atop of system architecture diagrams using a new technique that minimizes visual clutter for multiple, overlapping areas for large diagrams, yet preserves the diagram layout familiar to designers. We illustrate our proposed techniques on several UML diagrams of complex, real-world systems." ] }
Jucknath-John et al. present a technique to achieve stable graph layouts over the evolution of the displayed software system @cite_20 , thus achieving a consistent layout while sidestepping the issue of reflecting meaningful position or distance metrics.
{ "cite_N": [ "@cite_20" ], "mid": [ "2045414386" ], "abstract": [ "Most models are expressed and visualized with UML nowadays and most UML diagrams are based on graphs. Such graphs can get quite large if a project contains more than 600 class files, which is not uncommon. But changes are hard to recognize in a graph with 600 nodes. Therefore, the user has to find a way to build his own mental map based on the original graph. This mental map should give an overview of the original graph size, structure and it should point to changed areas.We present in this paper an approach, which gives the user a smaller graph as a mental map. This smaller graph uses packages as foundations for nodes and displays the package's content as the node icon. This had be done beforehand to visualize the evolution of developer dependencies over time, where the node icon presented the package's developer information as a pie chart. A suitable icon to present a package's class diagram can be a smaller version of the class diagram itself, if the layout respects its predecessors to allow the user an overview of its evolution." ] }
Balzer et al. proposed a modification of the classical treemap layout using Voronoi tessellation @cite_4 . Their approach creates aesthetically more appealing treemaps, reducing the number of narrow cells.
{ "cite_N": [ "@cite_4" ], "mid": [ "2052606569" ], "abstract": [ "In this paper we present a hierarchy-based visualization approach for software metrics using Treemaps. Contrary to existing rectangle-based Treemap layout algorithms, we introduce layouts based on arbitrary polygons that are advantageous with respect to the aspect ratio between width and height of the objects and the identification of boundaries between and within the hierarchy levels in the Treemap. The layouts are computed by the iterative relaxation of Voronoi tessellations. Additionally, we describe techniques that allow the user to investigate software metric data of complex systems by utilizing transparencies in combination with interactive zooming." ] }
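The "iterative relaxation of Voronoi tessellations" mentioned in this abstract is essentially Lloyd's algorithm. Below is a minimal, unweighted sketch on a discretised unit square; Balzer et al. use weighted, polygon-based variants to control cell areas, so this is only an illustration of the relaxation idea, with made-up seed data:

```python
import numpy as np

rng = np.random.default_rng(0)
seeds = rng.random((6, 2))                      # 6 made-up tree nodes in [0,1]^2

# Discretise the unit square; each pixel belongs to its nearest seed.
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)

for _ in range(20):                             # Lloyd relaxation iterations
    owner = np.argmin(((pts[:, None] - seeds[None]) ** 2).sum(-1), axis=1)
    for i in range(len(seeds)):
        cell = pts[owner == i]                  # pixels of seed i's Voronoi cell
        if len(cell):
            seeds[i] = cell.mean(axis=0)        # move seed to cell centroid
```

Each iteration makes the cells more compact and evenly sized, which is what reduces the narrow cells of a naive Voronoi treemap.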
MetricView is an exploratory environment featuring UML diagram visualizations @cite_9 . The third dimension is used to extend UML with polymetric views @cite_14 . The diagrams use an arbitrary layout and thus do not reflect meaningful distance or position.
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2082023115", "2073433109" ], "abstract": [ "We present MetricView, a software visualization and exploration tool that combines traditional UML diagram visualization with metric visualization in an effective way. MetricView is very easy and natural to use for software architects and developers yet offers a powerful set of mechanisms that allow fine customization of the visualizations for getting specific insights. We discuss several visual and architectural design choices which turned out to be important in the construction of MetricView, and illustrate our approach with several results using real-life datasets", "Reverse engineering software systems has become a major concern in software industry because of their sheer size and complexity. This problem needs to be tackled since the systems in question are of considerable worth to their owners and maintainers. In this article, we present the concept of a polymetric view, a lightweight software visualization technique enriched with software metrics information. Polymetric views help to understand the structure and detect problems of a software system in the initial phases of a reverse engineering process. We discuss the benefits and limits of several predefined polymetric views we have implemented in our tool CodeCrawler. Moreover, based on clusters of different polymetric views, we have developed a methodology which supports and guides a software engineer in the first phases of a reverse engineering of a large software system. We have refined this methodology by repeatedly applying it on industrial systems and illustrate it by applying a selection of polymetric views to a case study." ] }
White Coats is an explorative environment also based on the notion of polymetric views @cite_11 . The visualizations are three-dimensional, with the position and visual distance of entities given by selected metrics. However, they do not incorporate the notion of a consistent layout.
{ "cite_N": [ "@cite_11" ], "mid": [ "1975005207" ], "abstract": [ "Versioning systems to store, handle, and retrieve the evolution of software systems have become a common good practice for both industrial and open-source software systems, currently exemplified by the wide usage of the CVS system. The stored information can then be manually retrieved over a command line or looked at with a browser using the ViewCVS tool. However, the information contained in the repository is difficult to navigate as ViewCVS provides only a textual view of single versions of the source files. In this paper we present an approach to visualize a CVS repository in 3D (using VRML) by means of a visualization service called White Coats. The viewer can easily navigate and interact with the visualized information" ] }
CGA (Call Graph Analyser) is an explorative environment that visualizes a combination of the function call graph and the nested module structure @cite_22 . The tool employs a 2 @math -dimensional approach. To the best of our knowledge, its visualizations use an arbitrary layout.
{ "cite_N": [ "@cite_22" ], "mid": [ "1977816729" ], "abstract": [ "In this paper we describe the application of our tool (CGA) for locating and understanding functionality in unfamiliar code of complex software systems onto the Gnu compiler collection GCC (approx. 1 million lines of C-code). The analysis' goal is to identify and understand those code locations that implement GCC's functionality of 'parsing constructors in C++ programs'." ] }
CodeCity is an explorative environment building on the city metaphor @cite_29 . CodeCity employs the nesting level of packages for its city's elevation model, and uses a modified tree layout to position the entities, packages and classes. Within a package, elements are ordered by the size of the element's visual representation. Hence, changing the metrics mapped to width and height alters the overall layout of the city, and thus breaks the consistency of the layout.
{ "cite_N": [ "@cite_29" ], "mid": [ "2110072503" ], "abstract": [ "This paper presents a 3D visualization approach which gravitates around the city metaphor, i.e., an object-oriented software system is represented as a city that can be traversed and interacted with: the goal is to give the viewer a sense of locality to ease program comprehension. The key point in conceiving a realistic software city is to map the information about the source code in meaningful ways in order to take the approach beyond beautiful pictures. We investigated several concepts that contribute to the urban feeling, such as appropriate layouts, topology, and facilities to ease navigation and interaction. We experimented our approach on a number of systems, and present our findings." ] }
VERSO is an explorative environment that is also based on the city metaphor @cite_23 . Similar to CodeCity, VERSO employs a treemap layout to position its elements. Within a package, elements are ordered either by their color or by first appearance in the system's history. As the leaf elements all have the same base size, changing this setting does not change the overall layout. Hence, VERSO provides a consistent layout, albeit within the spatial limitations of the classical treemap layout.
{ "cite_N": [ "@cite_23" ], "mid": [ "2033239109" ], "abstract": [ "We propose an approach for complex software analysis based on visualization. Our work is motivated by the fact that in spite of years of research and practice, software development and maintenance are still time and resource consuming, and high-risk activities. The most important reason in our opinion is the complexity of many phenomena related to software, such as its evolution and its reliability. In fact, there is very little theory explaining them. Today, we have a unique opportunity to empirically study these phenomena, thanks to large sets of software data available through open-source programs and open repositories. Automatic analysis techniques, such as statistics and machine learning, are usually limited when studying phenomena with unknown or poorly-understood influence factors. We claim that hybrid techniques that combine automatic analysis with human expertise through visualization are excellent alternatives to them. In this paper, we propose a visualization framework that supports quality analysis of large-scale software systems. We circumvent the problem of size by exploiting perception capabilities of the human visual system." ] }
1209.5208
2949296132
Consider two parties who want to compare their strings, e.g., genomes, but do not want to reveal them to each other. We present a system for privacy-preserving matching of strings, which differs from existing systems by providing a deterministic approximation instead of an exact distance. It is efficient (linear complexity), non-interactive and does not involve a third party which makes it particularly suitable for cloud computing. We extend our protocol, such that it mitigates iterated differential attacks proposed by Goodrich. Further an implementation of the system is evaluated and compared against current privacy-preserving string matching algorithms.
Research into string matching is defined by a long list of algorithms proposed over many years for many different problems. String matching itself is closely related to the distance between strings, which can be measured in a large variety of ways, ranging from generic and simple solutions like the Hamming distance @cite_5 to more powerful algorithms like Smith-Waterman @cite_4 , which solves local sequence alignment problems. A survey of current developments can be found in @cite_10 .
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_10" ], "mid": [ "1980073965", "", "2093931624" ], "abstract": [ "The author was led to the study given in this paper from a consideration of large scale computing machines in which a large number of operations must be performed without a single error in the end result. This problem of “doing things right” on a large scale is not essentially new; in a telephone central office, for example, a very large number of operations are performed while the errors leading to wrong numbers are kept well under control, though they have not been completely eliminated. This has been achieved, in part, through the use of self-checking circuits. The occasional failure that escapes routine checking is still detected by the customer and will, if it persists, result in customer complaint, while if it is transient it will produce only occasional wrong numbers. At the same time the rest of the central office functions satisfactorily. In a digital computer, on the other hand, a single failure usually means the complete failure, in the sense that if it is detected no more computing can be done until the failure is located and corrected, while if it escapes detection then it invalidates all subsequent operations of the machine. Put in other words, in a telephone central office there are a number of parallel paths which are more or less independent of each other; in a digital machine there is usually a single long path which passes through the same piece of equipment many, many times before the answer is obtained.", "", "Rapidly evolving sequencing technologies produce data on an unparalleled scale. A central challenge to the analysis of this data is sequence alignment, whereby sequence reads must be compared to a reference. A wide variety of alignment algorithms and software have been subsequently developed over the past two years. 
In this article, we will systematically review the current development of these algorithms and introduce their practical applications on different types of experimental data. We come to the conclusion that short-read alignment is no longer the bottleneck of data analyses. We also consider future development of alignment algorithms with respect to emerging long sequence reads and the prospect of cloud computing." ] }
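As a concrete baseline, the Hamming distance mentioned above takes only a few lines (a generic sketch, not tied to any cited implementation; the sample strings are illustrative):

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(x != y for x, y in zip(a, b))

print(hamming("GATTACA", "GACTATA"))  # -> 2
```

Smith-Waterman, by contrast, requires a full dynamic-programming alignment table, which is what makes it both more powerful and more expensive.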
Several tasks, for example checking whether a user profile is present in a remote database, do not require the exact distance between two strings, data items, or other entities. The notion of approximate matching was therefore introduced to define levels of similarity; in the most extreme case, only a single bit of information is output: whether the input strings are similar or not. Due to this property, this class is called approximate string matching algorithms, which is not to be confused with the approximate string matching of @cite_20 , where the term "approximate" referred to the property of two strings being close in distance.
{ "cite_N": [ "@cite_20" ], "mid": [ "2010392031" ], "abstract": [ "Approximate matching of strings is reviewed with the aim of surveying techniques suitable for finding an item in a database when there may be a spelling mistake or other error in the keyword. The methods found are classified as either equivalence or similarity problems. Equivalence problems are seen to be readily solved using canonical forms. For sinuiarity problems difference measures are surveyed, with a full description of the wellestablmhed dynamic programming method relating this to the approach using probabilities and likelihoods. Searches for approximate matches in large sets using a difference function are seen to be an open problem still, though several promising ideas have been suggested. Approximate matching (error correction) during parsing is briefly reviewed." ] }
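The single-bit notion of similarity described above can be illustrated by thresholding an ordinary edit distance. The threshold value below is an arbitrary choice for illustration, not taken from the cited work:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similar(a: str, b: str, threshold: int = 2) -> bool:
    """Single-bit answer: are the strings within `threshold` edits?"""
    return levenshtein(a, b) <= threshold

print(similar("kitten", "mitten"))   # distance 1 -> True
print(similar("kitten", "sitting"))  # distance 3 -> False
```

Only the final boolean leaves the function, mirroring protocols that reveal a match/no-match bit rather than the distance itself.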
Alternatively, techniques from private set intersection (PSI) @cite_19 could be used. However, revealing the content of the intersection is not appropriate for a privacy-preserving protocol. Motivated by these security concerns, protocols for private set intersection cardinality (PSI-CA) were developed @cite_0 . Yet these solutions still reveal the intersection cardinality, whereas we only reveal whether there is a match.
{ "cite_N": [ "@cite_0", "@cite_19" ], "mid": [ "2114021497", "114088334" ], "abstract": [ "In many everyday scenarios, sensitive information must be shared between parties without complete mutual trust. Private set operations are particularly useful to enable sharing information with privacy, as they allow two or more parties to jointly compute operations on their sets (e.g., intersection, union, etc.), such that only the minimum required amount of information is disclosed. In the last few years, the research community has proposed a number of secure and efficient techniques for Private Set Intersection (PSI), however, somewhat less explored is the problem of computing the magnitude, rather than the contents, of the intersection – we denote this problem as Private Set Intersection Cardinality (PSI-CA). This paper explores a few PSI-CA variations and constructs several protocols that are more efficient than the state-of-the-art.", "Cryptographic protocols for Private Set Intersection (PSI) are the basis for many important privacy-preserving applications. Over the past few years, intensive research has been devoted to designing custom protocols for PSI based on homomorphic encryption and other public-key techniques, apparently due to the belief that solutions using generic approaches would be impractical. This paper explores the validity of that belief. We develop three classes of protocols targeted to different set sizes and domains, all based on Yao’s generic garbled-circuit method. We then compare the performance of our protocols to the fastest custom PSI protocols in the literature. Our results show that a careful application of garbled circuits leads to solutions that can run on million-element sets on typical desktops, and that can be competitive with the fastest custom protocols. 
Moreover, generic protocols like ours can be used directly for performing more complex secure computations, something we demonstrate by adding a simple information-auditing mechanism to our PSI protocols." ] }
Privacy-preserving protocols designed for approximate string comparisons can also be found in the literature @cite_8 @cite_21 , but they rely on interactive techniques such as oblivious transfer or secure computation. This precludes off-line execution, e.g., in the cloud. Furthermore, @cite_16 presents a more efficient solution, but it only matches exact strings, whereas we compare approximate strings.
{ "cite_N": [ "@cite_21", "@cite_16", "@cite_8" ], "mid": [ "2166971704", "", "2085801087" ], "abstract": [ "Many basic tasks in computational biology involve operations on individual DNA and protein sequences. These sequences, even when anonymized, are vulnerable to re-identification attacks and may reveal highly sensitive information about individuals. We present a relatively efficient, privacy-preserving implementation of fundamental genomic computations such as calculating the edit distance and Smith- Waterman similarity scores between two sequences. Our techniques are crypto graphically secure and significantly more practical than previous solutions. We evaluate our prototype implementation on sequences from the Pfam database of protein families, and demonstrate that its performance is adequate for solving real-world sequence-alignment and related problems in a privacy- preserving manner. Furthermore, our techniques have applications beyond computational biology. They can be used to obtain efficient, privacy-preserving implementations for many dynamic programming algorithms over distributed datasets.", "", "Human Desoxyribo-Nucleic Acid (DNA) sequences offer a wealth of information that reveal, among others, predisposition to various diseases and paternity relations. The breadth and personalized nature of this information highlights the need for privacy-preserving protocols. In this paper, we present a new error-resilient privacy-preserving string searching protocol that is suitable for running private DNA queries. This protocol checks if a short template (e.g., a string that describes a mutation leading to a disease), known to one party, is present inside a DNA sequence owned by another party, accounting for possible errors and without disclosing to each party the other party's input. Each query is formulated as a regular expression over a finite alphabet and implemented as an automaton. 
As the main technical contribution, we provide a protocol that allows to execute any finite state machine in an oblivious manner, requiring a communication complexity which is linear both in the number of states and the length of the input string." ] }
1209.5189
2000790146
Traffic flow is a very prominent example of a driven non-equilibrium system. A characteristic phenomenon of traffic dynamics is the spontaneous and abrupt drop of the average velocity on a stretch of road leading to congestion. Such a traffic breakdown corresponds to a boundary-induced phase transition from free flow to congested traffic. In this paper, we study the ability of selected microscopic traffic models to reproduce a traffic breakdown, and we investigate its spatiotemporal dynamics. For our analysis, we use empirical traffic data from stationary loop detectors on a German Autobahn showing a spontaneous breakdown. We then present several methods to assess the results and compare the models with each other. In addition, we will also discuss some important modeling aspects and their impact on the resulting spatiotemporal pattern. The investigation of different downstream boundary conditions, for example, shows that the physical origin of the traffic breakdown may be artificially induced by the setup of the boundaries.
Vehicular traffic is a system showing very complex behavior (e.g., metastability, shock-wave formation, and dynamic phase transitions @cite_3 @cite_32 @cite_0 ). Above a critical density, local inhomogeneities can trigger a collective phenomenon: a traffic breakdown. The initial position of the breakdown is usually located at a bottleneck (e.g., an on-ramp or an off-ramp), from where the congested traffic pattern propagates upstream.
{ "cite_N": [ "@cite_0", "@cite_32", "@cite_3" ], "mid": [ "2017149287", "2049176600", "2114667889" ], "abstract": [ "Certain aspects of traffic flow measurements imply the existence of a phase transition. Models known from chaos and fractals, such as nonlinear analysis of coupled differential equations, cellular automata, or coupled maps, can generate behavior which indeed resembles a phase transition in the flow behavior. Other measurements point out that the same behavior could be generated by geometrical constraints of the scenario. This paper looks at some of the empirical evidence, but mostly focuses on different modeling approaches. The theory of traffic jam dynamics is reviewed in some detail, starting from the well-established theory of kinematic waves and then veering into the area of phase transitions. One aspect of the theory of phase transitions is that, by changing one single parameter, a system can be moved from displaying a phase transition to not displaying a phase transition. This implies that models for traffic can be tuned so that they display a phase transition or not. This paper focuses on microscopic modeling, i.e., coupled differential equations, cellular automata, and coupled maps. The phase transition behavior of these models, as far as it is known, is discussed. Similarly, fluid-dynamical models for the same questions are considered. A large portion of this paper is given to the discussion of extensions and open questions, which makes clear that the question of traffic jam dynamics is, albeit important, only a small part of an interesting and vibrant field. As our outlook shows, the whole field is moving away from a rather static view of traffic toward a dynamic view, which uses simulation as an important tool.", "Since the subject of traffic dynamics has captured the interest of physicists, many astonishing effects have been revealed and explained. Some of the questions now understood are the following: Why are vehicles sometimes stopped by so-called phantom traffic jams'', although they all like to drive fast? What are the mechanisms behind stop-and-go traffic? Why are there several different kinds of congestion, and how are they related? Why do most traffic jams occur considerably before the road capacity is reached? Can a temporary reduction of the traffic volume cause a lasting traffic jam? Under which conditions can speed limits speed up traffic? Why do pedestrians moving in opposite directions normally organize in lanes, while similar systems are freezing by heating''? Why do self-organizing systems tend to reach an optimal state? Why do panicking pedestrians produce dangerous deadlocks? All these questions have been answered by applying and extending methods from statistical physics and non-linear dynamics to self-driven many-particle systems. This review article on traffic introduces (i) empirically data, facts, and observations, (ii) the main approaches to pedestrian, highway, and city traffic, (iii) microscopic (particle-based), mesoscopic (gas-kinetic), and macroscopic (fluid-dynamic) models. Attention is also paid to the formulation of a micro-macro link, to aspects of universality, and to other unifying concepts like a general modelling framework for self-driven many-particle systems, including spin systems. Subjects such as the optimization of traffic flows and relations to biological or socio-economic systems such as bacterial colonies, flocks of birds, panics, and stock market dynamics are discussed as well.", "In the so-called “microscopic” models of vehicular traffic, attention is paid explicitly to each individual vehicle each of which is represented by a “particle”; the nature of the “interactions” among these particles is determined by the way the vehicles influence each others’ movement. Therefore, vehicular traffic, modeled as a system of interacting “particles” driven far from equilibrium, offers the possibility to study various fundamental aspects of truly nonequilibrium systems which are of current interest in statistical physics. Analytical as well as numerical techniques of statistical physics are being used to study these models to understand rich variety of physical phenomena exhibited by vehicular traffic. Some of these phenomena, observed in vehicular traffic under different circumstances, include transitions from one dynamical phase to another, criticality and self-organized criticality, metastability and hysteresis, phase-segregation, etc. In this critical review, written from the perspective of statistical physics, we explain the guiding principles behind all the main theoretical approaches. But we present detailed discussions on the results obtained mainly from the so-called “particle-hopping” models, particularly emphasizing those which have been formulated in recent years using the language of cellular automata." ] }
1209.5189
2000790146
Traffic flow is a very prominent example of a driven non-equilibrium system. A characteristic phenomenon of traffic dynamics is the spontaneous and abrupt drop of the average velocity on a stretch of road leading to congestion. Such a traffic breakdown corresponds to a boundary-induced phase transition from free flow to congested traffic. In this paper, we study the ability of selected microscopic traffic models to reproduce a traffic breakdown, and we investigate its spatiotemporal dynamics. For our analysis, we use empirical traffic data from stationary loop detectors on a German Autobahn showing a spontaneous breakdown. We then present several methods to assess the results and compare the models with each other. In addition, we will also discuss some important modeling aspects and their impact on the resulting spatiotemporal pattern. The investigation of different downstream boundary conditions, for example, shows that the physical origin of the traffic breakdown may be artificially induced by the setup of the boundaries.
A similar behavior is known from one-dimensional driven particle systems with open boundaries. The bulk dynamics of such systems is governed by the rates at which particles enter or leave the system at the boundaries @cite_4 @cite_21 @cite_9 @cite_33 . The resulting phase diagram reveals distinct phases separated by first and second order phase transitions, respectively. Depending on the inflow and outflow rates of the system, a local perturbation may move either along the flow of particles or in the opposite direction. When compared with vehicular traffic, the latter case may be interpreted as a traffic jam propagating upstream. Similarly, the shock, which marks a discontinuity in the density profile, can be seen as the jam's upstream front. Hence, a traffic breakdown is a spatiotemporal phenomenon, whose observation requires a relatively broad spatial and temporal horizon.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_4", "@cite_33" ], "mid": [ "2067212895", "1994466062", "2031735592", "" ], "abstract": [ "We study the steady-state behavior of a driven non-equilibrium lattice gas of hard-core particles with next-nearest-neighbor interaction. We calculate the exact stationary distribution of the periodic system and for a particular line in the phase diagram of the system with open boundaries where particles can enter and leave the system. For repulsive interactions the dynamics can be interpreted as a two-speed model for traffic flow. The exact stationary distribution of the periodic continuous-time system turns out to coincide with that of the asymmetric exclusion process (ASEP) with discrete-time parallel update. However, unlike in the (single-speed) ASEP, the exact flow diagram for the two-speed model resembles in some important features the flow diagram of real traffic. The stationary phase diagram of the open system obtained from Monte Carlo simulations can be understood in terms of a shock moving through the system and an overfeeding effect at the boundaries, thus confirming theoretical predictions of a recently developed general theory of boundary-induced phase transitions. In the case of attractive interaction we observe an unexpected reentrance transition due to boundary effects.", "We investigate the stationary states of one-dimensional driven diffusive systems, coupled to boundary reservoirs with fixed particle densities. We argue that the generic phase diagram is governed by an extremal principle for the macroscopic current irrespective of the local dynamics. In particular, we predict a minimal current phase for systems with local minimum in the current density relation. This phase is explained by a dynamical phenomenon, the branching and coalescence of shocks; Monte Carlo simulations confirm the theoretical scenario.", "We consider the asymmetric simple exclusion process (ASEP) with open boundaries and other driven stochastic lattice gases of particles entering, hopping and leaving a one-dimensional lattice. The long-term system dynamics, stationary states, and the nature of phase transitions between steady states can be understood in terms of the interplay of two characteristic velocities, the collective velocity and the shock (domain wall) velocity. This interplay results in two distinct types of domain walls whose dynamics is computed. We conclude that the phase diagram of the ASEP is generic for one-component driven lattice gases with a single maximum in the current-density relation.", "" ] }
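The boundary-controlled behavior described above is easy to reproduce in the simplest such model, the open-boundary TASEP (totally asymmetric exclusion process). The sketch below is illustrative and not taken from the cited works: the lattice size, rates and the random-sequential update scheme are arbitrary choices. Particles enter at the left with rate alpha, hop one site to the right, and leave at the right with rate beta; for beta < 1/2 and alpha > beta the system settles into the high-density phase with bulk density 1 - beta.

```python
import random

def tasep_profile(n_sites=100, alpha=0.6, beta=0.3, steps=400_000, seed=7):
    """Open-boundary TASEP with random-sequential update.

    Particles enter at the left with rate alpha, hop one site to the
    right when the target site is empty, and leave at the right with
    rate beta. Returns the time-averaged density profile.
    """
    rng = random.Random(seed)
    lattice = [0] * n_sites
    occ = [0.0] * n_sites
    burn_in, samples = steps // 2, 0
    for t in range(steps):
        bond = rng.randrange(n_sites + 1)
        if bond == 0:                      # injection at the left boundary
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif bond == n_sites:              # extraction at the right boundary
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0
        elif lattice[bond - 1] == 1 and lattice[bond] == 0:
            lattice[bond - 1], lattice[bond] = 0, 1   # bulk hop to the right
        if t >= burn_in and t % 100 == 0:  # sample the configuration
            for j in range(n_sites):
                occ[j] += lattice[j]
            samples += 1
    return [o / samples for o in occ]

# alpha > beta and beta < 1/2: high-density phase, bulk density ~ 1 - beta
profile = tasep_profile()
bulk = sum(profile[25:75]) / 50
print(f"bulk density ~ {bulk:.2f} (high-density phase predicts {1 - 0.3:.2f})")
```

Reversing the roles of the boundaries (alpha < beta, alpha < 1/2) yields the low-density phase instead, which is the first-order transition the text refers to.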
Currently, stationary loop detectors are still the most common source of traffic data. In their simplest form they count the number of passing vehicles and measure their velocity aggregated over intervals of one minute. These values already allow an empirical fundamental diagram of traffic flow to be drawn. A more detailed picture of inter-vehicle dynamics can be obtained if single-vehicle data are available. Knospe et al. @cite_20 , for instance, used data from loop detectors which also measured time-headways (i.e., the time passing between two vehicles crossing a detector). They analyzed the distribution of time-headways depending on the density and studied the functional relation between speed and distance to the preceding car, known as the optimal velocity function. In addition, they compared these data to seven traffic cellular automata (CA) models. Knospe et al. found significant differences between the examined models. In particular, the earlier and simpler models were not able to satisfactorily reproduce the empirical results. More advanced models like the comfortable driving model (CDM), which is one of the models to be studied in this paper, showed good agreement with the empirical data.
{ "cite_N": [ "@cite_20" ], "mid": [ "2067179438" ], "abstract": [ "Based on a detailed microscopic test scenario motivated by recent empirical studies of single-vehicle data, several cellular automaton models for traffic flow are compared. We find three levels of agreement with the empirical data: (1) models that do not reproduce even qualitatively the most important empirical observations, (2) models that are on a macroscopic level in reasonable agreement with the empirics, and (3) models that reproduce the empirical data on a microscopic level as well. Our results are not only relevant for applications, but also shed light on the relevant interactions in traffic flow." ] }
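The aggregation step behind such an empirical fundamental diagram can be illustrated in a few lines: one-minute counts and mean speeds from a detector are converted into flow-density points via the hydrodynamic relation q = ρ·v, and the mean time-headway follows as the inverse of the flow. The readings below are made up for illustration; real values would come from the loop detectors.

```python
# Hypothetical one-minute loop-detector aggregates: (vehicle count, mean speed in km/h).
readings = [(8, 110.0), (15, 95.0), (30, 72.0), (35, 40.0), (25, 18.0)]

points = []
for count, speed in readings:
    flow = count * 60            # veh/h, extrapolated from the 1-minute count
    density = flow / speed       # veh/km, from the hydrodynamic relation q = rho * v
    headway = 3600.0 / flow      # mean time-headway in seconds
    points.append((density, flow, headway))

for density, flow, headway in points:
    print(f"density {density:6.1f} veh/km | flow {flow:5d} veh/h | headway {headway:4.1f} s")
```

Plotting flow against density for many such intervals produces the familiar free-flow branch and the scattered congested branch of the empirical fundamental diagram.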
For this reason the data of a single detector do not suffice to study the spatiotemporal dynamics. Analyzing the time series of a sequence of neighboring detectors removes this restriction. Therefore, we have chosen a highway section with a sufficient number of detectors for our study. To examine spatiotemporal traffic dynamics, we have selected two models that gave good results in previous studies. As a reference, we have also included the Nagel-Schreckenberg model (NSM) @cite_17 , which is a rather simplistic traffic cellular automaton @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_17" ], "mid": [ "1968508709", "2154376416" ], "abstract": [ "Abstract In this paper, we give an elaborate and understandable review of traffic cellular automata (TCA) models, which are a class of computationally efficient microscopic traffic flow models. TCA models arise from the physics discipline of statistical mechanics, having the goal of reproducing the correct macroscopic behaviour based on a minimal description of microscopic interactions. After giving an overview of cellular automata (CA) models, their background and physical setup, we introduce the mathematical notations, show how to perform measurements on a TCA model's lattice of cells, as well as how to convert these quantities into real-world units and vice versa. The majority of this paper then relays an extensive account of the behavioural aspects of several TCA models encountered in literature. Already, several reviews of TCA models exist, but none of them consider all the models exclusively from the behavioural point of view. In this respect, our overview fills this void, as it focusses on the behaviour of the TCA models, by means of time–space and phase-space diagrams, and histograms showing the distributions of vehicles’ speeds, space, and time gaps. In the report, we subsequently give a concise overview of TCA models that are employed in a multi-lane setting, and some of the TCA models used to describe city traffic as a two-dimensional grid of cells, or as a road network with explicitly modelled intersections. The final part of the paper illustrates some of the more common analytical approximations to single-cell TCA models.", "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stop- waves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained." ] }
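The Nagel-Schreckenberg model used as a reference is compact enough to state in full. The following sketch implements its four update rules (acceleration, braking, random dawdling, movement) on a ring road; the parameters are the standard textbook choices (vmax = 5, dawdling probability 0.3), not values fitted in this paper.

```python
import random

VMAX, P_DAWDLE = 5, 0.3  # standard NSM parameters, not fitted values

def nasch_step(road):
    """One parallel update of the Nagel-Schreckenberg cellular automaton.
    road[i] is the velocity of the car in cell i, or None if the cell is empty."""
    n = len(road)
    new_road = [None] * n
    for i, v in enumerate(road):
        if v is None:
            continue
        v = min(v + 1, VMAX)                       # 1. accelerate towards vmax
        gap = 1                                    # 2. brake: count free cells ahead
        while gap <= v and road[(i + gap) % n] is None:
            gap += 1
        v = min(v, gap - 1)
        if v > 0 and random.random() < P_DAWDLE:   # 3. random dawdling
            v -= 1
        new_road[(i + v) % n] = v                  # 4. move (periodic boundary)
    return new_road

# Demo: ring road of 100 cells at density ~0.3
random.seed(42)
road = [0 if random.random() < 0.3 else None for _ in range(100)]
n_cars = sum(1 for v in road if v is not None)
for _ in range(200):
    road = nasch_step(road)
flow = sum(v for v in road if v is not None) / len(road)  # density * mean speed
print(f"{n_cars} cars, flow ~ {flow:.2f} cars per cell and time step")
```

At densities above roughly 1/(vmax+1) the dawdling step spontaneously nucleates the backward-moving jams mentioned in the abstract of @cite_17 .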
The only comparisons between empirically observed and simulated traffic dynamics that the authors are aware of were carried out by Treiber et al. @cite_22 , Popkov et al. @cite_34 and Kerner et al. @cite_28 . The first two articles, however, focused on single-lane dynamics and did not provide quantitative results. Lane changes have an important influence on traffic dynamics, as Kerner and Klenov @cite_13 found by analyzing vehicle trajectories: lane changing between neighboring lanes is responsible for the emergence (and dissolution) of congested traffic states. The article by Kerner et al. offers a very detailed discussion of traffic dynamics and a qualitative comparison of empirical data with two models based on Kerner's three-phase traffic theory. The authors, however, provide neither a quantitative analysis nor a discussion of the influence of the various parameters on their results.
{ "cite_N": [ "@cite_28", "@cite_34", "@cite_13", "@cite_22" ], "mid": [ "2057165483", "2029716009", "2007776214", "1965455100" ], "abstract": [ "A review of dynamic nonlinear features of spatiotemporal congested patterns in freeway traffic is presented. The basis of the review is a comparison of theoretical features of the congested patterns that are shown by a microscopic traffic flow model in the context of the Kerner's three-phase traffic theory and empirical microscopic and macroscopic pattern characteristics measured on different freeways over various days and years. In this test of the microscopic three-phase traffic flow theory, a model of an \"open\" road is applied: Empirical time-dependence of traffic demand and drivers' destinations are used at the upstream model boundaries. At downstream model boundary conditions for vehicle freely leaving a modeling freeway section(s) are given. Spatiotemporal congested patterns emerge, develop, and dissolve in this open freeway model with the same types of bottlenecks as those in empirical observations. It is found that microscopic three-phase traffic models can explain all known macroscopic and microscopic empirical congested pattern features (e.g., probabilistic breakdown phenomenon as a first-order phase transition from free flow to synchronized flow, moving jam emergence in synchronized flow rather than in free flow, spatiotemporal features of synchronized flow and general congested patterns at freeway bottlenecks, intensification of downstream congestion due to upstream congestion at adjacent bottlenecks). It turns out that microscopic optimal velocity (OV) functions and time headway distributions are not necessarily qualitatively different, even if local congested traffic behavior is qualitatively different. Model performance with respect to spatiotemporal pattern emergence and evolution cannot be tested using these traffic characteristics. The reason for this is that important spatiotemporal features of congested traffic patterns are lost in these and many other macroscopic and microscopic traffic characteristics, which are widely used as the empirical basis for a test of traffic flow models.", "A recently developed theory for boundary-induced phenomena in nonequilibrium systems predicts the existence of various steady-state phase transitions induced by the motion of a shock wave. We provide direct empirical evidence that a phase transition between a free flow and a congested phase occurring in traffic flow on highways in the vicinity of on- and off-ramps can be interpreted as an example of such a boundary-induced phase transition of first order. We analyse the empirical traffic data and give a theoretical interpretation of the transition in terms of the macroscopic current. Additionally we support the theory with computer simulations of the Nagel-Schreckenberg model of vehicular traffic on a road segment which also exhibits the expected second-order transition. Our results suggest ways to predict and to some extent to optimize the capacity of a general traffic network.", "", "We present data from several German freeways showing different kinds of congested traffic forming near road inhomogeneities, specifically lane closings, intersections, or uphill gradients. The states are localized or extended, homogeneous or oscillating. Combined states are observed as well, like the coexistence of moving localized clusters and clusters pinned at road inhomogeneities, or regions of oscillating congested traffic upstream of nearly homogeneous congested traffic. The experimental findings are consistent with a recently proposed theoretical phase diagram for traffic near on-ramps [D. Helbing, A. Hennecke, and M. Treiber, Phys. Rev. Lett. 82, 4360 (1999)]. We simulate these situations with a continuous microscopic single-lane model, the intelligent driver model,'' using empirical boundary conditions. All observations, including the coexistence of states, are qualitatively reproduced by describing inhomogeneities with local variations of one model parameter. We show that the results of the microscopic model can be understood by formulating the theoretical phase diagram for bottlenecks in a more general way. In particular, a local drop of the road capacity induced by parameter variations has essentially the same effect as an on-ramp." ] }
1209.4227
2953068027
Edge bundling reduces the visual clutter in a drawing of a graph by uniting the edges into bundles. We propose a method of edge bundling drawing each edge of a bundle separately as in metro-maps and call our method ordered bundles. To produce aesthetically looking edge routes it minimizes a cost function on the edges. The cost function depends on the ink, required to draw the edges, the edge lengths, widths and separations. The cost also penalizes for too many edges passing through narrow channels by using the constrained Delaunay triangulation. The method avoids unnecessary edge-node and edge-edge crossings. To draw edges with the minimal number of crossings and separately within the same bundle we develop an efficient algorithm solving a variant of the metro-line crossing minimization problem. In general, the method creates clear and smooth edge routes giving an overview of the global graph structure, while still drawing each edge separately and thus enabling local analysis.
The problem of ordering paths along the edges of an embedded graph is known as the metro-line crossing minimization (MLCM) problem. The problem was introduced by Benkert et al. @cite_21 , and was studied in several variants in @cite_12 @cite_17 @cite_5 @cite_16 . We mention two variants of the MLCM problem that are related to our path ordering problem.
{ "cite_N": [ "@cite_21", "@cite_5", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "1564822866", "1765507333", "", "1587972828", "2114857285" ], "abstract": [ "In this paper we consider a new problem that occurs when drawing wiring diagrams or public transportation networks. Given an embedded graph G = (V, E) (e.g., the streets served by a bus network) and a set L of paths in G (e.g., the bus lines), we want to draw the paths along the edges of G such that they cross each other as few times as possible. For esthetic reasons we insist that the relative order of the paths that traverse a node does not change within the area occupied by that node. Our main contribution is an algorithm that minimizes the number of crossings on a single edge u, v ∈ E if we are given the order of the incoming and outgoing paths. The difficulty is deciding the order of the paths that terminate in u or v with respect to the fixed order of the paths that do not end there. Our algorithm uses dynamic programming and takes O(n2) time, where n is the number of terminating paths.", "The metro-line crossing minimization (MLCM) problem was recently introduced as a response to the problem of drawing metro maps or public transportation networks, in general. According to this problem, we are given a planar, embedded graph G = (V ,E ) and a set L of simple paths on G , called lines . The main task is to place the lines on G , so that the number of crossings among pairs of lines is minimized. Our main contribution is two polynomial time algorithms. The first solves the general case of the MLCM problem, where the lines that traverse a particular vertex of G are allowed to use any side of it to either \"enter\" or \"exit\", assuming that the endpoints of the lines are located at vertices of degree one. The second one solves more efficiently the restricted case, where only the left and the right side of each vertex can be used. To the best of our knowledge, this is the first time where the general case of the MLCM problem is solved. Previous work was devoted to the restricted case of the MLCM problem under the additional assumption that the endpoints of the lines are either the topmost or the bottommost in their corresponding vertices, i.e., they are either on top or below the lines that pass through the vertex. Even for this case, we improve a known result of from O (|E |5 2|L |3) to O (|V |(|E | + |L |)).", "", "In this paper we consider a problem that occurs when drawing public transportation networks. Given an embedded graph G = (V, E) (e.g. the railroad network) and a set H of paths in G (e.g. the train lines), we want to draw the paths along the edges of G such that they cross each other as few times as possible. For aesthetic reasons we insist that the relative order of the paths that traverse a vertex does not change within the area occupied by the vertex. We prove that the problem, which is known to be NP-hard, can be rewritten as an integer linear program that finds the optimal solution for the problem. In the case when the order of the endpoints of the paths is fixed we prove that the problem can be solved in polynomial time. This improves a recent result by (2007).", "We consider the problem of drawing a set of simple paths along the edges of an embedded underlying graph G = (V,E), so that the total number of crossings among pairs of paths is minimized. This problem arises when drawing metro maps, where the embedding of G depicts the structure of the underlying network, the nodes of G correspond to train stations, an edge connecting two nodes implies that there exists a railway line which connects them, whereas the paths illustrate the lines connecting terminal stations. We call this the metro-line crossing minimization problem (MLCM). In contrast to the problem of drawing the underlying graph nicely, MLCM has received fewer attention. It was recently introduced by Benkert et. al in [4]. In this paper, as a first step towards solving MLCM in arbitrary graphs, we study path and tree networks. We examine several variations of the problem for which we develop algorithms for obtaining optimal solutions." ] }
Asquith et al. @cite_12 defined the so-called MLCM-FixedSE problem, in which it is specified whether a path terminates at the top or bottom of its terminal node. For a graph @math and a set of paths @math they give an algorithm with @math time complexity. A closely related problem called MLCM-T1, in which all paths connect degree- @math terminal nodes in @math , was considered by Argyriou et al. @cite_5 . Their algorithm computes an optimal ordering of paths and has a running time of @math . Recently, Nöllenburg @cite_16 presented an @math -time algorithm for these variants of the MLCM problem. Our algorithm can be used to solve both the MLCM-FixedSE and MLCM-T1 problems with complexity @math . To the best of our knowledge, this is the fastest solution to date for these variants of MLCM.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_12" ], "mid": [ "1765507333", "", "1587972828" ], "abstract": [ "The metro-line crossing minimization (MLCM) problem was recently introduced as a response to the problem of drawing metro maps or public transportation networks, in general. According to this problem, we are given a planar, embedded graph G = (V ,E ) and a set L of simple paths on G , called lines . The main task is to place the lines on G , so that the number of crossings among pairs of lines is minimized. Our main contribution is two polynomial time algorithms. The first solves the general case of the MLCM problem, where the lines that traverse a particular vertex of G are allowed to use any side of it to either \"enter\" or \"exit\", assuming that the endpoints of the lines are located at vertices of degree one. The second one solves more efficiently the restricted case, where only the left and the right side of each vertex can be used. To the best of our knowledge, this is the first time where the general case of the MLCM problem is solved. Previous work was devoted to the restricted case of the MLCM problem under the additional assumption that the endpoints of the lines are either the topmost or the bottommost in their corresponding vertices, i.e., they are either on top or below the lines that pass through the vertex. Even for this case, we improve a known result of from O (|E |5 2|L |3) to O (|V |(|E | + |L |)).", "", "In this paper we consider a problem that occurs when drawing public transportation networks. Given an embedded graph G = (V, E) (e.g. the railroad network) and a set H of paths in G (e.g. the train lines), we want to draw the paths along the edges of G such that they cross each other as few times as possible. For aesthetic reasons we insist that the relative order of the paths that traverse a vertex does not change within the area occupied by the vertex. We prove that the problem, which is known to be NP-hard, can be rewritten as an integer linear program that finds the optimal solution for the problem. In the case when the order of the endpoints of the paths is fixed we prove that the problem can be solved in polynomial time. This improves a recent result by (2007)." ] }
1209.4761
2215718072
Two problems in the search of metric characteristics on weighted undirected graphs with non-negative edge weights are being considered. The first problem: a weighted undirected graph with non-negative edge weight is given. The radius, diameter and at least one center and one pair of peripheral vertices of the graph are to be found. In the second problem we have additionally calculated the distances matrix. For the problems being considered, we proposed fast search algorithms which use only small fraction of graph's vertices for the search of the metric characteristics. The proposed algorithms have been compared to other popular methods of solving problems considered on various inputs.
In the case where the distance matrix of a graph is unknown, the most common method of searching for the graph's metric characteristics is to solve the all-pairs shortest path problem (APSP). Most of the algorithms solving the APSP problem are not universal, and show a good solution speed only for graphs with a certain set of properties. In particular, there are algorithms for sparse graphs @cite_2 , for graphs with bounded integer edge weights @cite_6 , etc. Although the complexity of solving APSP gradually decreases with the invention of new algorithms, this approach to the search for metric characteristics of weighted graphs still remains impractical for graphs with a large number of vertices.
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "1823654214", "2169528473" ], "abstract": [ "We show that the all pairs shortest paths (APSP) problem for undirected graphs with integer edge weights taken from the range 1, 2, ..., M can be solved using only a logarithmic number of distance products of matrices with elements in the range (1, 2, ..., M). As a result, we get an algorithm for the APSP problem in such graphs that runs in O (Mn sup spl omega ) time, where n is the number of vertices in the input graph, M is the largest edge weight in the graph, and spl omega <2.376 is the exponent of matrix multiplication. This improves, and also simplifies, an O (M sup ( spl omega +1) 2 n sup spl omega ) time algorithm of Galil and Margalit (1997).", "We consider n points (nodes), some or all pairs of which are connected by a branch; the length of each branch is given. We restrict ourselves to the case where at least one path exists between any two nodes. We now consider two problems. Problem 1. Constrnct the tree of minimum total length between the n nodes. (A tree is a graph with one and only one path between every two nodes.) In the course of the construction that we present here, the branches are subdivided into three sets: I. the branches definitely assignec to the tree under construction (they will form a subtree) ; II. the branches from which the next branch to be added to set I, will be selected ; III. the remaining branches (rejected or not yet considered). The nodes are subdivided into two sets: A. the nodes connected by the branches of set I, B. the remaining nodes (one and only one branch of set II will lead to each of these nodes), We start the construction by choosing an arbitrary node as the only member of set A, and by placing all branches that end in this node in set II. To start with, set I is empty. From then onwards we perform the following two steps repeatedly. Step 1. The shortest branch of set II is removed from this set and added to" ] }
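For concreteness, the APSP baseline described above can be sketched directly: compute the full distance matrix with Floyd-Warshall in O(n^3) and read the eccentricities, radius, diameter, a center and a peripheral pair off it. The function below is illustrative, not an algorithm from the cited works.

```python
INF = float("inf")

def metric_characteristics(n, edges):
    """Radius, diameter, one center and one peripheral pair of a weighted
    undirected graph with non-negative edge weights, via Floyd-Warshall.
    edges: iterable of (u, v, w) triples; the graph is assumed connected."""
    # Initialize the distance matrix from the edge list.
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    # Floyd-Warshall relaxation: allow intermediate vertex k.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    ecc = [max(row) for row in d]            # eccentricity of each vertex
    radius, diameter = min(ecc), max(ecc)
    center = ecc.index(radius)               # a vertex of minimum eccentricity
    u = ecc.index(diameter)
    v = d[u].index(diameter)                 # (u, v) is a peripheral pair
    return radius, diameter, center, (u, v)

# Path graph 0-1-2-3 with unit weights: radius 2, diameter 3.
print(metric_characteristics(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1)]))
```

The triple loop makes the O(n^3) cost explicit, which is exactly why this route is impractical for graphs with a large number of vertices.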
1209.4761
2215718072
We consider two problems in the search for metric characteristics of weighted undirected graphs with non-negative edge weights. In the first problem, a weighted undirected graph with non-negative edge weights is given, and the radius, diameter, at least one center, and at least one pair of peripheral vertices of the graph are to be found. In the second problem, the distance matrix is additionally given. For both problems we propose fast search algorithms that use only a small fraction of the graph's vertices to find the metric characteristics. The proposed algorithms are compared to other popular methods on various inputs.
There are also algorithms that find metric characteristics for graphs with a special structure. Among these, for example, are the algorithm for finding the diameter of small-world network graphs @cite_5 and the algorithm for finding the center and diameter of benzenoid-system graphs @cite_4 . To speed up the solution, these algorithms exploit peculiarities of the respective graph classes, and therefore their range of effective application is strictly limited.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2143813994", "2024502228" ], "abstract": [ "In this paper we present a novel approach to determine the exact diameter (longest shortest path length) of large graphs, in particular of the nowadays frequently studied small world networks. Typical examples include social networks, gene networks, web graphs and internet topology networks. Due to complexity issues, the diameter is often calculated based on a sample of only a fraction of the nodes in the graph, or some approximation algorithm is applied. We instead propose an exact algorithm that uses various lower and upper bounds as well as effective node selection and pruning strategies in order to evaluate only the critical nodes which ultimately determine the diameter. We will show that our algorithm is able to quickly determine the exact diameter of various large datasets of small world networks with millions of nodes and hundreds of millions of links, whereas before only approximations could be given.", "In this note, we present first linear time algorithms for computing the center and the diameter of several classes of face regular plane graphs: triangulations with inner vertices of degree ≥ 6, quadrangulations with inner vertices of degree ≥ 4 and the subgraphs of the regular hexagonal grid bounded by a simple circuit of this grid." ] }
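The bound-based idea behind such exact-diameter algorithms for unweighted graphs can be illustrated as follows. This is a simplified sketch, not the cited algorithm itself: a BFS from any vertex v yields ecc(v), and by the triangle inequality the diameter lies between ecc(v) and 2·ecc(v); scanning vertices tightens these bounds until they meet, often long before every vertex has been visited.

```python
from collections import deque

def bfs_ecc(adj, v):
    """Eccentricity of v in a connected unweighted graph (adjacency lists)."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return max(dist.values())

def diameter_by_bounds(adj):
    """Exact diameter via eccentricity bounds: after each BFS the diameter
    lies in [max ecc seen, 2 * min ecc seen]; stop early when bounds meet.
    A smarter vertex-selection order prunes far more on real networks."""
    lower, upper = 0, float('inf')
    for v in adj:
        e = bfs_ecc(adj, v)
        lower = max(lower, e)
        upper = min(upper, 2 * e)
        if lower == upper:
            break
    return lower

# Path graph 0-1-2-3 has diameter 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(diameter_by_bounds(adj))  # → 3
```

On a star graph the loop terminates after the second BFS, since ecc(center) = 1 forces the diameter into [2, 2]; this early-exit behavior is the essence of why bound-based methods visit only the critical vertices.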
1209.4115
1996167318
Compensating changes between a subject's training and testing sessions in brain-computer interfacing (BCI) is challenging but of great importance for robust BCI operation. We show that such changes are very similar between subjects, and thus can be reliably estimated using data from other users and utilized to construct an invariant feature space. This novel approach to learning from other subjects aims to reduce the adverse effects of common nonstationarities, but does not transfer discriminative information. This is an important conceptual difference to standard multi-subject methods that, e.g., improve the covariance matrix estimation by shrinking it toward the average of other users or construct a global feature space. These methods do not reduce the shift between training and test data and may produce poor results when subjects have very different signal characteristics. In this paper, we compare our approach to two state-of-the-art multi-subject methods on toy data and two datasets of EEG recordings from subjects performing motor imagery. We show that it can not only achieve a significant increase in performance, but also that the extracted change patterns allow for a neurophysiologically meaningful interpretation.
Several CSP extensions utilizing information from other subjects have been proposed in the context of zero-training BCI and the small-sample setting. For instance, a recently proposed method @cite_23 learns a spatial filter for a new subject based on its own data and that of other users. Another recent work @cite_21 regularizes the Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA) algorithms based on data from a subset of automatically selected subjects. A method that aims at zero training for brain-computer interfacing by utilizing knowledge collected from the same subject in previous sessions was proposed in @cite_7 @cite_0 @cite_27 . The authors of @cite_9 train a classifier that is able to learn from multiple subjects by multi-task learning. The method proposed in @cite_18 uses the similarity between subjects, measured by the Kullback-Leibler divergence, as a weight for improving the covariance estimation by shrinkage.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_9", "@cite_21", "@cite_0", "@cite_27", "@cite_23" ], "mid": [ "2129023315", "2018364998", "1562554760", "1973574316", "", "", "2090158744" ], "abstract": [ "Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most of existing CSP-based methods exploit covariance matrices on a subject-by-subject basis so that inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of subjects in consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of covariance matrices involving subjects, leading to composite CSP. Numerical experiments on dataset IVa in BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially in the case of subjects with fewer number of training samples.", "Electroencephalogram (EEG) signals are highly subject-specific and vary considerably even between recording sessions of the same user within the same experimental paradigm. This challenges a stable operation of Brain-Computer Interface (BCI) systems. The classical approach is to train users by neurofeedback to produce fixed stereotypical patterns of brain activity. In the machine learning approach, a widely adapted method for dealing with those variances is to record a so called calibration measurement on the beginning of each session in order to optimize spatial filters and classifiers specifically for each subject and each day. This adaptation of the system to the individual brain signature of each user relieves from the need of extensive user training. In this paper we suggest a new method that overcomes the requirement of these time-consuming calibration recordings for long-term BCI users. 
The method takes advantage of knowledge collected in previous sessions: By a novel technique, prototypical spatial filters are determined which have better generalization properties compared to single-session filters. In particular, they can be used in follow-up sessions without the need to recalibrate the system. This way the calibration periods can be dramatically shortened or even completely omitted for these ‘experienced’ BCI users. The feasibility of our novel approach is demonstrated with a series of online BCI experiments. Although performed without any calibration measurement at all, no loss of classification performance was observed.", "Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subjectspecific calibration data prior to actual use of the BCI for communication. In this paper, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process. We discuss how this out-of-the-box BCI can be further improved in a computationally efficient manner as subject-specific data becomes available. The feasibility of the approach is demonstrated on two sets of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of 19 healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and combining prior recordings with subjectspecific calibration data substantially outperforms using subject-specific data only. Our results further show that transfer between recordings under slightly different experimental setups is feasible.", "A major limitation of Brain-Computer Interfaces (BCI) is their long calibration time, as much data from the user must be collected in order to tune the BCI for this target user. In this paper, we propose a new method to reduce this calibration time by using data from other subjects. 
More precisely, we propose an algorithm to regularize the Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA) algorithms based on the data from a subset of automatically selected subjects. An evaluation of our approach showed that our method significantly outperformed the standard BCI design especially when the amount of data from the target user is small. Thus, our approach helps in reducing the amount of data needed to achieve a given performance level.", "", "", "Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern filter (CSP) as preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time consuming to collect. In order to reduce the amount of calibration data that is needed for a new subject, one can apply multitask (from now on called multisubject) machine learning techniques to the preprocessing phase. Here, the goal of multisubject learning is to learn a spatial filter for a new subject based on its own data and that of other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low." ] }
1209.4115
1996167318
Compensating changes between a subject's training and testing sessions in brain-computer interfacing (BCI) is challenging but of great importance for robust BCI operation. We show that such changes are very similar between subjects, and thus can be reliably estimated using data from other users and utilized to construct an invariant feature space. This novel approach to learning from other subjects aims to reduce the adverse effects of common nonstationarities, but does not transfer discriminative information. This is an important conceptual difference to standard multi-subject methods that, e.g., improve the covariance matrix estimation by shrinking it toward the average of other users or construct a global feature space. These methods do not reduce the shift between training and test data and may produce poor results when subjects have very different signal characteristics. In this paper, we compare our approach to two state-of-the-art multi-subject methods on toy data and two datasets of EEG recordings from subjects performing motor imagery. We show that it can not only achieve a significant increase in performance, but also that the extracted change patterns allow for a neurophysiologically meaningful interpretation.
The method proposed by Lotte and Guan @cite_21 regularizes the estimated covariance matrix towards the average covariance matrix of other subjects. This kind of regularization may largely improve the estimation quality of the high-dimensional covariance matrix if data is scarce. The estimate for subject @math can be written as a weighted combination, where @math is the covariance matrix of class @math for the subject of interest, @math are the covariance matrices of the other @math subjects and @math is a regularization parameter controlling the amount of information incorporated from other users. This method is based on a very restrictive assumption, namely the similarity between covariance matrices of different subjects. The authors in @cite_21 recognized that this assumption is often violated due to large inter-subject variability, so they proposed a sequential algorithm for subject selection. In the following we will refer to this approach as covariance-based CSP (covCSP).
{ "cite_N": [ "@cite_21" ], "mid": [ "1973574316" ], "abstract": [ "A major limitation of Brain-Computer Interfaces (BCI) is their long calibration time, as much data from the user must be collected in order to tune the BCI for this target user. In this paper, we propose a new method to reduce this calibration time by using data from other subjects. More precisely, we propose an algorithm to regularize the Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA) algorithms based on the data from a subset of automatically selected subjects. An evaluation of our approach showed that our method significantly outperformed the standard BCI design especially when the amount of data from the target user is small. Thus, our approach helps in reducing the amount of data needed to achieve a given performance level." ] }
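The regularized covariance estimate described in the preceding paragraph appears here only as @math placeholders; it presumably takes the following convex-combination form, where the symbol names are assumptions reconstructed from the cited description:

```latex
\tilde{\Sigma}_c^{(i)} \;=\; (1-\lambda)\,\Sigma_c^{(i)}
  \;+\; \frac{\lambda}{M}\sum_{\substack{j=1 \\ j \neq i}}^{M+1} \Sigma_c^{(j)},
\qquad \lambda \in [0, 1],
```

with \(\Sigma_c^{(i)}\) the class-\(c\) covariance matrix of the subject of interest, the sum running over the \(M\) other subjects, and \(\lambda\) controlling how much information is borrowed from other users; \(\lambda = 0\) recovers standard subject-specific CSP.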
1209.4275
2546891830
A central problem of surveillance is to monitor multiple targets moving in a large-scale, obstacle-ridden environment with occlusions. This paper presents a novel principled Partially Observable Markov Decision Process-based approach to coordinating and controlling a network of active cameras for tracking and observing multiple mobile targets at high resolution in such surveillance environments. Our proposed approach is capable of (a) maintaining a belief over the targets' states (i.e., locations, directions, and velocities) to track them, even when they may not be observed directly by the cameras at all times, (b) coordinating the cameras' actions to simultaneously improve the belief over the targets' states and maximize the expected number of targets observed with a guaranteed resolution, and (c) exploiting the inherent structure of our surveillance problem to improve its scalability (i.e., linear time) in the number of targets to be observed. Quantitative comparisons with state-of-the-art multi-camera coordination and control techniques show that our approach can achieve higher surveillance quality in real time. The practical feasibility of our approach is also demonstrated using real AXIS 214 PTZ cameras
As mentioned earlier, existing multi-camera coordination and control techniques have to operate in a fully observable surveillance environment where the locations, directions, and velocities of all the targets can be directly observed or estimated, either by using additional low-resolution static cameras and sensors ( @cite_2 @cite_1 @cite_14 @cite_12 @cite_0 ) or by configuring one or more active cameras to zoom out to their wide view ( @cite_8 @cite_3 @cite_13 @cite_7 ). They use this target information to predict the targets' trajectories in order to schedule, coordinate, and control the network of active cameras to focus on and observe these targets at high resolution.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_0", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "2141442361", "", "2562785760", "", "2005815994", "2111607873", "", "2147741739", "2128453830" ], "abstract": [ "This paper presents a novel decision-theoretic approach to control and coordinate multiple active cameras for observing a number of moving targets in a surveillance system. This approach offers the advantages of being able to (a) account for the stochasticity of targets' motion via probabilistic modeling, and (b) address the trade-off between maximizing the expected number of observed targets and the resolution of the observed targets through stochastic optimization. One of the key issues faced by existing approaches in multi-camera surveillance is that of scalability with increasing number of targets. We show how its scalability can be improved by exploiting the problem structure: as proven analytically, our decision-theoretic approach incurs time that is linear in the number of targets to be observed during surveillance. As demonstrated empirically through simulations, our proposed approach can achieve high-quality surveillance of up to 50 targets in real time and its surveillance performance degrades gracefully with increasing number of targets. We also demonstrate our proposed approach with real AXIS 214 PTZ cameras in maximizing the number of Lego robots observed at high resolution over a surveyed rectangular area. The results are promising and clearly show the feasibility of our decision-theoretic approach in controlling and coordinating the active cameras in real surveillance system.", "", "A system that controls a set of Pan Tilt Zoom (PTZ) cameras for acquiring close-up imagery of subjects in a surveillance site is presented. 
The PTZ control is based on the output of a multi-camera, multi-target tracking system operating on a set of fixed cameras, and the main goal is to acquire imagery of subjects for biometrics purposes such as face recognition, or non-facial person identification. For this purpose, this paper introduces an algorithm to address the generic problem of collaboratively controlling a limited number of PTZ cameras to capture an observed number of subjects in an optimal fashion. Optimality is achieved by maximizing the probability of successfully completing the addressed biometrics task, which is determined by an objective function parameterized on expected capture conditions such as distance at which a subject is imaged, angle of capture and several others. Such an objective function serves to effectively balance the number of captures per subject and quality of captures. Qualitative and quantitative experimental results are provided to demonstrate the performance of the system which operates in real-time under real-world conditions on four PTZ and four static CCTV cameras, all of which are processed and controlled via a single workstation.", "", "In this work we present a consistent probabilistic approach to control multiple, but diverse pan-tilt-zoom cameras concertedly observing a scene. There are disparate goals to this control: the cameras are not only to react to objects moving about, arbitrating conflicting interests of target resolution and trajectory accuracy, they are also to anticipate the appearance of new targets.", "Five scheduling policies that have been developed and implemented to manage the active resources of a centralized active vision system are presented in this paper. These scheduling policies are tasked with making target-to-camera assignments in an attempt to maximize the number of targets that can be imaged with the system's active cameras. 
A comparative simulation-based evaluation has been performed to investigate the performance of the system under different target and system operating parameters for all five scheduling policies. Parameters considered include: target entry conditions, congestion levels, target-to-camera speeds, target trajectories, and number of active cameras. An overall trend in the relative performance of the scheduling algorithms was observed. The Least System Reconfiguration and Future Least System Reconfiguration scheduling policies performed the best for the majority of conditions investigated, while the Load Sharing and First Come First Serve policies performed the poorest. The performance of the Earliest Deadline First policy was highly dependent on target predictability.", "", "This paper deals with the problem of decentralized, cooperative control of a camera network. We focus on applications where events unfold over a large geographic area and need to be analyzed by multiple cameras or other kinds of imaging sensors. There is no central unit accumulating and analyzing all the data. The overall goal is to keep track of all objects (i.e., targets) in the region of deployment of the cameras, while selectively focusing at a high resolution on some particular target features based on application requirements. Efficient usage of resources in such a scenario requires that the cameras be active. However, this control cannot be based on separate analysis of the sensed video in each camera. They must act collaboratively to be able to acquire multiple targets at different resolutions. Our research focuses on developing accurate and efficient target acquisition and camera control algorithms in such scenarios using game theory. 
We show simulated experimental results of the approach.", "We demonstrate a video surveillance system— comprising passive and active pan tilt zoom (PTZ) cameras—that intelligently responds to scene complexity, automatically capturing higher resolution video when there are fewer people in the scene and capturing lower resolution video as the number of pedestrians present in the scene increases. To this end, we have developed behavior based-controllers for passive and active cameras, enabling these cameras to carry out multiple observation tasks simultaneously. The research presented herein is a step towards video surveillance systems—consisting of a heterogeneous set of sensors—that provide persistent coverage of large spaces, while optimizing surveillance data collection by tuning the sensing parameters of individual sensors (in a distributed manner) in response to scene activity." ] }
1209.4275
2546891830
A central problem of surveillance is to monitor multiple targets moving in a large-scale, obstacle-ridden environment with occlusions. This paper presents a novel principled Partially Observable Markov Decision Process-based approach to coordinating and controlling a network of active cameras for tracking and observing multiple mobile targets at high resolution in such surveillance environments. Our proposed approach is capable of (a) maintaining a belief over the targets' states (i.e., locations, directions, and velocities) to track them, even when they may not be observed directly by the cameras at all times, (b) coordinating the cameras' actions to simultaneously improve the belief over the targets' states and maximize the expected number of targets observed with a guaranteed resolution, and (c) exploiting the inherent structure of our surveillance problem to improve its scalability (i.e., linear time) in the number of targets to be observed. Quantitative comparisons with state-of-the-art multi-camera coordination and control techniques show that our approach can achieve higher surveillance quality in real time. The practical feasibility of our approach is also demonstrated using real AXIS 214 PTZ cameras
The major drawbacks of these techniques are: (a) They cannot be deployed in real-world surveillance environments with occlusions, since they cannot observe the targets residing in the occluded regions, which limits the active cameras' full surveillance capability. In contrast, our approach does not assume that all targets can be fully observed at every time instant, and hence models a belief of the targets' states to keep track of them when they are not observed by any of the cameras; (b) Since the resolution of the wide-view static cameras is low, they often produce inaccurate locations of the targets. This in turn induces errors in targets' directions and velocities, which consequently affects the prediction capability of existing surveillance systems. On the other hand, our approach uses only active cameras to observe the targets at high resolution, thus allowing location errors to be kept minimal; and (c) Many existing techniques have serious issues of scalability in the number of targets to be observed. Our approach extends our previous work @cite_14 to achieve scalability in partially observable surveillance environments.
{ "cite_N": [ "@cite_14" ], "mid": [ "2141442361" ], "abstract": [ "This paper presents a novel decision-theoretic approach to control and coordinate multiple active cameras for observing a number of moving targets in a surveillance system. This approach offers the advantages of being able to (a) account for the stochasticity of targets' motion via probabilistic modeling, and (b) address the trade-off between maximizing the expected number of observed targets and the resolution of the observed targets through stochastic optimization. One of the key issues faced by existing approaches in multi-camera surveillance is that of scalability with increasing number of targets. We show how its scalability can be improved by exploiting the problem structure: as proven analytically, our decision-theoretic approach incurs time that is linear in the number of targets to be observed during surveillance. As demonstrated empirically through simulations, our proposed approach can achieve high-quality surveillance of up to 50 targets in real time and its surveillance performance degrades gracefully with increasing number of targets. We also demonstrate our proposed approach with real AXIS 214 PTZ cameras in maximizing the number of Lego robots observed at high resolution over a surveyed rectangular area. The results are promising and clearly show the feasibility of our decision-theoretic approach in controlling and coordinating the active cameras in real surveillance system." ] }
1209.3487
1484323222
We present a framework for a large-scale distributed eScience Artificial Intelligence search. Our approach is generic and can be used for many different problems. Unlike many other approaches, we do not require dedicated machines, homogeneous infrastructure or the ability to communicate between nodes. We give special consideration to the robustness of the framework, minimising the loss of effort even after total loss of infrastructure, and allowing easy verification of every step of the distribution process. In contrast to most eScience applications, the input data and specification of the problem is very small, being easily given in a paragraph of text. The unique challenges our framework tackles are related to the combinatorial explosion of the space that contains the possible solutions and the robustness of long-running computations. Not only is the time required to finish the computations unknown, but also the resource requirements may change during the course of the computation. We demonstrate the applicability of our framework by using it to solve a challenging and hitherto open problem in computational mathematics. The results demonstrate that our approach easily scales to computations of a size that would have been impossible to tackle in practice just a decade ago.
Backtracking search in a distributed setting has also been investigated by several authors @cite_11 @cite_30 . A special variant for distributed scenarios, asynchronous backtracking, was proposed in @cite_23 . The distributed constraint satisfaction problem is formalised, together with algorithms for solving it, in @cite_33 .
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_23", "@cite_11" ], "mid": [ "1592205850", "2160458341", "2157740753", "2123081449" ], "abstract": [ "Many algorithms in operations research and artificial intelligence are based on the backtracking principle, i.e., depth first search in implicitly defined trees. For parallelizing these algorithms, an efficient load balancing scheme is of central importance.", "We develop a formalism called a distributed constraint satisfaction problem (distributed CSP) and algorithms for solving distributed CSPs. A distributed CSP is a constraint satisfaction problem in which variables and constraints are distributed among multiple agents. Various application problems in distributed artificial intelligence can be formalized as distributed CSPs. We present our newly developed technique called asynchronous backtracking that allows agents to act asynchronously and concurrently without any global control, while guaranteeing the completeness of the algorithm. Furthermore, we describe how the asynchronous backtracking algorithm can be modified into a more efficient algorithm called an asynchronous weak-commitment search, which can revise a bad decision without exhaustive search by changing the priority order of agents dynamically. The experimental results on various example problems show that the asynchronous weak-commitment search algorithm is, by far more, efficient than the asynchronous backtracking algorithm and can solve fairly large-scale problems.", "Viewing cooperative distributed problem solving (CDPS) as distributed constraint satisfaction provides a useful formalism for characterizing CDPS techniques. This formalism and algorithms for solving distributed constraint satisfaction problems (DCSPs) are compared. A technique called asynchronous backtracking that allows agents to act asynchronously and concurrently, in contrast to the traditional sequential backtracking techniques used in constraint satisfaction problems, is presented. 
Experimental results show that solving DCSPs in a distributed fashion is worthwhile when the problems solved by individual agents are loosely coupled.", "Analytical models and experimental results concerning the average case behavior of parallel backtracking are presented. Two types of backtrack search algorithms are considered: simple backtracking, which does not use heuristics to order and prune search, and heuristic backtracking, which does. Analytical models are used to compare the average number of nodes visited in sequential and parallel search for each case. For simple backtracking, it is shown that the average speedup obtained is linear when the distribution of solutions is uniform and superlinear when the distribution of solutions is nonuniform. For heuristic backtracking, the average speedup obtained is at least linear, and the speedup obtained on a subset of instances is superlinear. Experimental results for many synthetic and practical problems run on various parallel machines that validate the theoretical analysis are presented." ] }
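The tree-splitting idea behind such distributed backtracking can be illustrated with a toy n-queens search whose top-level branches are turned into independent subproblems. This is an illustrative sketch, not any of the cited algorithms; each task could be shipped to a separate worker with no inter-node communication, matching the framework's no-coordination setting:

```python
def solutions(n, prefix):
    """Count n-queens completions of a partial placement (one queen per row;
    prefix[r] is the column of the queen in row r)."""
    row = len(prefix)
    if row == n:
        return 1
    total = 0
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(prefix)):
            total += solutions(n, prefix + [col])
    return total

def split_search(n, depth=1):
    """Enumerate the consistent partial placements at a fixed depth; each one
    is an independent subtree of the backtracking search tree."""
    tasks = [[]]
    for _ in range(depth):
        tasks = [t + [col] for t in tasks for col in range(n)
                 if all(col != c and abs(col - c) != len(t) - r
                        for r, c in enumerate(t))]
    return tasks

tasks = split_search(6, depth=1)            # 6 independent subtrees
print(sum(solutions(6, t) for t in tasks))  # → 4 (6-queens has 4 solutions)
```

Because subtree sizes are highly uneven, static splitting like this wastes capacity on easy subtrees; that imbalance is precisely what work stealing and the load-balancing schemes discussed in this section address.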
1209.3487
1484323222
We present a framework for a large-scale distributed eScience Artificial Intelligence search. Our approach is generic and can be used for many different problems. Unlike many other approaches, we do not require dedicated machines, homogeneous infrastructure or the ability to communicate between nodes. We give special consideration to the robustness of the framework, minimising the loss of effort even after total loss of infrastructure, and allowing easy verification of every step of the distribution process. In contrast to most eScience applications, the input data and specification of the problem is very small, being easily given in a paragraph of text. The unique challenges our framework tackles are related to the combinatorial explosion of the space that contains the possible solutions and the robustness of long-running computations. Not only is the time required to finish the computations unknown, but also the resource requirements may change during the course of the computation. We demonstrate the applicability of our framework by using it to solve a challenging and hitherto open problem in computational mathematics. The results demonstrate that our approach easily scales to computations of a size that would have been impossible to tackle in practice just a decade ago.
Schulte presents the architecture of a system that uses networked computers @cite_34 . The focus of his approach is to provide a high-level and reusable design for parallel search and achieve a good speedup compared to sequential solving rather than good resource utilisation. More recent papers have explored how to transparently parallelise search without having to modify existing code @cite_37 .
{ "cite_N": [ "@cite_37", "@cite_34" ], "mid": [ "2119327151", "185340237" ], "abstract": [ "The availability of commodity multi-core and multi-processor machines and the inherent parallelism in constraint programming search offer significant opportunities for constraint programming. They also present a fundamental challenge: how to exploit parallelism transparently to speed up constraint programs. This paper shows how to parallelize constraint programs transparently without changes to the code. The main technical idea consists of automatically lifting a sequential exploration strategy into its parallel counterpart, allowing workers to share and steal subproblems. Experimental results show that the parallel implementation may produce significant speedups on multi-core machines.", "Search in constraint programming is a time-consuming task. Search can be sped up by exploring subtrees of a search tree in parallel. This paper presents distributed search engines that achieve parallelism by distribution across networked computers. The main point of the paper is a simple design of the parallel search engine. Simplicity comes as an immediate consequence of clearly separating search, concurrency, and distribution. The obtained distributed search engines are simple yet offer substantial speedup on standard network computers." ] }
1209.3487
1484323222
We present a framework for a large-scale distributed eScience Artificial Intelligence search. Our approach is generic and can be used for many different problems. Unlike many other approaches, we do not require dedicated machines, homogeneous infrastructure or the ability to communicate between nodes. We give special consideration to the robustness of the framework, minimising the loss of effort even after total loss of infrastructure, and allowing easy verification of every step of the distribution process. In contrast to most eScience applications, the input data and specification of the problem is very small, being easily given in a paragraph of text. The unique challenges our framework tackles are related to the combinatorial explosion of the space that contains the possible solutions and the robustness of long-running computations. Not only is the time required to finish the computations unknown, but also the resource requirements may change during the course of the computation. We demonstrate the applicability of our framework by using it to solve a challenging and hitherto open problem in computational mathematics. The results demonstrate that our approach easily scales to computations of a size that would have been impossible to tackle in practice just a decade ago.
Most of the existing work is concerned with the problem of effectively distributing the workload such that every compute node is kept busy. The most prevalent technique used to achieve this is work stealing. The compute nodes communicate with each other and nodes which are idle request a part of the work that a busy node is doing. Blumofe and Leiserson propose and discuss a work stealing scheduler for multithreaded computations in @cite_28 . Rolf and Kuchcinski investigate different algorithms for load balancing and work stealing in the specific context of distributed constraint solving @cite_18 .
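Work stealing itself is easy to sketch. The toy, single-threaded simulation below (an illustration, not the scheduler from the cited papers) captures the key convention: owners take work from the front of their own deque, while idle workers steal from the back of a victim's deque.

```python
import random
from collections import deque

def work_stealing_sum(task_lists, seed=0):
    """Toy single-threaded simulation of work stealing: each worker owns a
    deque of tasks; an idle worker steals one task from the back of a
    randomly chosen busy victim, while owners work from the front."""
    rng = random.Random(seed)
    deques = [deque(tasks) for tasks in task_lists]
    done = 0
    while any(deques):
        for dq in deques:
            if dq:
                done += dq.popleft()          # owner takes from the front
            else:
                victims = [d for d in deques if len(d) > 1]
                if victims:
                    dq.append(rng.choice(victims).pop())  # steal from the back
    return done

# Four workers with a deliberately unbalanced initial partition of tasks
# 1..12: every task is executed exactly once, wherever it ends up.
print(work_stealing_sum([[1, 2, 3, 4, 5, 6, 7, 8], [9], [10, 11], [12]]))  # 78
```

Stealing from the opposite end of the deque is the detail that makes real schedulers communication-efficient: thieves grab the oldest (typically largest) subcomputations, so steals are rare.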
{ "cite_N": [ "@cite_28", "@cite_18" ], "mid": [ "2016559894", "2151562519" ], "abstract": [ "This paper studies the problem of efficiently schedulling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is “work stealing,” in which processors needing work steal computational threads from other processors. In this paper, we give the first provably good work-stealing scheduler for multithreaded computations with dependencies. Specifically, our analysis shows that the expected time to execute a fully strict computation on P processors using our work-stealing scheduler is T 1 P + O ( T ∞ , where T 1 is the minimum serial execution time of the multithreaded computation and ( T ∞ is the minimum execution time with an infinite number of processors. Moreover, the space required by the execution is at most S 1 P , where S 1 is the minimum serial space requirement. We also show that the expected total communication of the algorithm is at most O ( PT ∞ ( 1 + n d ) S max ), where S max is the size of the largest activation record of any thread and n d is the maximum number of times that any thread synchronizes with its parent. This communication bound justifies the folk wisdom that work-stealing schedulers are more communication efficient than their work-sharing counterparts. All three of these bounds are existentially optimal to within a constant factor.", "Program parallelization and distribution becomes increasingly important when new multi-core architectures and cheaper cluster technology provide ways to improve performance. Using declarative languages, such as constraint programming, can make the transition to parallelism easier for the programmer. In this paper, we address parallel and distributed search in constraint programming (CP) by proposing several load-balancing methods. 
We show how these methods improve the execution-time scalability of constraint programs. Scalability is the greatest challenge of parallelism and it is particularly an issue in constraint programming, where load-balancing is difficult. We address this problem by proposing CP-specific load-balancing methods and evaluating them on a cluster by using benchmark problems. Our experimental results show that the methods behave differently well depending on the type of problem and the type of search. This gives the programmer the opportunity to optimize the performance for a particular problem." ] }
1209.3487
1484323222
We present a framework for a large-scale distributed eScience Artificial Intelligence search. Our approach is generic and can be used for many different problems. Unlike many other approaches, we do not require dedicated machines, homogeneous infrastructure or the ability to communicate between nodes. We give special consideration to the robustness of the framework, minimising the loss of effort even after total loss of infrastructure, and allowing easy verification of every step of the distribution process. In contrast to most eScience applications, the input data and specification of the problem is very small, being easily given in a paragraph of text. The unique challenges our framework tackles are related to the combinatorial explosion of the space that contains the possible solutions and the robustness of long-running computations. Not only is the time required to finish the computations unknown, but also the resource requirements may change during the course of the computation. We demonstrate the applicability of our framework by using it to solve a challenging and hitherto open problem in computational mathematics. The results demonstrate that our approach easily scales to computations of a size that would have been impossible to tackle in practice just a decade ago.
The decomposition of constraint problems into subproblems which can be solved independently has been proposed in @cite_7 , albeit in a different context. In this work, we explore the use of this technique for parallelisation. A similar approach was taken in @cite_18 , but it requires parallelisation support in the solver.
{ "cite_N": [ "@cite_18", "@cite_7" ], "mid": [ "2151562519", "2084964364" ], "abstract": [ "Program parallelization and distribution becomes increasingly important when new multi-core architectures and cheaper cluster technology provide ways to improve performance. Using declarative languages, such as constraint programming, can make the transition to parallelism easier for the programmer. In this paper, we address parallel and distributed search in constraint programming (CP) by proposing several load-balancing methods. We show how these methods improve the execution-time scalability of constraint programs. Scalability is the greatest challenge of parallelism and it is particularly an issue in constraint programming, where load-balancing is difficult. We address this problem by proposing CP-specific load-balancing methods and evaluating them on a cluster by using benchmark problems. Our experimental results show that the methods behave differently well depending on the type of problem and the type of search. This gives the programmer the opportunity to optimize the performance for a particular problem.", "Search strategies, that is, strategies that describe how to explore search trees, have raised much interest for constraint satisfaction in recent years. In particular, limited discrepancy search and its variations have been shown to achieve significant improvements in efficiency over depth-first search for some classes of applications.This article reconsiders the implementation of discrepancy search, and of search strategies in general, for applications where the search procedure is dynamic, randomized, and or generates global cuts (or nogoods) that apply to the remaining search. 
It illustrates that recomputation-based implementations of discrepancy search are not robust with respect to these extensions and require special care which may increase the memory requirements significantly and destroy the genericity of the implementation.To remedy these limitations, the article proposes a novel implementation scheme based on problem decomposition, which combines the efficiency of the recomputation-based implementations with the robustness of traditional iterative implementations. Experimental results on job-shop scheduling problems illustrate the potential of this new implementation scheme, which, surprisingly, may significantly outperform recomputation-based schemes." ] }
1209.3432
2949887756
This paper presents a new approach to distributed controller design that exploits a partial-structure representation of linear time invariant systems to characterize the structure of a system. This partial-structure representation, called the dynamical structure function, characterizes the signal structure, or open-loop causal dependencies among manifest variables, capturing a significantly richer notion of structure than the sparsity pattern of the transfer function. The design technique sequentially constructs each link in an arbitrary controller signal structure, and the main result proves that the resulting controller is either stabilizing or no controller with the desired structure can stabilize the system.
In @cite_2 the authors show that if the structure of the transfer function matrices of the plant and the controller meets the quadratic invariance condition, then the problem of synthesizing the optimal controller is convex. In @cite_1 this condition is shown to be both necessary and sufficient for the synthesis problem to be convex. The method requires a decentralized stabilizing controller to initialize the convex optimization problem, so, to complete the process, an algorithm to obtain such a controller is provided in @cite_5 .
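For reference, the quadratic invariance condition invoked here has a compact statement; the following definition and its consequence are the standard Rotkowitz–Lall formulation, restated from memory rather than quoted from the cited papers:

```latex
\textbf{Definition.} A subspace of controllers $S$ is
\emph{quadratically invariant} with respect to a plant $G$ if
\[
  K G K \in S \quad \text{for all } K \in S .
\]
\textbf{Consequence.} Under quadratic invariance, $K \in S$ holds if and
only if the parameter $Q = K (I - G K)^{-1}$ satisfies $Q \in S$, so the
structural constraint becomes convex in $Q$ and minimum-norm controller
synthesis over $S$ can be posed as a convex program.
```

This is why the initialization issue discussed above matters: the convex reformulation presumes a stabilizing structured controller from which to parameterize.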
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_2" ], "mid": [ "2064463660", "", "2165894964" ], "abstract": [ "In this paper we deal with the problem of stabilizing linear, time-invariant plants using feedback control configurations that are subject to sparsity constraints. Recent results show that given a strongly stabilizable plant, the class of all stabilizing controllers that satisfy certain given sparsity constraints admits a convex representation via Zames's Q-parametrization. More precisely, if the pre-specified sparsity constraints imposed on the controller are quadratically invariant with respect to the plant, then such a convex representation is guaranteed to exist. The most useful feature of the aforementioned results is that the sparsity constraints on the controller can be recast as convex constraints on the Q-parameter, which makes this approach suitable for optimal controller design (in the ℋ2 sense) using numerical tools readily available from the classical, centralized optimal ℋ2 synthesis. All these procedures rely crucially on the fact that some stabilizing controller that verifies the imposed sparsity constraints is a priori known, while design procedures for such a controller to initialize the aforementioned optimization schemes are not yet available. This paper provides necessary and sufficient conditions for such a plant to be stabilizable with a controller having the given sparsity pattern. These conditions are formulated in terms of the existence of a doubly coprime factorization of the plant with additional sparsity constraints on certain factors. We show that the computation of such a factorization is equivalent to solving an exact model-matching problem. We also give the parametrization of the set of all decentralized stabilizing controllers by imposing additional constraints on the Youla parameter. 
These constraints are for the Youla parameter to lie in the set of all stable transfer function matrices belonging to a certain linear subspace.", "", "We consider the problem of constructing optimal decentralized controllers. We formulate this problem as one of minimizing the closed-loop norm of a feedback system subject to constraints on the controller structure. We define the notion of quadratic invariance of a constraint set with respect to a system, and show that if the constraint set has this property, then the constrained minimum-norm problem may be solved via convex programming. We also show that quadratic invariance is necessary and sufficient for the constraint set to be preserved under feedback. These results are developed in a very general framework, and are shown to hold in both continuous and discrete time, for both stable and unstable systems, and for any norm. This notion unifies many previous results identifying specific tractable decentralized control problems, and delineates the largest known class of convex problems in decentralized control. As an example, we show that optimal stabilizing controllers may be efficiently computed in the case where distributed controllers can communicate faster than their dynamics propagate. We also show that symmetric synthesis is included in this classification, and provide a test for sparsity constraints to be quadratically invariant, and thus amenable to convex synthesis." ] }
1209.3432
2949887756
This paper presents a new approach to distributed controller design that exploits a partial-structure representation of linear time invariant systems to characterize the structure of a system. This partial-structure representation, called the dynamical structure function, characterizes the signal structure, or open-loop causal dependencies among manifest variables, capturing a significantly richer notion of structure than the sparsity pattern of the transfer function. The design technique sequentially constructs each link in an arbitrary controller signal structure, and the main result proves that the resulting controller is either stabilizing or no controller with the desired structure can stabilize the system.
A different type of distributed controller design has been proposed in @cite_3 . The approach taken in that paper constrains the controller to have the same network structure as the plant, where structure is defined as the constraint on the interconnection of sub-systems, i.e., the subsystem structure. Hence, the plant and the controller can share the same communication network, reducing the implementation cost. An algorithm to synthesize a sub-optimal controller with such structure is also provided.
{ "cite_N": [ "@cite_3" ], "mid": [ "2146378835" ], "abstract": [ "We consider the problem of designing stabilizing distributed output-feedback controllers that achieve H 2 and H ∞ performance objectives for a group of sub-systems dynamically interconnected via an arbitrary directed communication network. For a particular class of discrete-time linear time-invariant interconnected systems that are characterized by a structural property of their state-space matrices, we design stabilizing distributed controllers which can use the available network along with the sub-systems of the interconnected system. This is achieved by means of a parameterization for the output-feedback linear controllers that linearizes the closed-loop H 2 and H ∞ norm conditions and provide equivalent linear matrix inequalities (LMIs). Using these LMIs, we formulate the minimization of H 2 and H ∞ norms as semi-definite programs (SDPs) that can be efficiently solved using well-established techniques and tools. The solutions of these SDPs allow us to synthesize the corresponding controllers that are realizable over the given network. Even though we provide only sufficiency conditions for the design of stabilizing distributed controllers, simulations show that the synthesized controllers we obtain provide good performance in spite of being suboptimal compared to the centralized controller. In essence, we gain the advantage of designing realizable distributed controllers at the expense of slight performance degradation compared to the centralized solutions." ] }
1209.3432
2949887756
This paper presents a new approach to distributed controller design that exploits a partial-structure representation of linear time invariant systems to characterize the structure of a system. This partial-structure representation, called the dynamical structure function, characterizes the signal structure, or open-loop causal dependencies among manifest variables, capturing a significantly richer notion of structure than the sparsity pattern of the transfer function. The design technique sequentially constructs each link in an arbitrary controller signal structure, and the main result proves that the resulting controller is either stabilizing or no controller with the desired structure can stabilize the system.
In @cite_6 , @cite_12 , and related work, sequential design methods have been used to construct decentralized controllers. Although these methods do not produce the optimal controller, they provide an efficient way to synthesize a nominal stabilizing controller with a desired decentralized sparsity pattern in its transfer function. We will use a similar strategy to design a stabilizing controller with constraints on the signal structure in Section . In the event that this process cannot produce a stabilizing controller, we will show that there is no controller of the given signal structure that stabilizes the plant.
{ "cite_N": [ "@cite_12", "@cite_6" ], "mid": [ "1513798530", "2018889945" ], "abstract": [ "Abstract: This paper studies the effect of decentralized feedback on the closed-loop properties of jointly controllable, jointly observablek-channel linear systems. Channel interactions within such systems are described by means of suitably defined directed graphs. The concept of a complete system is introduced. Complete systems prove to be precisely those systems which can be made both controllable and observable through a single channel by applying nondynamic decentralized feedback to all channels. Explicit conditions are derived for determining when the closed-loop spectrum of ak-channel linear system can be freely assigned or stabilized with decentralized control.", "This paper considers designs of decentralized controllers for continuous-time linear time-invariant multivariable control systems. The purpose is to show how to attain robust stability and robust performance of decentralized control systems by sequential design procedure. Compared with independent designs in which each controller block is designed independently of others, sequential designs can reduce conservatism associated with the controller design since they can exploit information about the other loops which have been already designed. A novel method of sequential designs for the robust performance are proposed. Moreover, an approach to a kind of failure tolerance is considered. The proposed sequential design procedure is based on expanding construction of decentralized control systems, so that the proposed methods can deal with extension of subsystems." ] }
1209.3026
2951554315
Social media content has grown exponentially in the recent years and the role of social media has evolved from just narrating life events to actually shaping them. In this paper we explore how many resources shared in social media are still available on the live web or in public web archives. By analyzing six different event-centric datasets of resources shared in social media in the period from June 2009 to March 2012, we found about 11% lost and 20% archived after just a year and an average of 27% lost and 41% archived after two and a half years. Furthermore, we found a nearly linear relationship between time of sharing of the resource and the percentage lost, with a slightly less linear relationship between time of sharing and archiving coverage of the resource. From this model we conclude that after the first year of publishing, nearly 11% of shared resources will be lost and after that we will continue to lose 0.02% per day.
To our knowledge, no prior study has analyzed how many of the resources shared in social media are lost through time. There have been many studies analyzing the behavior of users within a social network, how they interact, and what content they share @cite_1 @cite_12 @cite_13 @cite_16 . As for Twitter, @cite_11 studied its nature and its topological characteristics and found a deviation from known characteristics of human social networks that were analyzed by Newman and Park @cite_17 . Lee analyzed the reasons behind sharing news in social media and found that informativeness was the strongest motivation in predicting news sharing intention, followed by socializing and status seeking @cite_3 . Also, shared content in social media like Twitter moves and diffuses relatively fast, as stated by @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_17", "@cite_3", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "", "2117410972", "", "", "2047443612", "2093842354", "2112896229", "2101196063" ], "abstract": [ "", "Understanding how users behave when they connect to social networking sites creates opportunities for better interface design, richer studies of social interactions, and improved design of content distribution systems. In this paper, we present a first of a kind analysis of user workloads in online social networks. Our study is based on detailed clickstream data, collected over a 12-day period, summarizing HTTP sessions of 37,024 users who accessed four popular social networks: Orkut, MySpace, Hi5, and LinkedIn. The data were collected from a social network aggregator website in Brazil, which enables users to connect to multiple social networks with a single authentication. Our analysis of the clickstream data reveals key features of the social network workloads, such as how frequently people connect to social networks and for how long, as well as the types and sequences of activities that users conduct on these sites. Additionally, we crawled the social network topology of Orkut, so that we could analyze user interaction data in light of the social graph. Our data analysis suggests insights into how users interact with friends in Orkut, such as how frequently users visit their friends' or non-immediate friends' pages. In summary, our analysis demonstrates the power of using clickstream data in identifying patterns in social network workloads and social interactions. Our analysis shows that browsing, which cannot be inferred from crawling publicly available data, accounts for 92 of all user activities. 
Consequently, compared to using only crawled data, considering silent interactions like browsing friends' pages increases the measured level of interaction among users.", "", "", "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone. This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). 
The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.", "Micro-blogs, a relatively new phenomenon, provide a new communication channel for people to broadcast information that they likely would not share otherwise using existing channels (e.g., email, phone, IM, or weblogs). Micro-blogging has become popu-lar quite quickly, raising its potential for serving as a new informal communication medium at work, providing a variety of impacts on collaborative work (e.g., enhancing information sharing, building common ground, and sustaining a feeling of connectedness among colleagues). This exploratory research project is aimed at gaining an in-depth understanding of how and why people use Twitter - a popular micro-blogging tool - and exploring micro-blog's poten-tial impacts on informal communication at work.", "We study several longstanding questions in media communications research, in the context of the microblogging service Twitter, regarding the production, flow, and consumption of information. To do so, we exploit a recently introduced feature of Twitter known as \"lists\" to distinguish between elite users - by which we mean celebrities, bloggers, and representatives of media outlets and other formal organizations - and ordinary users. Based on this classification, we find a striking concentration of attention on Twitter, in that roughly 50 of URLs consumed are generated by just 20K elite users, where the media produces the most information, but celebrities are the most followed. We also find significant homophily within categories: celebrities listen to celebrities, while bloggers listen to bloggers etc; however, bloggers in general rebroadcast more information than the other categories. Next we re-examine the classical \"two-step flow\" theory of communications, finding considerable support for it on Twitter. 
Third, we find that URLs broadcast by different categories of users or containing different types of content exhibit systematically different lifespans. And finally, we examine the attention paid by the different user categories to different news topics.", "Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. 
To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it." ] }
1209.3026
2951554315
Social media content has grown exponentially in the recent years and the role of social media has evolved from just narrating life events to actually shaping them. In this paper we explore how many resources shared in social media are still available on the live web or in public web archives. By analyzing six different event-centric datasets of resources shared in social media in the period from June 2009 to March 2012, we found about 11% lost and 20% archived after just a year and an average of 27% lost and 41% archived after two and a half years. Furthermore, we found a nearly linear relationship between time of sharing of the resource and the percentage lost, with a slightly less linear relationship between time of sharing and archiving coverage of the resource. From this model we conclude that after the first year of publishing, nearly 11% of shared resources will be lost and after that we will continue to lose 0.02% per day.
Furthermore, many concerns have been raised about the persistence of shared resources and web content in general. Nelson and Allen studied the persistence of objects in a digital library and found that, after just over a year, 3% of the sample had been lost. A study of web resources referenced from papers in scholarly repositories, using Memento, found that 28% of the referenced resources had been lost @cite_2 . Another study measured how much of the web is archived and found that coverage ranges upward from 16% depending on the starting seed URIs. A further study examined the factors affecting reconstructing websites (using caches and archives) and found that PageRank, Age, and the number of hops from the top-level of the site were most influential.
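The loss model implied by these measurements (about 11% gone after the first year, then roughly 0.02 percentage points per day) can be written down directly. The constants below come from the abstract; the linear interpolation before the one-year mark is my own simplifying assumption.

```python
def estimated_percent_lost(days_since_shared):
    """Linear loss model from the paper's abstract: ~11% of shared
    resources lost after the first year, then ~0.02 percentage points
    of additional loss per day."""
    if days_since_shared <= 365:
        # Assumed: interpolate from 0% at sharing time to 11% at one year.
        return 11.0 * days_since_shared / 365.0
    return 11.0 + 0.02 * (days_since_shared - 365)

print(round(estimated_percent_lost(365), 1))  # 11.0
print(round(estimated_percent_lost(913), 1))  # ~2.5 years: 22.0
```

Note the model slightly underestimates the measured 27% loss at two and a half years, which is consistent with the abstract's "nearly linear" hedging.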
{ "cite_N": [ "@cite_2" ], "mid": [ "2951582856" ], "abstract": [ "In this paper we present the results of a study into the persistence and availability of web resources referenced from papers in scholarly repositories. Two repositories with different characteristics, arXiv and the UNT digital library, are studied to determine if the nature of the repository, or of its content, has a bearing on the availability of the web resources cited by that content. Memento makes it possible to automate discovery of archived resources and to consider the time between the publication of the research and the archiving of the referenced URLs. This automation allows us to process more than 160000 URLs, the largest known such study, and the repository metadata allows consideration of the results by discipline. The results are startling: 45 (66096) of the URLs referenced from arXiv still exist, but are not preserved for future generations, and 28 of resources referenced by UNT papers have been lost. Moving forwards, we provide some initial recommendations, including that repositories should publish URL lists extracted from papers that could be used as seeds for web archiving systems." ] }
1209.3312
2953371750
The fields of compressed sensing (CS) and matrix completion have shown that high-dimensional signals with sparse or low-rank structure can be effectively projected into a low-dimensional space (for efficient acquisition or processing) when the projection operator achieves a stable embedding of the data by satisfying the Restricted Isometry Property (RIP). It has also been shown that such stable embeddings can be achieved for general Riemannian submanifolds when random orthoprojectors are used for dimensionality reduction. Due to computational costs and system constraints, the CS community has recently explored the RIP for structured random matrices (e.g., random convolutions, localized measurements, deterministic constructions). The main contribution of this paper is to show that any matrix satisfying the RIP (i.e., providing a stable embedding for sparse signals) can be used to construct a stable embedding for manifold-modeled signals by randomizing the column signs and paying reasonable additional factors in the number of measurements. We demonstrate this result with several new constructions for stable manifold embeddings using structured matrices. This result allows advances in efficient projection schemes for sparse signals to be immediately applied to manifold signal models.
In this work, we adopt the same general proof approach but replace the JL lemma for random orthoprojectors with a JL lemma for operators satisfying the RIP. The following theorem, adapted from @cite_21 , expresses this JL lemma: In words, any operator satisfying the RIP can be used to approximately preserve the norms of any orthogonal transform of the signals in a given finite point cloud when the signs of the columns of the operator are randomly chosen. We remark that if the finite point cloud @math is the set of all differences between points in another finite set @math , then a matrix @math satisfying the RIP of order @math (and conditioning @math ) in Theorem can provide a stable embedding of @math with high probability when the column signs of @math are randomized.
{ "cite_N": [ "@cite_21" ], "mid": [ "2963262327" ], "abstract": [ "Consider an @math matrix @math with the restricted isometry property of order k and level @math ; that is, the norm of any k-sparse vector in @math is preserved to within a multiplicative factor of @math under application of @math . We show that by randomizing the column signs of such a matrix @math , the resulting map with high probability embeds any fixed set of @math points in @math into @math without distorting the norm of any point in the set by more than a factor of @math . Consequently, matrices with the restricted isometry property and with randomized column signs provide optimal Johnson–Lindenstrauss embeddings up to logarithmic factors in N. In particular, our results improve the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound @math given by Ailon and Liberty to $m..." ] }
1209.2185
2949774333
We present a fast algorithm for approximate Canonical Correlation Analysis (CCA). Given a pair of tall-and-thin matrices, the proposed algorithm first employs a randomized dimensionality reduction transform to reduce the size of the input matrices, and then applies any CCA algorithm to the new pair of matrices. The algorithm computes an approximate CCA to the original pair of matrices with provable guarantees, while requiring asymptotically fewer operations than the state-of-the-art exact algorithms.
Dimensionality reduction has been the driving force behind many recent algorithms for accelerating key machine learning and linear algebraic tasks. A representative example is linear regression, i.e., solve the least squares problem @math , where @math . If @math , then one can use the SRHT to reduce the dimension of @math and @math , to form @math and @math , and then solve the small problem @math . This process will return an approximate solution to the original problem @cite_27 @cite_30 @cite_18 . Alternatively, one can observe that @math and @math are spectrally close, so @math is an effective preconditioner for @math @cite_12 @cite_25 . Other problems that can be accelerated using dimensionality reduction include: (i) approximate PCA (via low-rank matrix approximation) @cite_13 ; (ii) matrix multiplication @cite_27 ; (iii) K-means clustering @cite_8 ; (iv) approximation of matrix coherence and statistical leverage @cite_22 ; to name only a few.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_22", "@cite_8", "@cite_27", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2120565458", "", "2129566246", "2952682616", "2045390367", "2117756735", "", "" ], "abstract": [ "Abstract Constrained least-squares regression problems, such as the Nonnegative Least Squares (NNLS) problem, where the variables are restricted to take only nonnegative values, often arise in applications. Motivated by the recent development of the fast Johnson–Lindestrauss transform, we present a fast random projection type approximation algorithm for the NNLS problem. Our algorithm employs a randomized Hadamard transform to construct a much smaller NNLS problem and solves this smaller problem using a standard NNLS solver. We prove that our approach finds a nonnegative solution vector that, with high probability, is close to the optimum nonnegative solution in a relative error approximation sense. We experimentally evaluate our approach on a large collection of term-document data and verify that it does offer considerable speedups without a significant loss in accuracy. Our analysis is based on a novel random projection type result that might be of independent interest. In particular, given a tall and thin matrix Φ ∈ R n × d ( n ≫ d ) and a vector y ∈ R d , we prove that the Euclidean length of Φ y can be estimated very accurately by the Euclidean length of Φ ∼ y , where Φ ∼ consists of a small subset of (appropriately rescaled) rows of Φ .", "", "The statistical leverage scores of a data matrix are the squared row-norms of any matrix whose columns are obtained by orthogonalizing the columns of the data matrix; and, the coherence is the largest leverage score. These quantities play an important role in several machine learning algorithms because they capture the key structural nonuniformity of the data matrix that must be dealt with in developing efficient randomized algorithms. 
Our main result is a randomized algorithm that takes as input an arbitrary n × d matrix A, with n ≫ d, and returns, as output, relative-error approximations to all n of the statistical leverage scores. The proposed algorithm runs in O(nd log n) time, as opposed to the O(nd2) time required by the naive algorithm that involves computing an orthogonal basis for the range of A. This resolves an open question from (, 2006) and (Mohri & Talwalkar, 2011); and our result leads to immediate improvements in coreset-based l2-regression, the estimation of the coherence of a matrix, and several related low-rank matrix problems. Interestingly, to achieve our result we judiciously apply random projections on both sides of A.", "This paper discusses the topic of dimensionality reduction for @math -means clustering. We prove that any set of @math points in @math dimensions (rows in a matrix @math ) can be projected into @math dimensions, for any @math , in @math time, such that with constant probability the optimal @math -partition of the point set is preserved within a factor of @math . The projection is done by post-multiplying @math with a @math random matrix @math having entries @math or @math with equal probability. A numerical implementation of our technique and experiments on a large face images dataset verify the speed and the accuracy of our theoretical results.", "Recently several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear ( 2) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows. 
--Independent of the recent results of Har-Peled and of Deshpande and Vempala, one of the first -- and to the best of our knowledge the most efficient -- relative error (1 + ) A - A_k _F approximation algorithms for the singular value decomposition of an m ? n matrix A with M non-zero entries that requires 2 passes over the data and runs in time O ( ( M( k + k k) + (n + m)( k + k k)^2 ) 1 ) --The first o(nd^ 2 ) time (1 + ) relative error approximation algorithm for n ? d linear ( ) regression. --A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool.", "Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the @math dominant components of the singular value decomposition of an @math matrix. 
(i) For a dense input matrix, randomized algorithms require @math floating-point operations (flops) in contrast to @math for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to @math passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.", "", "" ] }