1407.8368
2012927050
Opportunistic routing is being investigated to enable the proliferation of low-cost wireless applications. A recent trend is looking at social structures, inferred from the social nature of human mobility, to bring messages close to a destination. To have a better picture of social structures, social-based opportunistic routing solutions should consider the dynamism of users' behavior resulting from their daily routines. We address this challenge by presenting dLife, a routing algorithm able to capture the dynamics of the network, represented by time-evolving social ties between pairs of nodes. Experimental results based on synthetic mobility models and real human traces show that dLife achieves better delivery probability, lower latency, and lower cost than proposals based on social structures.
On the other hand, prior art also shows that users have routines that can be used to predict future behavior @cite_6 . It has been shown that mapping real social interactions to a clean (i.e., more stable) connectivity representation is rather useful for improving delivery @cite_5 . With dLife, users' daily routines are considered to quantify the time-evolving strength of social interactions, and thus to foresee future social contacts more accurately than with proximity graphs inferred directly from inter-contact times.
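The time-evolving social strength described above can be sketched as follows. This is a minimal illustrative model, not dLife's exact formulation: a day is split into fixed time slots, and the contact duration observed with a peer in a slot is blended with the history of that same slot on previous days. The slot count and smoothing factor are assumptions introduced for illustration.

```python
from collections import defaultdict

class SocialStrength:
    """Illustrative sketch of a time-evolving social weight per daily
    time slot. Recent contact durations in a slot count more than old
    ones, so the weight tracks the user's current routine."""

    def __init__(self, slots_per_day=24, alpha=0.3):
        self.alpha = alpha  # weight given to the newest observation (assumed value)
        # per-peer list of weights, one per daily time slot
        self.w = defaultdict(lambda: [0.0] * slots_per_day)

    def update(self, peer, slot, contact_seconds):
        # exponential moving average: blend today's contact duration
        # in this slot with the history of the same slot
        old = self.w[peer][slot]
        self.w[peer][slot] = self.alpha * contact_seconds + (1 - self.alpha) * old

    def strength(self, peer, slot):
        return self.w[peer][slot]
```

A forwarding decision could then compare `strength(candidate, current_slot)` values for the destination across encounter candidates; the actual dLife utility functions are more elaborate.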
{ "cite_N": [ "@cite_5", "@cite_6" ], "mid": [ "2044184930", "2171634212" ], "abstract": [ "Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are generally made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict future contact opportunities. The use of complex network analysis has been recently suggested to perform this prediction task and improve the performance of DTN routing. Contacts seen in the past are aggregated to a social graph, and a variety of metrics (e.g., centrality and similarity) or algorithms (e.g., community detection) have been proposed to assess the utility of a node to deliver a content or bring it closer to the destination. In this paper, we argue that it is not so much the choice or sophistication of social metrics and algorithms that bears the most weight on performance, but rather the mapping from the mobility process generating contacts to the aggregated social graph. We first study two well-known DTN routing algorithms - SimBet and BubbleRap - that rely on such complex network analysis, and show that their performance heavily depends on how the mapping (contact aggregation) is performed. What is more, for a range of synthetic mobility models and real traces, we show that improved performances (up to a factor of 4 in terms of delivery ratio) are consistently achieved for a relatively narrow range of aggregation levels only, where the aggregated graph most closely reflects the underlying mobility structure. 
To this end, we propose an online algorithm that uses concepts from unsupervised learning and spectral graph theory to infer this 'correct' graph structure; this algorithm allows each node to locally identify and adjust to the optimal operating point, and achieves good performance in all scenarios considered.", "Longitudinal behavioral data generally contains a significant amount of structure. In this work, we identify the structure inherent in daily behavior with models that can accurately analyze, predict, and cluster multimodal data from individuals and communities within the social network of a population. We represent this behavioral structure by the principal components of the complete behavioral dataset, a set of characteristic vectors we have termed eigenbehaviors. In our model, an individual's behavior over a specific day can be approximated by a weighted sum of his or her primary eigenbehaviors. When these weights are calculated halfway through a day, they can be used to predict the day's remaining behaviors with 79% accuracy for our test subjects. Additionally, we demonstrate the potential for this dimensionality reduction technique to infer community affiliations within the subjects' social network by clustering individuals into a "behavior space" spanned by a set of their aggregate eigenbehaviors. These behavior spaces make it possible to determine the behavioral similarity between both individuals and groups, enabling 96% classification accuracy of community affiliations within the population-level social network. Additionally, the distance between individuals in the behavior space can be used as an estimate for relational ties such as friendship, suggesting strong behavioral homophily amongst the subjects. This approach capitalizes on the large amount of rich data previously captured during the Reality Mining study from mobile phones continuously logging location, proximate phones, and communication of 100 subjects at MIT over the course of 9 months. 
As wearable sensors continue to generate these types of rich, longitudinal datasets, dimensionality reduction techniques such as eigenbehaviors will play an increasingly important role in behavioral research." ] }
1407.8147
2398088337
Drosophila has been established as a model organism for investigating the fundamental principles of developmental gene interactions. The gene expression patterns of Drosophila can be documented as digital images, which are annotated with anatomical ontology terms to facilitate pattern discovery and comparison. The automated annotation of gene expression pattern images has received increasing attention due to the recent expansion of the image database. The effectiveness of gene expression pattern annotation relies on the quality of feature representation. Previous studies have demonstrated that sparse coding is effective for extracting features from gene expression images. However, solving sparse coding remains a computationally challenging problem, especially when dealing with large-scale data sets and learning large dictionaries. In this paper, we propose a novel algorithm to solve the sparse coding problem, called Stochastic Coordinate Coding (SCC). The proposed algorithm alternately updates the sparse codes via just a few steps of coordinate descent and updates the dictionary via second-order stochastic gradient descent. The computational cost is further reduced by focusing only on the non-zero components of the sparse codes and the corresponding columns of the dictionary in the updating procedure. Thus, the proposed algorithm significantly improves the efficiency and the scalability, making sparse coding applicable to large-scale data sets and large dictionary sizes. Our experiments on Drosophila gene expression data sets demonstrate the efficiency and the effectiveness of the proposed algorithm.
We summarize the optimization methods in the following. First we initialize the dictionary @math . Many dictionary initialization methods have been proposed, such as random weights @cite_1 , random patches, and k-means. A detailed comparison of the performance of these initialization methods is given in @cite_7 . With the initial dictionary, conventional sparse coding algorithms include the following main steps: Get an image patch @math . Calculate the sparse code @math by using LARS, FISTA, or coordinate descent. Update the dictionary @math by performing stochastic gradient descent. Go to step 1 and iterate. We call each cycle, i.e., one pass in which each image patch has been trained once, an epoch. Usually, several epochs are required to obtain a satisfactory result. When the number of image patches and the dictionary size are large, step 2 and step 3 are still very slow. We propose a novel algorithm to improve both of these parts, which is presented in the next section.
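The conventional loop above can be sketched in Python. This is an illustrative baseline, not the SCC algorithm itself: it assumes unit-norm dictionary columns, a random-weight initialization, plain soft-thresholding coordinate descent for step 2, and a first-order SGD dictionary update for step 3; all hyperparameter values are placeholders.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D, x, lam, n_iters=20):
    """Step 2: a few sweeps of coordinate descent on the lasso problem
    min_z 0.5*||x - D z||^2 + lam*||z||_1 (assumes unit-norm columns of D)."""
    k = D.shape[1]
    z = np.zeros(k)
    residual = x - D @ z
    for _ in range(n_iters):
        for j in range(k):
            # remove coordinate j's contribution, update it, add it back
            residual += D[:, j] * z[j]
            z[j] = soft_threshold(D[:, j] @ residual, lam)
            residual -= D[:, j] * z[j]
    return z

def train_dictionary(patches, k=32, lam=0.1, lr=0.01, epochs=2, seed=0):
    rng = np.random.default_rng(seed)
    d = patches.shape[1]
    D = rng.standard_normal((d, k))           # random-weight initialization
    D /= np.linalg.norm(D, axis=0)
    for _ in range(epochs):                   # one epoch = every patch seen once
        for x in patches:                     # step 1: get an image patch
            z = sparse_code(D, x, lam)        # step 2: compute the sparse code
            nz = np.flatnonzero(z)            # touch only non-zero coordinates
            if nz.size == 0:
                continue
            residual = x - D[:, nz] @ z[nz]
            D[:, nz] += lr * np.outer(residual, z[nz])   # step 3: SGD update
            D[:, nz] /= np.linalg.norm(D[:, nz], axis=0)  # re-normalize columns
    return D
```

Restricting the dictionary update to the columns indexed by `nz` mirrors the cost-saving idea the paragraph describes: only the non-zero components of the sparse code and the corresponding dictionary columns are touched.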
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2546302380", "2184852195" ], "abstract": [ "In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: 1. How do the non-linearities that follow the filter banks influence the recognition accuracy? 2. Does learning the filter banks in an unsupervised or supervised manner improve the performance over random filters or hardwired filters? 3. Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield almost 63% recognition rate on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement, the system achieves state-of-the-art performance on the NORB dataset (5.6%) and unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (> 65%), and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53%).", "While vector quantization (VQ) has been applied widely to generate features for visual recognition problems, much recent work has focused on more powerful methods. In particular, sparse coding has emerged as a strong alternative to traditional VQ approaches and has been shown to achieve consistently higher performance on benchmark datasets. 
Both approaches can be split into a training phase, where the system learns a dictionary of basis functions, and an encoding phase, where the dictionary is used to extract features from new inputs. In this work, we investigate the reasons for the success of sparse coding over VQ by decoupling these phases, allowing us to separate out the contributions of training and encoding in a controlled way. Through extensive experiments on CIFAR, NORB and Caltech 101 datasets, we compare several training and encoding schemes, including sparse coding and a form of VQ with a soft threshold activation function. Our results show not only that we can use fast VQ algorithms for training, but that we can just as well use randomly chosen exemplars from the training set. Rather than spend resources on training, we find it is more important to choose a good encoder—which can often be a simple feed forward non-linearity. Our results include state-of-the-art performance on both CIFAR and NORB." ] }
1407.7844
2952823586
While smartphone usage becomes more and more pervasive, people also start asking to what extent such devices can be maliciously exploited as "tracking devices". The concern is not only related to an adversary taking physical or remote control of the device (e.g., via a malicious app), but also to what a passive adversary (without the above capabilities) can observe from the device's communications. Work in this latter direction aimed, for example, at inferring the apps a user has installed on his device, or identifying the presence of a specific user within a network. In this paper, we move a step forward: we investigate to what extent it is feasible to identify the specific actions that a user is performing on his mobile device, by simply eavesdropping on the device's network traffic. In particular, we aim at identifying actions like browsing someone's profile on a social network, posting a message on a friend's wall, or sending an email. We design a system that achieves this goal starting from encrypted TCP/IP packets: it works through the identification of network flows and the application of machine learning techniques. We provide a complete implementation of this system and ran a thorough set of experiments, which show that it can achieve accuracy and precision higher than 95% for most of the considered actions.
In the literature, several works proposed to track user activities on the web by analyzing unencrypted HTTP requests and responses @cite_26 @cite_38 @cite_14 . With this analysis it was possible to understand user actions and infer interests and habits. However, in recent years, websites and social networks have started to use the SSL/TLS encryption protocol, both for web and mobile services. This means that communications between endpoints are encrypted and this type of analysis can no longer be performed.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_26" ], "mid": [ "2056278891", "2038817904", "2067257553" ], "abstract": [ "Online Social Networks (OSNs) have already attracted more than half a billion users. However, our understanding of which OSN features attract and keep the attention of these users is poor. Studies thus far have relied on surveys or interviews of OSN users or focused on static properties, e. g., the friendship graph, gathered via sampled crawls. In this paper, we study how users actually interact with OSNs by extracting clickstreams from passively monitored network traffic. Our characterization of user interactions within the OSN for four different OSNs (Facebook, LinkedIn, Hi5, and StudiVZ) focuses on feature popularity, session characteristics, and the dynamics within OSN sessions. We find, for example, that users commonly spend more than half an hour interacting with the OSNs while the byte contributions per OSN session are relatively small.", "Understanding how users navigate and interact when they connect to social networking sites creates opportunities for better interface design, richer studies of social interactions, and improved design of content distribution systems. In this paper, we present an in-depth analysis of user workloads in online social networks. This study is based on detailed clickstream data, collected over a 12-day period, summarizing HTTP sessions of 37,024 users who accessed four popular social networks: Orkut, MySpace, Hi5, and LinkedIn. The data were collected from a social network aggregator website in Brazil, which enables users to connect to multiple social networks with a single authentication. Our analysis of the clickstream data reveals key features of the social network workloads, such as how frequently people connect to social networks and for how long, as well as the types and sequences of activities that users conduct on these sites. 
Additionally, we gather the social network topology of Orkut, so that we could analyze user interaction data in light of the social graph. Our data analysis suggests insights into how users interact with friends in Orkut, such as how frequently users visit their friends' and non-immediate friends' pages. Results show that browsing, which cannot be inferred from crawling publicly available data, accounts for 92% of all user activities. Consequently, compared to using only crawled data, silent interactions like browsing friends' pages increase the measured level of interaction among users. Additionally, we find that friends requesting content are often within close geographical proximity of the uploader. We also discuss a series of implications of our findings for efficient system and interface design as well as for advertisement placement in online social networks.", "In this paper, we investigate how detailed tracking of user interaction can be monitored using standard web technologies. Our motivation is to enable implicit interaction and to ease usability evaluation of web applications outside the lab. To obtain meaningful statements on how users interact with a web application, the collected information needs to be more detailed and fine-grained than that provided by classical log files. We focus on tasks such as classifying the user with regard to computer usage proficiency or making a detailed assessment of how long it took users to fill in fields of a form. Additionally, it is important in the context of our work that usage tracking should not alter the user's experience and that it should work with existing server and browser setups. We present an implementation for detailed tracking of user actions on web pages. An HTTP proxy modifies HTML pages by adding JavaScript code before delivering them to the client. This JavaScript tracking code collects data about mouse movements, keyboard input and more. 
We demonstrate the usefulness of our approach in a case study." ] }
1407.7844
2952823586
Unfortunately, none of the aforementioned works was designed for (or could easily be extended to) mobile devices. In fact, all of them focus on web-page identification in desktop environments (in particular, in desktop browsers), where the generated HTTP traffic strictly depends on how web pages are designed. Conversely, mobile users mostly access content through the apps installed on their devices @cite_0 . These apps communicate with a service provider (e.g., Facebook) through a set of APIs. An example of such differences between desktop web browsers and mobile apps is the validation of SSL certificates @cite_41 @cite_15 .
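The flow-identification-plus-machine-learning pipeline described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's system: it uses only side-channel features of an encrypted flow (packet sizes and directions), with made-up feature names and a deliberately simple nearest-centroid classifier standing in for the paper's machine learning techniques.

```python
import numpy as np

def flow_features(pkt_sizes):
    """Side-channel features of one flow: signed packet sizes
    (positive = outgoing, negative = incoming). The feature set
    (sums, means, std-devs, counts per direction) is illustrative."""
    a = np.asarray(pkt_sizes, dtype=float)
    outgoing, incoming = a[a > 0], -a[a < 0]

    def stats(v):
        return [v.sum(), v.mean(), v.std(), len(v)] if len(v) else [0, 0, 0, 0]

    return np.array(stats(outgoing) + stats(incoming))

class NearestCentroid:
    """Toy classifier: label a flow by the closest class centroid."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, t in zip(X, y) if t == c], axis=0)
             for c in self.labels_])
        return self

    def predict(self, X):
        d = np.linalg.norm(np.asarray(X)[:, None, :] - self.centroids_, axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]
```

The intuition the sketch captures: a "post a message" action produces mostly large outgoing packets, while "browse a profile" produces mostly large incoming ones, so flows remain distinguishable even though their payloads are encrypted.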
{ "cite_N": [ "@cite_0", "@cite_41", "@cite_15" ], "mid": [ "2132187255", "2145994642", "2951313522" ], "abstract": [ "The current architecture supporting data services to mobile devices is built below the network layer (IP) and users receive the payload at the application layer. Between them is the transport layer that can cause data consumption inflation due to the retransmission mechanism that provides reliable delivery. In this paper, we examine the accounting policies of five large cellular ISPs in the U.S. and South Korea. We look at their policies regarding the transport layer reliability mechanism with TCP's retransmission and show that the current implementation of accounting policies either fails to meet the billing fairness or is vulnerable to charge evasions. Three of the ISPs surveyed charge for all IP packets regardless of retransmission, allowing attackers to inflate a victim's bill by intentionally retransmitting packets. The other two ISPs deduct the retransmitted amount from the user's bill thus allowing tunneling through TCP retransmissions. We show that a \"free-riding\" attack is viable with these ISPs and discuss some of the mitigation techniques.", "SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. 
Vulnerable software includes Amazon's EC2 Java library and all cloud clients based on it; Amazon's and PayPal's merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations.", "Recent studies have shown that a significant number of mobile applications, often handling sensitive data such as bank accounts and login credentials, suffers from SSL vulnerabilities. Most of the time, these vulnerabilities are due to improper use of the SSL protocol (in particular, in its phase), resulting in applications exposed to man-in-the-middle attacks. In this paper, we present MITHYS, a system able to: (i) detect applications vulnerable to man-in-the-middle attacks, and (ii) protect them against these attacks. We demonstrate the feasibility of our proposal by means of a prototype implementation in Android, named MITHYSApp. A thorough set of experiments assesses the validity of our solution in detecting and protecting mobile applications from man-in-the-middle attacks, without introducing significant overheads. Finally, MITHYSApp does not require any special permissions nor OS modifications, as it operates at the application level. 
These features make MITHYSApp immediately deployable on a large user base." ] }
1407.7072
35416430
Comments on a product or a news article are growing rapidly and have become a medium for measuring the quality of products or services. Consequently, spammers have emerged in this area to bias comments in their favor. In this paper, we propose an efficient spammer detection method using the structural rank of author-specific term-document matrices. The use of structural rank was found to be effective and far faster than similar methods.
Our work is close to social-media spam detection, as such works usually deal with short documents by a large number of authors. These approaches differ slightly from traditional spam detection, which focuses on emails or websites. @cite_13 is a good survey of dealing with spam in social media.
{ "cite_N": [ "@cite_13" ], "mid": [ "2127124926" ], "abstract": [ "In recent years, social Web sites have become important components of the Web. With their success, however, has come a growing influx of spam. If left unchecked, spam threatens to undermine resource sharing, interactivity, and openness. This article surveys three categories of potential countermeasures - those based on detection, demotion, and prevention. Although many of these countermeasures have been proposed before for email and Web spam, the authors find that their applicability to social Web sites differs." ] }
1407.7072
35416430
Various content-based features were found effective for detecting spam or spammers. @cite_11 used language models to detect spam in blog posts. Bag-of-anchors and bag-of-URLs features were used in @cite_6 . @cite_2 used folksonomies, i.e., tags co-occurring among network neighbors, to detect spammers. @cite_8 @cite_12 computed the average all-pair cosine similarity of one specific author's texts, in combination with other features.
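The average all-pair cosine similarity feature mentioned last can be sketched as follows. This is an illustrative implementation using plain bag-of-words counts; the cited works combine such a score with many other features, and the vectorization details here are assumptions.

```python
import numpy as np
from itertools import combinations

def avg_pairwise_cosine(docs):
    """Average cosine similarity over all pairs of one author's
    comments, on bag-of-words count vectors. A value near 1.0 flags
    an author who posts near-duplicate (spam-like) comments."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}
    V = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w in d.lower().split():
            V[r, idx[w]] += 1
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize rows
    sims = [V[i] @ V[j] for i, j in combinations(range(len(docs)), 2)]
    return float(np.mean(sims))
```

A detector would compute this score per author and treat unusually high values as one signal of spamming behavior.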
{ "cite_N": [ "@cite_8", "@cite_6", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "2159359879", "106005634", "2139143639", "2189187207", "2401383085" ], "abstract": [ "This paper aims to detect users generating spam reviews or review spammers. We identify several characteristic behaviors of review spammers and model these behaviors so as to detect the spammers. In particular, we seek to model the following behaviors. First, spammers may target specific products or product groups in order to maximize their impact. Second, they tend to deviate from the other reviewers in their ratings of products. We propose scoring methods to measure the degree of spam for each reviewer and apply them on an Amazon review dataset. We then select a subset of highly suspicious reviewers for further scrutiny by our user evaluators with the help of a web based spammer evaluation software specially developed for user evaluation experiments. Our results show that our proposed ranking and supervised methods are effective in discovering spammers and outperform other baseline method based on helpfulness votes alone. We finally show that the detected spammers have more significant impact on ratings compared with the unhelpful reviewers.", "Weblogs or blogs are an important new way to publish information, engage in discussions, and form communities on the Internet. The Blogosphere has unfortunately been infected by several varieties of spam-like content. Blog search engines, for example, are inundated by posts from splogs - false blogs with machine generated or hijacked content whose sole purpose is to host ads or raise the PageRank of target sites. We discuss how SVM models based on local and link-based features can be used to detect splogs. 
We present an evaluation of learned models and their utility to blog search engines; systems that employ techniques differing from those of conventional web search engines.", "Social resource sharing systems like YouTube and del.icio.us have acquired a large number of users within the last few years. They provide rich resources for data analysis, information retrieval, and knowledge discovery applications. A first step towards this end is to gain better insights into content and structure of these systems. In this paper, we will analyse the main network characteristics of two of these systems. We consider their underlying data structures - so-called folksonomies - as tri-partite hypergraphs, and adapt classical network measures like characteristic path length and clustering coefficient to them. Subsequently, we introduce a network of tag co-occurrence and investigate some of its statistical properties, focusing on correlations in node connectivity and pointing out features that reflect emergent semantics within the folksonomy. We show that simple statistical indicators unambiguously spot non-social behavior such as spam.", "Online product reviews have become an important source of user opinions. Due to profit or fame, imposters have been writing deceptive or fake reviews to promote and or to demote some target products or services. Such imposters are called review spammers. In the past few years, several approaches have been proposed to deal with the problem. In this work, we take a different approach, which exploits the burstiness nature of reviews to identify review spammers. Bursts of reviews can be either due to sudden popularity of products or spam attacks. Reviewers and reviews appearing in a burst are often related in the sense that spammers tend to work with other spammers and genuine reviewers tend to appear together with other genuine reviewers. This paves the way for us to build a network of reviewers appearing in different bursts. 
We then model reviewers and their cooccurrence in bursts as a Markov Random Field (MRF), and employ the Loopy Belief Propagation (LBP) method to infer whether a reviewer is a spammer or not in the graph. We also propose several features and employ feature induced message passing in the LBP framework for network inference. We further propose a novel evaluation method to evaluate the detected spammers automatically using supervised classification of their reviews. Additionally, we employ domain experts to perform a human evaluation of the identified spammers and non-spammers. Both the classification result and human evaluation result show that the proposed method outperforms strong baselines, which demonstrate the effectiveness of the method.", "We present an approach for detecting link spam common in blog comments by comparing the language models used in the blog post, the comment, and pages linked by the comments. In contrast to other link spam filtering approaches, our method requires no training, no hard-coded rule sets, and no knowledge of complete-web connectivity. Preliminary experiments with identification of typical blog spam show promising results." ] }
1407.7072
35416430
User networking behavior has also been well studied in this area. @cite_10 makes use of a user's tagging behavior, such as co-occurrence with other spammers. @cite_0 took a similar approach, categorizing users on YouTube into spammers, promoters, or legitimate users, and @cite_3 did the same on Twitter. @cite_8 proposed a behavior model that adds review-rating features, testing properties such as rating fairness together with other features. @cite_12 additionally takes into account temporal trends of reviews (called review burstiness). Graph-similarity-based detection was used in @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_3", "@cite_0", "@cite_10", "@cite_12" ], "mid": [ "2112213600", "2159359879", "196347582", "2097436041", "2009119083", "2189187207" ], "abstract": [ "Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.", "This paper aims to detect users generating spam reviews or review spammers. We identify several characteristic behaviors of review spammers and model these behaviors so as to detect the spammers. In particular, we seek to model the following behaviors. First, spammers may target specific products or product groups in order to maximize their impact. 
Second, they tend to deviate from the other reviewers in their ratings of products. We propose scoring methods to measure the degree of spam for each reviewer and apply them on an Amazon review dataset. We then select a subset of highly suspicious reviewers for further scrutiny by our user evaluators with the help of a web based spammer evaluation software specially developed for user evaluation experiments. Our results show that our proposed ranking and supervised methods are effective in discovering spammers and outperform other baseline method based on helpfulness votes alone. We finally show that the detected spammers have more significant impact on ratings compared with the unhelpful reviewers.", "As online social networks acquire a larger user base, they also become more interesting targets for spammers. Spam can take very different forms on social web sites and can not always be detected by analyzing textual content. However, the platform’s social nature also offers new ways of approaching the spam problem. In this work we analyze a user’s friends and followers to gain information on him. Next, we evaluate them using different metrics to determine the amount of trust his peers give him. We use the Twitter microblogging platform for this case study.", "A number of online video social networks, out of which YouTube is the most popular, provides features that allow users to post a video as a response to a discussion topic. These features open opportunities for users to introduce polluted content, or simply pollution, into the system. For instance, spammers may post an unrelated video as response to a popular one aiming at increasing the likelihood of the response being viewed by a larger number of users. Moreover, opportunistic users--promoters--may try to gain visibility to a specific video by posting a large number of (potentially unrelated) responses to boost the rank of the responded video, making it appear in the top lists maintained by the system. 
Content pollution may jeopardize the trust of users on the system, thus compromising its success in promoting social interactions. In spite of that, the available literature is very limited in providing a deep understanding of this problem. In this paper, we go a step further by addressing the issue of detecting video spammers and promoters. Towards that end, we manually build a test collection of real YouTube users, classifying them as spammers, promoters, and legitimates. Using our test collection, we provide a characterization of social and content attributes that may help distinguish each user class. We also investigate the feasibility of using a state-of-the-art supervised classification algorithm to detect spammers and promoters, and assess its effectiveness in our test collection. We found that our approach is able to correctly identify the majority of the promoters, misclassifying only a small percentage of legitimate users. In contrast, although we are able to detect a significant fraction of spammers, they showed to be much harder to distinguish from legitimate users.", "The annotation of web sites in social bookmarking systems has become a popular way to manage and find information on the web. The community structure of such systems attracts spammers: recent post pages, popular pages or specific tag pages can be manipulated easily. As a result, searching or tracking recent posts does not deliver quality results annotated in the community, but rather unsolicited, often commercial, web sites. To retain the benefits of sharing one's web content, spam-fighting mechanisms that can face the flexible strategies of spammers need to be developed. A classical approach in machine learning is to determine relevant features that describe the system's users, train different classifiers with the selected features and choose the one with the most promising evaluation results. 
In this paper we will transfer this approach to a social bookmarking setting to identify spammers. We will present features considering the topological, semantic and profile-based information which people make public when using the system. The dataset used is a snapshot of the social bookmarking system BibSonomy and was built over the course of several months when cleaning the system from spam. Based on our features, we will learn a large set of different classification models and compare their performance. Our results represent the groundwork for a first application in BibSonomy and for the building of more elaborate spam detection mechanisms.", "Online product reviews have become an important source of user opinions. Due to profit or fame, imposters have been writing deceptive or fake reviews to promote and or to demote some target products or services. Such imposters are called review spammers. In the past few years, several approaches have been proposed to deal with the problem. In this work, we take a different approach, which exploits the burstiness nature of reviews to identify review spammers. Bursts of reviews can be either due to sudden popularity of products or spam attacks. Reviewers and reviews appearing in a burst are often related in the sense that spammers tend to work with other spammers and genuine reviewers tend to appear together with other genuine reviewers. This paves the way for us to build a network of reviewers appearing in different bursts. We then model reviewers and their cooccurrence in bursts as a Markov Random Field (MRF), and employ the Loopy Belief Propagation (LBP) method to infer whether a reviewer is a spammer or not in the graph. We also propose several features and employ feature induced message passing in the LBP framework for network inference. We further propose a novel evaluation method to evaluate the detected spammers automatically using supervised classification of their reviews. 
Additionally, we employ domain experts to perform a human evaluation of the identified spammers and non-spammers. Both the classification result and human evaluation result show that the proposed method outperforms strong baselines, which demonstrate the effectiveness of the method." ] }
1407.7216
1859600792
We consider Approval Voting systems where each voter decides on a subset of candidates he/she approves. We focus on the optimization problem of finding the committee of fixed size k, minimizing the maximal Hamming distance from a vote. In this paper we give a PTAS for this problem and hence resolve the open question raised by [AAAI’10]. The result is obtained by adapting the techniques developed by [JACM’02] originally used for the less constrained Closest String problem. The technique relies on extracting information and structural properties of constant size subsets of votes.
In this paper we give a PTAS for the Minimax Approval Voting problem. Our work builds on the PTAS for Closest String @cite_1 , a problem similar to @math except that it has no restriction on the number of 1's in the result. Technically, our contribution is the method of handling the number of 1's in the output. We also believe that our presentation is somewhat more intuitive.
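For intuition, the underlying optimization problem can be stated as a brute-force search (exponential in the number of candidates, so purely illustrative; all names are ours, not from the cited papers):

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length 0/1 tuples."""
    return sum(x != y for x, y in zip(a, b))

def minimax_committee(votes, k):
    """Exhaustively find a size-k committee (a 0/1 vector with exactly k ones)
    minimizing the maximum Hamming distance to any vote.
    votes: list of equal-length 0/1 tuples; k: committee size."""
    m = len(votes[0])
    best, best_cost = None, float("inf")
    for chosen in combinations(range(m), k):
        committee = tuple(1 if i in chosen else 0 for i in range(m))
        cost = max(hamming(committee, v) for v in votes)
        if cost < best_cost:
            best, best_cost = committee, cost
    return best, best_cost

# Example: 3 voters, 4 candidates, committee of size 2.
votes = [(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 0)]
committee, cost = minimax_committee(votes, 2)
```

The PTAS avoids this exhaustive search; as the abstract notes, it instead extracts structural information from constant-size subsets of votes.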
{ "cite_N": [ "@cite_1" ], "mid": [ "2047386086" ], "abstract": [ "The problem of finding a center string that is \"close\" to everygiven string arises in computational molecular biology and codingtheory. This problem has two versions: the Closest String problemand the Closest Substring problem. Given a set of strings S= s1, s2, ...,sn , each of length m, the Closest Stringproblem is to find the smallest d and a string s of lengthm which is within Hamming distance d to eachsi e S. This problem comes fromcoding theory when we are looking for a code not too far away froma given set of codes. Closest Substring problem, with an additionalinput integer L, asks for the smallest d and a strings, of length L, which is within Hamming distance daway from a substring, of length L, of each si. This problemis much more elusive than the Closest String problem. The ClosestSubstring problem is formulated from applications in findingconserved regions, identifying genetic drug targets and generatinggenetic probes in molecular biology. Whether there are efficientapproximation algorithms for both problems are major open questionsin this area. We present two polynomial-time approximationalgorithms with approximation ratio 1 + e for any smalle to settle both questions." ] }
1407.7216
1859600792
We consider Approval Voting systems where each voter decides on a subset of candidates he/she approves. We focus on the optimization problem of finding the committee of fixed size k, minimizing the maximal Hamming distance from a vote. In this paper we give a PTAS for this problem and hence resolve the open question raised by [AAAI’10]. The result is obtained by adapting the techniques developed by [JACM’02] originally used for the less constrained Closest String problem. The technique relies on extracting information and structural properties of constant size subsets of votes.
Approval Voting systems are also analyzed with respect to manipulability; see, e.g., @cite_0 or @cite_5 . In particular, @cite_5 proved that every strategy-proof algorithm for @math must have approximation ratio at least @math , which implies that our PTAS cannot be strategy-proof.
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2951884201", "132136377" ], "abstract": [ "We study computational aspects of three prominent voting rules that use approval ballots to elect multiple winners. These rules are satisfaction approval voting, proportional approval voting, and reweighted approval voting. We first show that computing the winner for proportional approval voting is NP-hard, closing a long standing open problem. As none of the rules are strategyproof, even for dichotomous preferences, we study various strategic aspects of the rules. In particular, we examine the computational complexity of computing a best response for both a single agent and a group of agents. In many settings, we show that it is NP-hard for an agent or agents to compute how best to vote given a fixed set of approval ballots from the other agents.", "We consider approval voting elections in which each voter votes for a (possibly empty) set of candidates and the outcome consists of a set of k candidates for some parameter k, e.g., committee elections. We are interested in the min-imax approval voting rule in which the outcome represents a compromise among the voters, in the sense that the maximum distance between the preference of any voter and the outcome is as small as possible. This voting rule has two main drawbacks. First, computing an outcome that minimizes the maximum distance is computationally hard. Furthermore, any algorithm that always returns such an outcome provides incentives to voters to misreport their true preferences. In order to circumvent these drawbacks, we consider approximation algorithms, i.e., algorithms that produce an outcome that approximates the minimax distance for any given instance. Such algorithms can be considered as alternative voting rules. 
We present a polynomial-time 2-approximation algorithm that uses a natural linear programming relaxation for the underlying optimization problem and deterministically rounds the fractional solution in order to compute the outcome; this result improves upon the previously best known algorithm that has an approximation ratio of 3. We are furthermore interested in approximation algorithms that are resistant to manipulation by (coalitions of) voters, i.e., algorithms that do not motivate voters to misreport their true preferences in order to improve their distance from the outcome. We complement previous results in the literature with new upper and lower bounds on strategyproof and group-strategyproof algorithms." ] }
1407.7584
2952278693
Scaling feature values is an important step in numerous machine learning tasks. Different features can have different value ranges and some form of feature scaling is often required in order to learn an accurate classifier. However, feature scaling is conducted as a preprocessing task prior to learning. This is problematic in an online setting for two reasons. First, it might not be possible to accurately determine the value range of a feature at the initial stages of learning when we have observed only a small number of training instances. Second, the distribution of data can change over time, which renders obsolete any feature scaling that we perform in a pre-processing step. We propose a simple but effective method to dynamically scale features at train time, thereby quickly adapting to any changes in the data stream. We compare the proposed dynamic feature scaling method against more complex methods for estimating scaling parameters using several benchmark datasets for binary classification. Our proposed feature scaling method consistently outperforms more complex methods on all of the benchmark datasets and improves the classification accuracy of a state-of-the-art online binary classifier.
One-Pass Online Learning (OPOL) @cite_21 is a special case of online learning in which each training instance is observed only once by the learning algorithm. Typically, an online learning algorithm requires multiple passes over a training dataset to reach a convergent point. OPOL can be considered an extreme case in which the training batch size is limited to a single instance. The OPOL setting is more restrictive than the classical online learning setting, where a learning algorithm is allowed to traverse the training dataset multiple times. However, OPOL becomes the only possible alternative in the following scenarios.
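As a concrete illustration of the OPOL constraint (a minimal sketch in the spirit of the setting, not the paper's actual algorithm; all names are ours), the learner below sees each instance exactly once and standardizes features with running mean/variance statistics updated on the fly:

```python
class OnePassScaledPerceptron:
    """One-pass online learner: each instance is seen exactly once.
    Features are standardized with running mean/variance (Welford's method)
    updated as instances arrive -- a simple stand-in for dynamic scaling."""

    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim
        self.n = 0
        self.mean = [0.0] * dim
        self.m2 = [0.0] * dim  # running sum of squared deviations
        self.lr = lr

    def _scale(self, x):
        """Update running statistics with x, then return the scaled vector."""
        self.n += 1
        z = []
        for i, xi in enumerate(x):
            d = xi - self.mean[i]
            self.mean[i] += d / self.n
            self.m2[i] += d * (xi - self.mean[i])
            var = self.m2[i] / self.n if self.n > 1 else 1.0
            z.append((xi - self.mean[i]) / (var ** 0.5 + 1e-8))
        return z

    def partial_fit(self, x, y):   # y in {-1, +1}
        z = self._scale(x)
        score = sum(wi * zi for wi, zi in zip(self.w, z))
        if y * score <= 0:         # mistake-driven perceptron update
            self.w = [wi + self.lr * y * zi for wi, zi in zip(self.w, z)]

    def predict(self, x):
        # scale with the current statistics without updating them
        z = [(xi - m) / ((s / max(self.n, 1)) ** 0.5 + 1e-8)
             for xi, m, s in zip(x, self.mean, self.m2)]
        return 1 if sum(wi * zi for wi, zi in zip(self.w, z)) >= 0 else -1

# A tiny stream, seen exactly once; feature 1 has a much larger range.
clf = OnePassScaledPerceptron(dim=2)
stream = [([1000.0, 1.0], 1), ([-1000.0, -1.0], -1)] * 20
for x, y in stream:
    clf.partial_fit(x, y)
```

Because the scaling statistics are maintained incrementally, no preprocessing pass over the data is needed, which is exactly what the OPOL setting forbids.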
{ "cite_N": [ "@cite_21" ], "mid": [ "51915423" ], "abstract": [ "Domain adaptation, the problem of adapting a natural language processing system trained in one domain to perform well in a different domain, has received significant attention. This paper addresses an important problem for deployed systems that has received little attention - detecting when such adaptation is needed by a system operating in the wild, i.e., performing classification over a stream of unlabeled examples. Our method uses A-distance, a metric for detecting shifts in data streams, combined with classification margins to detect domain shifts. We empirically show effective domain shift detection on a variety of data sets and shift conditions." ] }
1407.7504
2950124142
Typography and layout lead to the hierarchical organisation of text in words, text lines, paragraphs. This inherent structure is a key property of text in any script and language, which has nonetheless been minimally leveraged by existing text detection methods. This paper addresses the problem of text segmentation in natural scenes from a hierarchical perspective. Contrary to existing methods, we make explicit use of text structure, aiming directly to the detection of region groupings corresponding to text within a hierarchy produced by an agglomerative similarity clustering process over individual regions. We propose an optimal way to construct such an hierarchy introducing a feature space designed to produce text group hypotheses with high recall and a novel stopping rule combining a discriminative classifier and a probabilistic measure of group meaningfulness based in perceptual organization. Results obtained over four standard datasets, covering text in variable orientations and different languages, demonstrate that our algorithm, while being trained in a single mixed dataset, outperforms state of the art methods in unconstrained scenarios.
Region-based methods, on the other hand, are based on a typical bottom-up pipeline: first performing an image segmentation and subsequently classifying the resulting regions into text or non-text ones. Yao et al. @cite_13 extract regions in the Stroke Width Transform (SWT) domain, proposed earlier for text detection by Epshtein et al. @cite_1 . Yin et al. @cite_0 obtain state-of-the-art performance with a method that prunes the tree of Maximally Stable Extremal Regions (MSER) using the strategy of minimizing regularized variations. The effectiveness of MSER for character candidate detection is also exploited by Chen et al. @cite_23 and Novikova et al. @cite_27 , while Neumann et al. @cite_10 propose a region representation derived from MSER where character/non-character classification is done for each possible Extremal Region (ER).
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_0", "@cite_27", "@cite_23", "@cite_13" ], "mid": [ "2061802763", "2142159465", "2148214126", "1569614731", "2078997308", "" ], "abstract": [ "An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.", "We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. 
Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.", "Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by the single-link clustering algorithm, where distance weights and clustering threshold are learned automatically by a novel self-training distance metric learning algorithm. The posterior probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated and texts are identified with a text classifier. The proposed system is evaluated on the ICDAR 2011 Robust Reading Competition database; the f-measure is over 76 , much better than the state-of-the-art performance of 71 . Experiments on multilingual, street view, multi-orientation and even born-digital databases also demonstrate the effectiveness of the proposed method.", "This paper proposes a new model for the task of word recognition in natural images that simultaneously models visual and lexicon consistency of words in a single probabilistic model. Our approach combines local likelihood and pairwise positional consistency priors with higher order priors that enforce consistency of characters (lexicon) and their attributes (font and colour). Unlike traditional stage-based methods, word recognition in our framework is performed by estimating the maximum a posteriori (MAP) solution under the joint posterior distribution of the model. 
MAP inference in our model is performed through the use of weighted finite-state transducers (WFSTs). We show how the efficiency of certain operations on WFSTs can be utilized to find the most likely word under the model in an efficient manner. We evaluate our method on a range of challenging datasets (ICDAR'03, SVT, ICDAR'11). Experimental results demonstrate that our method outperforms state-of-the-art methods for cropped word recognition.", "Detecting text in natural images is an important prerequisite. In this paper, we propose a novel text detection algorithm, which employs edge-enhanced Maximally Stable Extremal Regions as basic letter candidates. These candidates are then filtered using geometric and stroke width information to exclude non-text objects. Letters are paired to identify text lines, which are subsequently separated into words. We evaluate our system using the ICDAR competition dataset and our mobile document database. The experimental results demonstrate the excellent performance of the proposed method.", "" ] }
1407.7504
2950124142
Typography and layout lead to the hierarchical organisation of text in words, text lines, paragraphs. This inherent structure is a key property of text in any script and language, which has nonetheless been minimally leveraged by existing text detection methods. This paper addresses the problem of text segmentation in natural scenes from a hierarchical perspective. Contrary to existing methods, we make explicit use of text structure, aiming directly to the detection of region groupings corresponding to text within a hierarchy produced by an agglomerative similarity clustering process over individual regions. We propose an optimal way to construct such an hierarchy introducing a feature space designed to produce text group hypotheses with high recall and a novel stopping rule combining a discriminative classifier and a probabilistic measure of group meaningfulness based in perceptual organization. Results obtained over four standard datasets, covering text in variable orientations and different languages, demonstrate that our algorithm, while being trained in a single mixed dataset, outperforms state of the art methods in unconstrained scenarios.
Most region-based methods are complemented by a post-processing step in which regions assessed to be characters are grouped into words or text lines. The hierarchical structure of text has traditionally been exploited in a post-processing stage with heuristic rules @cite_1 @cite_23 , usually constrained to horizontally aligned text in order to avoid a combinatorial explosion when enumerating all possible text lines. Neumann and Matas @cite_10 introduce an efficient exhaustive search algorithm using heuristic verification functions at different grouping levels (i.e., region pairs, triplets, etc.), but still constrained to horizontal text. Yao et al. @cite_13 make use of a greedy agglomerative clustering where regions are grouped if their average alignment is under a certain threshold. Yin et al. @cite_0 use a self-training distance metric learning algorithm that can learn distance weights and clustering thresholds simultaneously and automatically for text group detection in a similarity feature space.
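A minimal sketch of this kind of threshold-based, single-link grouping (our simplification, not any cited method's exact rules): character candidates are linked when their centroids are close and their heights are similar, and the connected components form text-group hypotheses.

```python
def group_regions(regions, max_dist=50.0, max_height_ratio=2.0):
    """Greedy single-link grouping of character candidates into text groups.
    Each region is (cx, cy, h): centroid and height. Two regions link if their
    centroids are close and their heights are similar -- a simplified stand-in
    for the similarity rules used by region-based text detectors."""
    def linked(a, b):
        (ax, ay, ah), (bx, by, bh) = a, b
        close = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_dist
        similar = max(ah, bh) / min(ah, bh) <= max_height_ratio
        return close and similar

    # union-find over region indices to take the single-link closure
    parent = list(range(len(regions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            if linked(regions[i], regions[j]):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(regions)):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

# Three nearby, similarly sized regions (a word) plus one distant outlier.
regions = [(0, 0, 20), (30, 2, 22), (60, 1, 21), (300, 300, 20)]
groups = group_regions(regions)
```

Note that single-link closure is what lets regions 0 and 2 end up in the same group even though they are not directly linked, which is also why such heuristics over-group easily and need the verification stages described above.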
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_0", "@cite_23", "@cite_13" ], "mid": [ "2061802763", "2142159465", "2148214126", "2078997308", "" ], "abstract": [ "An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.", "We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. 
Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.", "Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks. In this paper, we propose an accurate and robust method for detecting texts in natural scene images. A fast and effective pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSERs) as character candidates using the strategy of minimizing regularized variations. Character candidates are grouped into text candidates by the single-link clustering algorithm, where distance weights and clustering threshold are learned automatically by a novel self-training distance metric learning algorithm. The posterior probabilities of text candidates corresponding to non-text are estimated with a character classifier; text candidates with high non-text probabilities are eliminated and texts are identified with a text classifier. The proposed system is evaluated on the ICDAR 2011 Robust Reading Competition database; the f-measure is over 76 , much better than the state-of-the-art performance of 71 . Experiments on multilingual, street view, multi-orientation and even born-digital databases also demonstrate the effectiveness of the proposed method.", "Detecting text in natural images is an important prerequisite. In this paper, we propose a novel text detection algorithm, which employs edge-enhanced Maximally Stable Extremal Regions as basic letter candidates. These candidates are then filtered using geometric and stroke width information to exclude non-text objects. Letters are paired to identify text lines, which are subsequently separated into words. We evaluate our system using the ICDAR competition dataset and our mobile document database. The experimental results demonstrate the excellent performance of the proposed method.", "" ] }
1407.7283
2951427339
Every bi-uniform matroid is representable over all sufficiently large fields. But it is not known exactly over which finite fields they are representable, and the existence of efficient methods to find a representation for every given bi-uniform matroid has not been proved. The interest of these problems is due to their implications to secret sharing. The existence of efficient methods to find representations for all bi-uniform matroids is proved here for the first time. The previously known efficient constructions apply only to a particular class of bi-uniform matroids, while the known general constructions were not proved to be efficient. In addition, our constructions provide in many cases representations over smaller finite fields.
This problem is avoided in the method proposed by Ng @cite_19 , which provides a representation for every given bi-uniform matroid. Specifically, Ng gives a representation for the bi-uniform matroid with rank @math and sub-ranks @math over every finite field of the form @math , where @math , each clonal class has at most @math elements, and @math is at least @math and co-prime with @math . This method may be efficient, but this fact is not proved in @cite_19 . In addition, the degree @math of the extension field depends on the rank @math , while in our efficient construction in Theorem , this degree depends only on @math . Therefore, if @math is small compared to @math , our construction works over smaller fields. Efficient methods to construct ideal hierarchical secret sharing schemes were given by Brickell @cite_16 and by Tassa @cite_8 . When applied to some particular cases, these methods provide representations for bi-uniform matroids in which one of the sub-ranks is equal to the rank.
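A toy illustration of the derivative-based shares behind Tassa's hierarchical scheme @cite_8 (field, degree, and participant identities are our choices, not the cited parameters): the secret is the free coefficient of a polynomial, high-level participants receive evaluations f(i), a lower-level participant receives a derivative value f'(i), and an authorized set recovers the secret by Birkhoff interpolation, done here by Gaussian elimination mod p.

```python
P = 2**31 - 1  # a prime modulus (toy choice)

def f(x, coeffs):
    """Evaluate f(x) = c0 + c1*x + c2*x^2 mod P (coefficients low-degree first)."""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def f_prime(x, coeffs):
    """Evaluate the formal derivative f'(x) mod P."""
    return sum(i * c * pow(x, i - 1, P) for i, c in enumerate(coeffs) if i > 0) % P

def recover(points):
    """Birkhoff interpolation for a degree-2 polynomial: each point is
    ('eval', x, f(x)) or ('deriv', x, f'(x)). Solves for the coefficients by
    Gaussian elimination mod P and returns the free coefficient (the secret)."""
    rows = []
    for kind, x, v in points:
        if kind == "eval":         # row for c0 + x*c1 + x^2*c2 = v
            rows.append([1, x % P, (x * x) % P, v % P])
        else:                      # row for 0*c0 + 1*c1 + 2x*c2 = v
            rows.append([0, 1, (2 * x) % P, v % P])
    for col in range(3):
        piv = next(r for r in range(col, 3) if rows[r][col] % P)
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)  # modular inverse via Fermat
        rows[col] = [c * inv % P for c in rows[col]]
        for r in range(3):
            if r != col and rows[r][col]:
                factor = rows[r][col]
                rows[r] = [(a - factor * b) % P for a, b in zip(rows[r], rows[col])]
    return rows[0][3]

# The secret is the free coefficient of a degree-2 polynomial (threshold 3).
secret = 123456
coeffs = [secret, 98765, 43210]

# Two high-level participants get evaluations; one low-level participant
# gets a derivative value (a "lesser" share), as in Tassa's scheme.
shares = [("eval", 1, f(1, coeffs)),
          ("eval", 2, f(2, coeffs)),
          ("deriv", 3, f_prime(3, coeffs))]
```

A substantial part of Tassa's paper concerns choosing participant identities so that the resulting Birkhoff system is nonsingular; in this toy instance the 3x3 matrix is invertible by inspection.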
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_8" ], "mid": [ "1555179723", "2223862429", "2083952939" ], "abstract": [ "Deciding whether a matroid is secret sharing or not is a well-known open problem. In Ng and Walker [6] it was shown that a matroid decomposes into uniform matroids under strong connectivity. The question then becomes as follows: when is a matroid m with N uniform components secret sharing? When N e 1, m corresponds to a uniform matroid and hence is secret sharing. In this paper we show, by constructing a representation using projective geometry, that all connected matroids with two uniform components are secret sharing", "In a secret sharing scheme, a dealer has a secret. The dealer gives each participant in the scheme a share of the secret. There is a set Γ of subsets of the participants with the property that any subset of participants that is in Γ can determine the secret. In a perfect secret sharing scheme, any subset of participants that is not in Γ cannot obtain any information about the secret. We will say that a perfect secret sharing scheme is ideal if all of the shares are from the same domain as the secret. Shamir and Blakley constructed ideal threshold schemes, and Benaloh has constructed other ideal secret sharing schemes. In this paper, we construct ideal secret sharing schemes for more general access structures which include the multilevel and compartmented access structures proposed by Simmons.", "We consider the problem of threshold secret sharing in groups with hierarchical structure. In such settings, the secret is shared among a group of participants that is partitioned into levels. The access structure is then determined by a sequence of threshold requirements: a subset of participants is authorized if it has at least k0 0 members from the highest level, as well as at least k1 > k0 members from the two highest levels and so forth. 
Such problems may occur in settings where the participants differ in their authority or level of confidence and the presence of higher level participants is imperative to allow the recovery of the common secret. Even though secret sharing in hierarchical groups has been studied extensively in the past, none of the existing solutions addresses the simple setting where, say, a bank transfer should be signed by three employees, at least one of whom must be a department manager. We present a perfect secret sharing scheme for this problem that, unlike most secret sharing schemes that are suitable for hierarchical structures, is ideal. As in Shamir's scheme, the secret is represented as the free coefficient of some polynomial. The novelty of our scheme is the usage of polynomial derivatives in order to generate lesser shares for participants of lower levels. Consequently, our scheme uses Birkhoff interpolation, i.e., the construction of a polynomial according to an unstructured set of point and derivative values. A substantial part of our discussion is dedicated to the question of how to assign identities to the participants from the underlying finite field so that the resulting Birkhoff interpolation problem will be well posed. In addition, we devise an ideal and efficient secret sharing scheme for the closely related hierarchical threshold access structures that were studied by Simmons and Brickell." ] }
1407.7448
2953044610
In modern Commercial Off-The-Shelf (COTS) multicore systems, each core can generate many parallel memory requests at a time. The processing of these parallel requests in the DRAM controller greatly affects the memory interference delay experienced by running tasks on the platform. In this paper, we model a modern COTS multicore system which has a nonblocking last-level cache (LLC) and a DRAM controller that prioritizes reads over writes. To minimize interference, we focus on LLC and DRAM bank partitioned systems. Based on the model, we propose an analysis that computes a safe upper bound for the worst-case memory interference delay. We validated our analysis on a real COTS multicore platform with a set of carefully designed synthetic benchmarks as well as SPEC2006 benchmarks. Evaluation results show that our analysis more accurately captures the worst-case memory interference delay and provides safer upper bounds than a recently proposed analysis, which significantly under-estimates the delay.
Initially, many researchers modeled the cost of accessing main memory as a constant and viewed main memory as a single resource shared by the cores @cite_29 @cite_25 @cite_26 @cite_4 . However, modern DRAM systems are composed of many sophisticated components, and the memory access cost is far from constant: it varies significantly depending on the states of the various components that make up the system.
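The state dependence of DRAM access cost can be made concrete with a small sketch. The open-page model and the timing constants below are illustrative assumptions, not values from the paper: an access to an already-open row pays only a column access, while a row conflict additionally pays a precharge and an activate.

```python
# Minimal sketch (assumed, illustrative timings in DRAM clock cycles):
# the cost of one access depends on the bank's row-buffer state, so
# identical requests can see very different latencies.

T_CAS, T_RCD, T_RP = 15, 15, 15  # column access, row activate, precharge (assumed)

class Bank:
    def __init__(self):
        self.open_row = None  # row currently held in the row buffer

    def access(self, row):
        """Return the latency of one access under an open-page policy."""
        if self.open_row == row:          # row-buffer hit
            latency = T_CAS
        elif self.open_row is None:       # bank idle: activate + read
            latency = T_RCD + T_CAS
        else:                             # row conflict: precharge + activate + read
            latency = T_RP + T_RCD + T_CAS
        self.open_row = row
        return latency

bank = Bank()
print(bank.access(3))  # first access: activate + read = 30
print(bank.access(3))  # row hit: 15
print(bank.access(7))  # row conflict: 45
```

The three prints show a 3x latency spread for the same operation, which is why a constant-cost memory model under-approximates interference.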
{ "cite_N": [ "@cite_29", "@cite_26", "@cite_4", "@cite_25" ], "mid": [ "2114620069", "2061663604", "1993079513", "2116514438" ], "abstract": [ "Shared resource access interference, particularly memory and system bus, is a big challenge in designing predictable real-time systems because its worst case behavior can significantly differ. In this paper, we propose a software based memory throttling mechanism to explicitly control the memory interference. We developed analytic solutions to compute proper throttling parameters that satisfy schedulability of critical tasks while minimize performance impact caused by throttling. We implemented the mechanism in Linux kernel and evaluated isolation guarantee and overall performance impact using a set of synthetic and real applications.", "Memory resources are a serious bottleneck in many real-time multicore systems. Previous work has shown that, in the worst case, execution time of memory intensive tasks can grow linearly with the number of cores in the system. To improve hard real-time utilization, a real-time multicore system should be scheduled according to a memory-centric scheduling approach if its workload is dominated by memory intensive tasks. In this work, a memory-centric scheduling technique is proposed where (a) core isolation is provided through a coarse-grained (high-level) Time Division Multiple Access (TDMA) memory schedule; and (b) the scheduling policy of each core \"promotes\" the priority of its memory intensive computations above CPU-only computation when memory access is permitted by the high-level schedule. Our evaluation reveals that under high memory demand, our scheduling approach can improve hard real-time task utilization significantly compared to traditional multicore scheduling.", "Modern computing systems have adopted multicore architectures and multiprocessor systems on chip (MPSoCs) for accommodating the increasing demand on computation power. 
However, performance boosting is constrained by shared resources, such as buses, main memory, DMA, etc. This paper analyzes the worst-case completion (response) time for real-time tasks when time division multiple access (TDMA) policies are applied for resource arbitration. Real-time tasks execute periodically on a processing element and are constituted by sequential superblocks. A superblock is characterized by its accesses to a shared resource and its computation time. We explore three models of accessing shared resources: (1) dedicated access model, in which accesses happen only at the beginning and the end of a superblock, (2) general access model, in which accesses could happen anytime during the execution of a superblock, and (3) hybrid access model, which combines the dedicated and general access models. We present a framework to analyze the worst-case completion time of real-time tasks (superblocks) under these three access models, for a given TDMA arbiter. We compare the timing analysis of the three proposed models for a real-world application.", "Employing COTS components in real-time embedded systems leads to timing challenges. When multiple CPU cores and DMA peripherals run simultaneously, contention for access to main memory can greatly increase a task's WCET. In this paper, we introduce an analysis methodology that computes upper bounds to task delay due to memory contention. First, an arrival curve is derived for each core representing the maximum memory traffic produced by all tasks executed on it. Arrival curves are then combined with a representation of the cache behavior for the task under analysis to generate a delay bound. Based on the computed delay, we show how tasks can be feasibly scheduled according to assigned time slots on each core." ] }
1407.7448
2953044610
In modern Commercial Off-The-Shelf (COTS) multicore systems, each core can generate many parallel memory requests at a time. The processing of these parallel requests in the DRAM controller greatly affects the memory interference delay experienced by running tasks on the platform. In this paper, we model a modern COTS multicore system which has a nonblocking last-level cache (LLC) and a DRAM controller that prioritizes reads over writes. To minimize interference, we focus on LLC and DRAM bank partitioned systems. Based on the model, we propose an analysis that computes a safe upper bound for the worst-case memory interference delay. We validated our analysis on a real COTS multicore platform with a set of carefully designed synthetic benchmarks as well as SPEC2006 benchmarks. Evaluation results show that our analysis more accurately captures the worst-case memory interference delay and provides safer upper bounds than a recently proposed analysis, which significantly under-estimates the delay.
Many researchers have turned to hardware approaches and developed specially designed DRAM controllers that are highly predictable and provide certain performance guarantees @cite_35 @cite_38 @cite_11 @cite_7 @cite_17 . The works in @cite_35 and @cite_38 both implement a hardware-based private-banking scheme that eliminates the interference caused by bank sharing. They differ in that the controller in @cite_35 uses a close-page policy with TDMA scheduling, while the work in @cite_38 uses an open-page policy with FCFS arbitration. AMC @cite_7 and Predator @cite_11 use bank interleaving with a close-page policy; both treat multiple memory banks as a single unit of access to simplify resource management. They differ in that AMC uses a round-robin arbiter, while Predator uses credit-controlled static-priority (CCSP) arbitration @cite_0 , which assigns priorities to requestors in order to guarantee minimum bandwidth and bounded latency. While these proposals are valuable, especially for hard real-time systems, they are not available in COTS systems.
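A rough illustration of why such predictable controllers can bound latency: the sketch below models a TDMA arbiter of the kind surveyed above, where each core owns one fixed slot per repeating frame, so the worst-case wait before a grant is bounded by the frame length regardless of the other cores' traffic. The slot length and core count are assumed values for illustration.

```python
# Minimal sketch of a TDMA memory arbiter (slot length and core count are
# assumed): each core owns one slot per repeating frame, which makes the
# worst-case wait for a grant independent of the other cores' load.

SLOT = 10      # slot length in cycles (assumed)
N_CORES = 4    # requestors sharing the DRAM (assumed)
FRAME = SLOT * N_CORES

def grant_time(core, t):
    """Cycle at which a request issued by `core` at cycle `t` is granted."""
    base = (t // FRAME) * FRAME + core * SLOT  # this frame's slot start
    if t < base:
        return base            # slot still ahead in the current frame
    if t < base + SLOT:
        return t               # already inside the core's own slot
    return base + FRAME        # slot missed: wait for the next frame

# Worst-case wait, hit just after the core's slot closes:
worst = max(grant_time(0, t) - t for t in range(FRAME))
print(worst)  # (N_CORES - 1) * SLOT = 30
```

The bound holds by construction of the schedule, which is exactly the property these hardware controllers provide and which COTS arbiters do not.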
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_7", "@cite_17", "@cite_0", "@cite_11" ], "mid": [ "", "1981191435", "2116826559", "2148543770", "2156588404", "" ], "abstract": [ "", "As multi-core systems are becoming more popular in real-time embedded systems, strict timing requirements for accessing shared resources must be met. In particular, a detailed latency analysis for Double Data Rate Dynamic RAM (DDR DRAM) is highly desirable. Several researchers have proposed predictable memory controllers to provide guaranteed memory access latency. However, the performance of such controllers sharply decreases as DDR devices become faster and the width of memory buses is increased. In this paper, we present a novel, composable worst case analysis for DDR DRAM that provides improved latency bounds compared to existing works by explicitly modeling the DRAM state. In particular, our approach scales better with increasing number of requestors and memory speed. Benchmark evaluations show up to 62 improvement in worst case task execution time compared to a competing predictable memory controller for a system with 8 requestors.", "Multicore processors (CMPs) represent a good solution to provide the performance required by current and future hard real-time systems. However, it is difficult to compute a tight WCET estimation for CMPs due to interferences that tasks suffer when accessing shared hardware resources. We propose an analyzable JEDEC-compliant DDRx SDRAM memory controller (AMC) for hard real-time CMPs, that reduces the impact of memory interferences caused by other tasks on WCET estimation, providing a predictable memory access time and allowing the computation of tight WCET estimations.", "Complex Systems-on-Chips (SoC) are mixed time-criticality systems that have to support firm real-time (FRT) and soft real-time (SRT) applications running in parallel. This is challenging for critical SoC components, such as memory controllers. 
Existing memory controllers focus on either firm real-time or soft real-time applications. FRT controllers use a close-page policy that maximizes worst-case performance and ignore opportunities to exploit locality, since it cannot be guaranteed. Conversely, SRT controllers try to reduce latency and consequently processor stalling by speculating on locality. They often use an open-page policy that sacrifices guaranteed performance, but is beneficial in the average case. This paper proposes a conservative open-page policy that improves average-case performance of a FRT controller in terms of bandwidth and latency without sacrificing real-time guarantees. As a result, the memory controller efficiently handles both FRT and SRT applications. The policy keeps pages open as long as possible without sacrificing guarantees and captures locality in this window. Experimental results show that on average 70% of the locality is captured for applications in the CHStone benchmark, reducing the execution time by 17% compared to a close-page policy. The effectiveness of the policy is also evaluated in a multi-application use-case, and we show that the overall average-case performance improves if there is at least one FRT or SRT application that exploits locality.", "The convergence of application domains in new systems-on-chip (SoC) results in systems with many applications with a mix of soft and hard real-time requirements. To reduce cost, resources, such as memories and interconnect, are shared between applications. However, resource sharing introduces interference between the sharing applications, making it difficult to satisfy their real-time requirements. Existing arbiters do not efficiently satisfy the requirements of applications in SoCs, as they either couple rate or allocation granularity to latency, or cannot run at high speeds in hardware with a low-cost implementation. 
The contribution of this paper is an arbiter called credit-controlled static-priority (CCSP), consisting of a rate regulator and a static-priority scheduler. The rate regulator isolates applications by regulating the amount of provided service in a way that decouples allocation granularity and latency. The static-priority scheduler decouples latency and rate, such that low latency can be provided to any application, regardless of the allocated rate. We show that CCSP belongs to the class of latency-rate servers and guarantees the allocated rate within a maximum latency, as required by hard real-time applications. We present a hardware implementation of the arbiter in the context of a DDR2 SDRAM controller. An instance with six ports running at 200 MHz requires an area of 0.0223 mm2 in a 90 nm CMOS process.", "" ] }
1407.7448
2953044610
In modern Commercial Off-The-Shelf (COTS) multicore systems, each core can generate many parallel memory requests at a time. The processing of these parallel requests in the DRAM controller greatly affects the memory interference delay experienced by running tasks on the platform. In this paper, we model a modern COTS multicore system which has a nonblocking last-level cache (LLC) and a DRAM controller that prioritizes reads over writes. To minimize interference, we focus on LLC and DRAM bank partitioned systems. Based on the model, we propose an analysis that computes a safe upper bound for the worst-case memory interference delay. We validated our analysis on a real COTS multicore platform with a set of carefully designed synthetic benchmarks as well as SPEC2006 benchmarks. Evaluation results show that our analysis more accurately captures the worst-case memory interference delay and provides safer upper bounds than a recently proposed analysis, which significantly under-estimates the delay.
To improve performance isolation in COTS systems, several recent papers have proposed software-based bank partitioning techniques @cite_1 @cite_5 @cite_31 . They exploit the virtual memory subsystem of modern operating systems to allocate memory on specific DRAM banks without requiring any special hardware support. Similar techniques have long been applied to partitioning shared caches @cite_16 @cite_8 @cite_39 @cite_15 @cite_28 @cite_19 @cite_18 . These resource partitioning techniques eliminate space contention for the partitioned resources and hence improve performance isolation. However, as shown in @cite_1 @cite_36 , modern COTS systems have many other components that remain shared and affect memory performance. A recent attempt to analyze these effects @cite_36 , which is reviewed in , greatly increased our understanding of the DRAM controller, but its system model is still far from real COTS systems, particularly in its assumption of one outstanding memory request per core. In contrast, our work models a more realistic COTS DRAM controller that handles multiple outstanding memory requests from each core and processes requests out of order (i.e., prioritizing reads over writes). We believe our system model and analysis capture architectural features commonly found in modern COTS systems and are hence better suited to analyzing memory interference on COTS multicore platforms.
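The software bank-partitioning idea above can be sketched briefly. The OS controls virtual-to-physical mappings, and the DRAM controller derives the bank index from a few physical-address bits; allocating a task only page frames whose bank bits match confines it to private banks. The bit positions below are an assumption for illustration; on real platforms they must be obtained from documentation or reverse-engineering.

```python
# Minimal sketch of OS-level DRAM bank partitioning via page allocation.
# The physical-address bits that select the bank are an assumption here.

PAGE_SHIFT = 12            # 4 KiB pages
BANK_BITS = (13, 14, 15)   # assumed bank-index bits of the physical address

def bank_of(paddr):
    """DRAM bank index encoded in a physical address."""
    return sum(((paddr >> b) & 1) << i for i, b in enumerate(BANK_BITS))

def frames_in_bank(bank, n_frames=256):
    """Physical page frames whose pages map entirely to `bank`
    (valid because every bank bit lies above the page offset)."""
    return [f for f in range(n_frames) if bank_of(f << PAGE_SHIFT) == bank]

# Serving one task only from frames_in_bank(0) and another only from
# frames_in_bank(1) removes bank-level contention between them.
print(frames_in_bank(0)[:4])  # [0, 1, 16, 17]
```

This is exactly the lever the cited software techniques use: no hardware change, only a constrained page allocator.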
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_15", "@cite_28", "@cite_36", "@cite_1", "@cite_39", "@cite_19", "@cite_5", "@cite_31", "@cite_16" ], "mid": [ "2118378639", "2160609361", "2145035826", "2151971447", "", "2015827499", "2001986196", "", "", "2048242615", "2115078506" ], "abstract": [ "Multi-core architectures are shaking the fundamental assumption that in real-time systems the WCET, used to analyze the schedulability of the complete system, is calculated on individual tasks. This is not even true in an approximate sense in a modern multi-core chip, due to interference caused by hardware resource sharing. In this work we propose (1) a complete framework to analyze and profile task memory access patterns and (2) a novel kernel-level cache management technique to enforce an efficient and deterministic cache allocation of the most frequently accessed memory areas. In this way, we provide a powerful tool to address one of the main sources of interference in a system where the last level of cache is shared among two or more CPUs. The technique has been implemented on commercial hardware and our evaluations show that it can be used to significantly improve the predictability of a given set of critical tasks.", "Cache partitioning and sharing is critical to the effective utilization of multicore processors. However, almost all existing studies have been evaluated by simulation that often has several limitations, such as excessive simulation time, absence of OS activities and proneness to simulation inaccuracy. To address these issues, we have taken an efficient software approach to supporting both static and dynamic cache partitioning in OS through memory address mapping. We have comprehensively evaluated several representative cache partitioning schemes with different optimization objectives, including performance, fairness, and quality of service (QoS). Our software approach makes it possible to run the SPEC CPU2006 benchmark suite to completion. 
Besides confirming important conclusions from previous work, we are able to gain several insights from whole-program executions, which are infeasible from simulation. For example, giving up some cache space in one program to help another one may improve the performance of both programs for certain workloads due to reduced contention for memory bandwidth. Our evaluation of previously proposed fairness metrics is also significantly different from a simulation-based study. The contributions of this study are threefold. (1) To the best of our knowledge, this is a highly comprehensive execution- and measurement-based study on multicore cache partitioning. This paper not only confirms important conclusions from simulation-based studies, but also provides new insights into dynamic behaviors and interaction effects. (2) Our approach provides a unique and efficient option for evaluating multicore cache partitioning. The implemented software layer can be used as a tool in multicore performance evaluation and hardware design. (3) The proposed schemes can be further refined for OS kernels to improve performance.", "It is well recognized that LRU cache-line replacement can be ineffective for applications with large working sets or non-localized memory access patterns. Specifically, in last-level processor caches, LRU can cause cache pollution by inserting non-reuseable elements into the cache while evicting reusable ones. The work presented in this paper addresses last-level cache pollution through a dynamic operating system mechanism, called ROCS, requiring no change to underlying hardware and no change to applications. ROCS employs hardware performance counters on a commodity processor to characterize application cache behavior at run-time. Using this online profiling, cache unfriendly pages are dynamically mapped to a pollute buffer in the cache, eliminating competition between reusable and non-reusable cache lines. 
The operating system implements the pollute buffer through a page-coloring based technique, by dedicating a small slice of the last-level cache to store non-reusable pages. Measurements show that ROCS, implemented in the Linux 2.6.24 kernel and running on a 2.3 GHz PowerPC 970FX, improves performance of memory-intensive SPEC CPU 2000 and NAS benchmarks by up to 34%, and 16% on average.", "Buffer caches in operating systems keep active file blocks in memory to reduce disk accesses. Related studies have been focused on how to minimize buffer misses and the caused performance degradation. However, the side effects and performance implications of accessing the data in buffer caches (i.e. buffer cache hits) have not been paid attention. In this paper, we show that accessing buffer caches can cause serious performance degradation on multicores, particularly with shared last level caches (LLCs). There are two reasons for this problem. First, data in files normally have weaker localities than data objects in virtual memory spaces. Second, due to the shared structure of LLCs on multicore processors, an application accessing the data in a buffer cache may flush the to-be-reused data of its co-running applications from the shared LLC and significantly slow down these applications. The paper proposes a buffer cache design called Selected Region Mapping Buffer (SRM-buffer) for multicore systems to effectively address the cache pollution problem caused by OS buffer. SRM-buffer improves existing OS buffer management with an enhanced page allocation policy that carefully selects mapping physical pages upon buffer misses. For a sequence of blocks accessed by an application, SRM-buffer allocates physical pages that are mapped to a selected region consisting of a small portion of sets in LLC. Thus, when these blocks are accessed, cache pollution is effectively limited within the small cache region. 
We have implemented a prototype of SRM-buffer in the Linux kernel, and tested it with extensive workloads. Performance evaluation shows SRM-buffer can improve system performance and decrease the execution times of workloads by up to 36%.", "", "DRAM consists of multiple resources called banks that can be accessed in parallel and independently maintain state information. In Commercial Off-The-Shelf (COTS) multicore platforms, banks are typically shared among all cores, even though programs running on the cores do not share memory space. In this situation, memory performance is highly unpredictable due to contention in the shared banks.", "Modern multi-core processors present new resource management challenges due to the subtle interactions of simultaneously executing processes sharing on-chip resources (particularly the L2 cache). Recent research demonstrates that the operating system may use the page coloring mechanism to control cache partitioning, and consequently to achieve fair and efficient cache utilization. However, page coloring places additional constraints on memory space allocation, which may conflict with application memory needs. Further, adaptive adjustments of cache partitioning policies in a multi-programmed execution environment may incur substantial overhead for page recoloring (or copying). This paper proposes a hot-page coloring approach enforcing coloring on only a small set of frequently accessed (or hot) pages for each process. The cost of identifying hot pages online is reduced by leveraging the knowledge of spatial locality during a page table scan of access bits. Our results demonstrate that hot page identification and selective coloring can significantly alleviate the coloring-induced adverse effects in practice. 
However, we also reach the somewhat negative conclusion that without additional hardware support, adaptive page coloring is only beneficial when recoloring is performed infrequently (meaning long scheduling time quanta in multi-programmed executions).", "", "", "In commercial-off-the-shelf (COTS) multi-core systems, the execution times of tasks become hard to predict because of contention on shared resources in the memory hierarchy. In particular, a task running in one processor core can delay the execution of another task running in another processor core. This is due to the fact that tasks can access data in the same cache set shared among processor cores or in the same memory bank in the DRAM memory (or both). Such cache and bank interference effects have motivated the need to create isolation mechanisms for resources accessed by more than one task. One popular isolation mechanism is cache coloring that divides the cache into multiple partitions. With cache coloring, each task can be assigned exclusive cache partitions, thereby preventing cache interference from other tasks. Similarly, bank coloring allows assigning exclusive bank partitions to tasks. While cache coloring and some bank coloring mechanisms have been studied separately, interactions between the two schemes have not been studied. Specifically, while memory accesses to two different bank colors do not interfere with each other at the bank level, they may interact at the cache level. Similarly, two different cache colors avoid cache interference but may not prevent bank interference. Therefore it is necessary to coordinate cache and bank coloring approaches. In this paper, we present a coordinated cache and bank coloring scheme that is designed to prevent cache and bank interference simultaneously. We also developed color allocation algorithms for configuring a virtual memory system to support our scheme which has been implemented in the Linux kernel. 
In our experiments, we observed that the execution time can increase by 60% due to inter-task interference when we use only cache coloring. Our coordinated approach can reduce this figure down to 12% (an 80% reduction).", "Cache-partitioning techniques have been invented to make modern processors with an extensive cache structure useful in real-time systems where task switches disrupt cache working sets and hence make execution times unpredictable. This paper describes an OS-controlled application-transparent cache-partitioning technique. The resulting partitions can be transparently assigned to tasks for their exclusive use. The major drawbacks found in other cache-partitioning techniques, namely waste of memory and additions on the critical performance path within CPUs, are avoided using memory coloring techniques that do not require changes within the chips of modern CPUs or on the critical path for performance. A simple filter algorithm commonly used in real-time systems, a matrix-multiplication algorithm and the interaction of both are analysed with regard to cache-induced worst case penalties. Worst-case penalties are determined for different widely-used cache architectures. Some insights regarding the impact of cache architectures on worst-case execution are described." ] }
1407.6432
2402628068
Learning structured outputs with general structures is computationally challenging, except for tree-structured models. Thus we propose an efficient boosting-based algorithm AdaBoost.MRF for this task. The idea is based on the realization that a graph is a superimposition of trees. Different from most existing work, our algorithm can handle partial labelling, and thus is particularly attractive in practice where reliable labels are often sparsely observed. In addition, our method works exclusively on trees and thus is guaranteed to converge. We apply the AdaBoost.MRF algorithm to an indoor video surveillance scenario, where activities are modelled at multiple levels.
Conditional random fields are an example of structured output models @cite_13 @cite_1 @cite_10 . Learning in structured output models can be based on principles other than maximum likelihood, for example, large margin @cite_13 @cite_1 or search @cite_10 . With the latter, the computation of feature expectations is replaced by finding the most probable labelling. Learning with partial labels has been addressed in the past decade @cite_31 @cite_36 @cite_6 . Related problems include weak supervision @cite_18 and indirect supervision @cite_24 . Partial labels arise when only some components of @math are observed, i.e., @math where @math and @math are the observed and missing components, respectively. In the CRF setting, parameter learning instead requires maximising the conditional incomplete log-likelihood, which can be shown to be: where @math . The gradient @math can now be derived as: where @math denotes the missing components associated with clique @math . Equations ) and ) reveal that learning depends on the ability of inference to compute @math , @math and the local clique distributions @math .
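The gradient referred to above takes a standard form that may be worth spelling out: a difference between a "clamped" expectation, with the missing labels inferred given the observed ones, and a "free" expectation under the full model. The notation below is a sketch filling in the elided @math placeholders with generic symbols (w for parameters, x for input, v and h for observed and missing label components, f_c for the features of clique c); it is not copied from the paper.

```latex
% Incomplete-data conditional log-likelihood (notation assumed as above):
\mathcal{L}(w) = \log p(v \mid x; w) = \log \sum_{h} p(v, h \mid x; w)

% Its gradient: clamped minus free clique-feature expectations.
\nabla_w \mathcal{L}(w)
  = \sum_{c} \Big( \mathbb{E}_{p(h \mid v, x; w)} \big[ f_c(v_c, h_c, x) \big]
                 - \mathbb{E}_{p(y \mid x; w)} \big[ f_c(y_c, x) \big] \Big)
```

Both expectations reduce to sums over local clique marginals, which is why the text stresses that learning hinges on the inference routine that computes those marginals.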
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_36", "@cite_1", "@cite_6", "@cite_24", "@cite_31", "@cite_10" ], "mid": [ "2105842272", "2026581312", "2964001329", "2097826433", "2099761356", "1536616638", "1503442776", "2010624529" ], "abstract": [ "Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary issue of designing classification algorithms that can deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider problems involving multiple dependent output variables, structured output spaces, and classification problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems. The proposed method has important applications in areas such as computational biology, natural language processing, information retrieval extraction, and optical character recognition. Experiments from various domains involving different types of output spaces emphasize the breadth and generality of our approach.", "We address the problem of weakly supervised semantic segmentation. The training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method must predict a class label for every pixel. Our goal is to enable segmentation algorithms to use multiple visual cues in this weakly supervised setting, analogous to what is achieved by fully supervised methods. 
However, it is difficult to assess the relative usefulness of different visual cues from weakly supervised training data. We define a parametric family of structured models, where each model weights visual cues in a different way. We propose a Maximum Expected Agreement model selection principle that evaluates the quality of a model from the family without looking at superpixel labels. Searching for the best model is a hard optimization problem, which has no analytic gradient and multiple local optima. We cast it as a Bayesian optimization problem and propose an algorithm based on Gaussian processes to efficiently solve it. Our second contribution is an Extremely Randomized Hashing Forest that represents diverse superpixel features as a sparse binary vector. It enables using appearance models of visual classes that are fast at training and testing and yet accurate. Experiments on the SIFT-flow dataset show a significant improvement over previous weakly supervised methods and even over some fully supervised methods.", "We explore a framework called boosted Markov networks to combine the learning capacity of boosting and the rich modeling semantics of Markov networks, and apply the framework to video-based activity recognition. Importantly, we extend the framework to incorporate hidden variables. We show how the framework can be applied for both model learning and feature selection. We demonstrate that boosted Markov networks with hidden variables perform comparably with the standard maximum likelihood estimation. However, our framework is able to learn sparse models, and therefore can provide computational savings when the learned models are used for classification.", "We consider large margin estimation in a broad range of prediction models where inference involves solving combinatorial optimization problems, for example, weighted graph-cuts or matchings. 
Our goal is to learn parameters such that inference using the model reproduces correct answers on the training data. Our method relies on the expressive power of convex optimization problems to compactly capture inference or solution optimality in structured prediction models. Directly embedding this structure within the learning formulation produces concise convex problems for efficient estimation of very complex and diverse models. We describe experimental results on a matching task, disulfide connectivity prediction, showing significant improvements over state-of-the-art methods.", "Activity recognition is an important issue in building intelligent monitoring systems. We address the recognition of multilevel activities in this paper via a conditional Markov random field (MRF), known as the dynamic conditional random field (DCRF). Parameter estimation in general MRFs using maximum likelihood is known to be computationally challenging (except for extreme cases), and thus we propose an efficient boosting-based algorithm AdaBoost.MRF for this task. Distinct from most existing work, our algorithm can handle hidden variables (missing labels) and is particularly attractive for smarthouse domains where reliable labels are often sparsely observed. Furthermore, our method works exclusively on trees and thus is guaranteed to converge. We apply the AdaBoost.MRF algorithmto a home video surveillance application and demonstrate its efficacy.", "We present a novel approach for structure prediction that addresses the difficulty of obtaining labeled structures for training. We observe that structured output problems often have a companion learning problem of determining whether a given input possesses a good structure. For example, the companion problem for the part-of-speech (POS) tagging task asks whether a given sequence of words has a corresponding sequence of POS tags that is \"legitimate\". 
While obtaining direct supervision for structures is difficult and expensive, it is often very easy to obtain indirect supervision from the companion binary decision problem. In this paper, we develop a large margin framework that jointly learns from both direct and indirect forms of supervision. Our experiments exhibit the significant contribution of the easy-to-get indirect binary supervision on three important NLP structure learning problems.", "Learning and understanding the typical patterns in the daily activities and routines of people from low-level sensory data is an important problem in many application domains such as building smart environments, or providing intelligent assistance. Traditional approaches to this problem typically rely on supervised learning and generative models such as the hidden Markov models and its extensions. While activity data can be readily acquired from pervasive sensors, e.g. in smart environments, providing manual labels to support supervised training is often extremely expensive. In this paper, we propose a new approach based on semi-supervised training of partially hidden discriminative models such as the conditional random field (CRF) and the maximum entropy Markov model (MEMM). We show that these models allow us to incorporate both labeled and unlabeled data for learning, and at the same time, provide us with the flexibility and accuracy of the discriminative framework. Our experimental results in the video surveillance domain illustrate that these models can perform better than their generative counterpart, the partially hidden Markov model, even when a substantial amount of labels are unavailable.", "Mappings to structured output spaces (strings, trees, partitions, etc.) are typically learned using extensions of classification algorithms to simple graphical structures (e.g., linear chains) in which search and parameter estimation can be performed exactly. 
Unfortunately, in many complex problems, it is rare that exact search or parameter estimation is tractable. Instead of learning exact models and searching via heuristic means, we embrace this difficulty and treat the structured output problem in terms of approximate search. We present a framework for learning as search optimization, and two parameter updates with convergence theorems and bounds. Empirical evidence shows that our integrated approach to learning and decoding can outperform exact models at smaller computational cost." ] }
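The model-selection abstract above casts a non-differentiable, multi-modal search as Gaussian-process Bayesian optimization. As a hedged illustration of that general technique (not code from the cited paper; the RBF kernel, length scale, and UCB acquisition rule are my own assumptions), one step of a GP-based optimization loop can be sketched as:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """RBF kernel matrix between two 1-D arrays of points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = rbf(x_query, x_query) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.clip(np.diag(cov), 0.0, None)

def ucb_next_point(x_train, y_train, candidates, kappa=2.0):
    """One Bayesian-optimization step: evaluate next the candidate with the
    highest upper confidence bound (posterior mean + kappa * posterior std)."""
    mean, var = gp_posterior(x_train, y_train, candidates)
    return candidates[int(np.argmax(mean + kappa * np.sqrt(var)))]
```

With two evaluated points, the UCB rule prefers a distant candidate where posterior uncertainty is high, which is the exploration behavior that makes this family of methods suited to objectives with multiple local optima.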
1407.6439
1435924991
Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems, information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user "think about features---not algorithms." We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality.
Knowledge Base Construction (KBC) has been an area of intense study over the last decade @cite_7 @cite_26 @cite_18 @cite_13 @cite_19 @cite_3 @cite_44 @cite_2 @cite_33 @cite_14 @cite_20 @cite_41 . Within this space, there are a number of approaches.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_14", "@cite_33", "@cite_7", "@cite_41", "@cite_3", "@cite_44", "@cite_19", "@cite_2", "@cite_13", "@cite_20" ], "mid": [ "2167571757", "", "", "2396924315", "2110367654", "1599188306", "", "2127978399", "2115461474", "2006149654", "2045495924", "2129629757" ], "abstract": [ "This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers high-quality output, even from unstructured Internet documents.", "", "", "We report research toward a never-ending language learning system, focusing on a first implementation which learns to classify occurrences of noun phrases according to lexical categories such as “city” and “university.” Our experiments suggest that the accuracy of classifiers produced by semi-supervised learning can be improved by coupling the learning of multiple classes based on background knowledge about relationships between the classes (e.g., ”university” is mutually exclusive of ”company”, and is a subset of ”organization”).", "As applications within and outside the enterprise encounter increasing volumes of unstructured data, there has been renewed interest in the area of information extraction (IE) -- the discipline concerned with extracting structured information from unstructured text. 
Classical IE techniques developed by the NLP community were based on cascading grammars and regular expressions. However, due to the inherent limitations of grammar-based extraction, these techniques are unable to: (i) scale to large data sets, and (ii) support the expressivity requirements of complex information tasks. At the IBM Almaden Research Center, we are developing SystemT, an IE system that addresses these limitations by adopting an algebraic approach. By leveraging well-understood database concepts such as declarative queries and cost-based optimization, SystemT enables scalable execution of complex information extraction tasks. In this paper, we motivate the SystemT approach to information extraction. We describe our extraction algebra and demonstrate the effectiveness of our optimization techniques in providing orders of magnitude reduction in the running time of complex extraction tasks.", "The goal of information extraction is to extract database records from text or semi-structured sources. Traditionally, information extraction proceeds by first segmenting each candidate record separately, and then merging records that refer to the same entities. While computationally efficient, this approach is suboptimal, because it ignores the fact that segmenting one candidate record can help to segment similar ones. For example, resolving a well-segmented field with a less-clear one can disambiguate the latter's boundaries. In this paper we propose a joint approach to information extraction, where segmentation of all records and entity resolution are performed together in a single integrated inference process. While a number of previous authors have taken steps in this direction (e.g., (2003), (2004)), to our knowledge this is the first fully joint approach. In experiments on the CiteSeer and Cora citation matching datasets, joint inference improved accuracy, and our approach outperformed previous ones. 
Further, by using Markov logic and the existing algorithms for it, our solution consisted mainly of writing the appropriate logical formulas, and required much less engineering than previous ones.", "", "To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information.", "Manually querying search engines in order to accumulate a large body of factual information is a tedious, error-prone process of piecemeal search. Search engines retrieve and rank potentially relevant documents for human perusal, but do not extract facts, assess confidence, or fuse information from multiple documents. This paper introduces KnowItAll, a system that aims to automate the tedious process of extracting large collections of facts from the web in an autonomous, domain-independent, and scalable manner. The paper describes preliminary experiments in which an instance of KnowItAll, running for four days on a single machine, was able to automatically extract 54,753 facts. KnowItAll associates a probability with each fact enabling it to trade off precision and recall. 
The paper analyzes KnowItAll's architecture and reports on lessons learned for the design of large-scale information extraction systems.", "This paper gives an overview on the YAGO-NAGA approach to information extraction for building a conveniently searchable, large-scale, highly accurate knowledge base of common facts. YAGO harvests infoboxes and category names of Wikipedia for facts about individual entities, and it reconciles these with the taxonomic backbone of WordNet in order to ensure that all entities have proper classes and the class system is consistent. Currently, the YAGO knowledge base contains about 19 million instances of binary relations for about 1.95 million entities. Based on intensive sampling, its accuracy is estimated to be above 95 percent. The paper presents the architecture of the YAGO extractor toolkit, its distinctive approach to consistency checking, its provisions for maintenance and further growth, and the query engine for YAGO, coined NAGA. It also discusses ongoing work on extensions towards integrating fact candidates extracted from natural-language text sources.", "Harvesting relational facts from Web sources has received great attention for automatically constructing large knowledge bases. State-of-the-art approaches combine pattern-based gathering of fact candidates with constraint-based reasoning. However, they still face major challenges regarding the trade-offs between precision, recall, and scalability. Techniques that scale well are susceptible to noisy patterns that degrade precision, while techniques that employ deep reasoning for high precision cannot cope with Web-scale data. This paper presents a scalable system, called PROSPERA, for high-quality knowledge harvesting. 
We propose a new notion of ngram-itemsets for richer patterns, and use MaxSat-based constraint reasoning on both the quality of patterns and the validity of fact candidates. We compute pattern-occurrence statistics for two benefits: they serve to prune the hypotheses space and to derive informative weights of clauses for the reasoner. The paper shows how to incorporate these building blocks into a scalable architecture that can parallelize all phases on a Hadoop-based distributed platform. Our experiments with the ClueWeb09 corpus include comparisons to the recent ReadTheWeb experiment. We substantially outperform these prior results in terms of recall, with the same precision, while having low run-times.", "Traditional relation extraction methods require pre-specified relations and relation-specific human-tagged examples. Bootstrapping systems significantly reduce the number of training examples, but they usually apply heuristic-based methods to combine a set of strict hard rules, which limit the ability to generalize and thus generate a low recall. Furthermore, existing bootstrapping methods do not perform open information extraction (Open IE), which can identify various types of relations without requiring pre-specifications. In this paper, we propose a statistical extraction framework called Statistical Snowball (StatSnowball), which is a bootstrapping system and can perform both traditional relation extraction and Open IE. StatSnowball uses the discriminative Markov logic networks (MLNs) and softens hard rules by learning their weights in a maximum likelihood estimate sense. MLN is a general model, and can be configured to perform different levels of relation extraction. In StatSnowball, pattern selection is performed by solving an l1-norm penalized maximum likelihood estimation, which enjoys well-founded theories and efficient solvers. 
We extensively evaluate the performance of StatSnowball in different configurations on both a small but fully labeled data set and large-scale Web data. Empirical results show that StatSnowball can achieve a significantly higher recall without sacrificing the high precision during iterations with a small number of seeds, and the joint inference of MLN can improve the performance. Finally, StatSnowball is efficient and we have developed a working entity relation search engine called Renlifang based on it." ] }
1407.6439
1435924991
Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems, information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user "think about features---not algorithms." We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality.
* Rule-Based Systems. The earliest KBC systems used pattern matching to extract relationships from text. The most well-known example is the ``Hearst Pattern'' proposed by Hearst @cite_9 in 1992. In her seminal work, Hearst observed that a large number of hyponyms can be discovered by simple patterns, e.g., ``X such as Y.'' Hearst's technique has formed the basis of many further techniques that attempt to extract high-quality patterns from text. Rule-based (pattern matching-based) KBC systems, such as IBM's SystemT @cite_7 @cite_4, have been built to aid developers in constructing high-quality patterns. These systems provide the user with a (declarative) interface to specify a set of rules and patterns to derive relationships. These systems have achieved state-of-the-art quality on tasks such as parsing @cite_4.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_7" ], "mid": [ "2068737686", "2144416276", "2110367654" ], "abstract": [ "We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested.", "In this paper we argue that developing information extraction (IE) programs using Datalog with embedded procedural extraction predicates is a good way to proceed. First, compared to current ad-hoc composition using, e.g., Perl or C++, Datalog provides a cleaner and more powerful way to compose small extraction modules into larger programs. Thus, writing IE programs this way retains and enhances the important advantages of current approaches: programs are easy to understand, debug, and modify. Second, once we write IE programs in this framework, we can apply query optimization techniques to them. This gives programs that, when run over a variety of data sets, are more efficient than any monolithic program because they are optimized based on the statistics of the data on which they are invoked. We show how optimizing such programs raises challenges specific to text data that cannot be accommodated in the current relational optimization framework, then provide initial solutions. 
Extensive experiments over real-world data demonstrate that optimization is indeed vital for IE programs and that we can effectively optimize IE programs written in this proposed framework.", "As applications within and outside the enterprise encounter increasing volumes of unstructured data, there has been renewed interest in the area of information extraction (IE) -- the discipline concerned with extracting structured information from unstructured text. Classical IE techniques developed by the NLP community were based on cascading grammars and regular expressions. However, due to the inherent limitations of grammar-based extraction, these techniques are unable to: (i) scale to large data sets, and (ii) support the expressivity requirements of complex information tasks. At the IBM Almaden Research Center, we are developing SystemT, an IE system that addresses these limitations by adopting an algebraic approach. By leveraging well-understood database concepts such as declarative queries and cost-based optimization, SystemT enables scalable execution of complex information extraction tasks. In this paper, we motivate the SystemT approach to information extraction. We describe our extraction algebra and demonstrate the effectiveness of our optimization techniques in providing orders of magnitude reduction in the running time of complex extraction tasks." ] }
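As a hedged illustration of the Hearst-pattern idea discussed in the related-work text above (illustrative only; not taken from Hearst's paper or SystemT, whose rule languages are far richer), a minimal extractor for the ``X such as Y'' pattern might look like:

```python
import re

# Illustrative Hearst-style extractor for the "X such as Y" pattern.
# Real rule-based KBC systems use much richer, declarative rule languages.
PATTERN = re.compile(
    r"(?P<hypernym>\w+)\s+such as\s+"
    r"(?P<hyponyms>\w+(?:,\s*\w+)*(?:,?\s+(?:and|or)\s+\w+)?)"
)

def hearst_such_as(sentence):
    """Return (hypernym, hyponym) pairs matched by the 'such as' pattern."""
    pairs = []
    for m in PATTERN.finditer(sentence):
        # Split the coordinated noun list on commas and "and"/"or".
        hyponyms = re.split(r",\s*|\s+(?:and|or)\s+", m.group("hyponyms"))
        pairs.extend((m.group("hypernym"), h) for h in hyponyms if h)
    return pairs
```

For example, `hearst_such_as("countries such as France, Italy and Spain")` yields one (hypernym, hyponym) pair per listed country; multiword hypernyms and noisier coordination are exactly where declarative rule systems earn their keep.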
1407.6439
1435924991
Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems, information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user "think about features---not algorithms." We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality.
Here, traditional classifiers assign each tuple a probability score, e.g., a naïve Bayes classifier or a logistic regression classifier. For example, KnowItAll @cite_19 and TextRunner @cite_35 @cite_44 use a naïve Bayes classifier, and CMU’s NELL @cite_33 @cite_1 uses logistic regression. Large-scale systems typically use these types of approaches in sophisticated combinations, e.g., NELL or Watson.
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_1", "@cite_44", "@cite_19" ], "mid": [ "2009591769", "2396924315", "1512387364", "2127978399", "2115461474" ], "abstract": [ "Traditional information extraction systems have focused on satisfying precise, narrow, pre-specified requests from small, homogeneous corpora. In contrast, the TextRunner system demonstrates a new kind of information extraction, called Open Information Extraction (OIE), in which the system makes a single, data-driven pass over the entire corpus and extracts a large set of relational tuples, without requiring any human input. (, 2007) TextRunner is a fully-implemented, highly scalable example of OIE. TextRunner's extractions are indexed, allowing a fast query mechanism.", "We report research toward a never-ending language learning system, focusing on a first implementation which learns to classify occurrences of noun phrases according to lexical categories such as “city” and “university.” Our experiments suggest that the accuracy of classifiers produced by semi-supervised learning can be improved by coupling the learning of multiple classes based on background knowledge about relationships between the classes (e.g., ”university” is mutually exclusive of ”company”, and is a subset of ”organization”).", "We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. 
In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.", "To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information.", "Manually querying search engines in order to accumulate a large body of factual information is a tedious, error-prone process of piecemeal search. Search engines retrieve and rank potentially relevant documents for human perusal, but do not extract facts, assess confidence, or fuse information from multiple documents. 
This paper introduces KnowItAll, a system that aims to automate the tedious process of extracting large collections of facts from the web in an autonomous, domain-independent, and scalable manner. The paper describes preliminary experiments in which an instance of KnowItAll, running for four days on a single machine, was able to automatically extract 54,753 facts. KnowItAll associates a probability with each fact enabling it to trade off precision and recall. The paper analyzes KnowItAll's architecture and reports on lessons learned for the design of large-scale information extraction systems." ] }
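The classifier-based scoring described above (KnowItAll- and TextRunner-style naïve Bayes over extraction features) can be sketched as follows; the binary features, priors, and Laplace smoothing here are illustrative assumptions, not details of either system:

```python
import math
from collections import defaultdict

class NaiveBayesFactScorer:
    """Tiny naïve Bayes sketch: score candidate facts as true/false from
    binary features (e.g., 'matched a reliable pattern'). Laplace smoothing
    keeps unseen features from zeroing out the score."""

    def __init__(self):
        self.counts = {True: defaultdict(int), False: defaultdict(int)}
        self.totals = {True: 0, False: 0}

    def train(self, features, label):
        self.totals[label] += 1
        for f in features:
            self.counts[label][f] += 1

    def prob_true(self, features):
        # Class prior (smoothed), then one smoothed likelihood ratio per feature.
        log_odds = math.log((self.totals[True] + 1) / (self.totals[False] + 1))
        for f in features:
            p_t = (self.counts[True][f] + 1) / (self.totals[True] + 2)
            p_f = (self.counts[False][f] + 1) / (self.totals[False] + 2)
            log_odds += math.log(p_t / p_f)
        return 1 / (1 + math.exp(-log_odds))
```

Training on a few labeled candidate tuples and then calling `prob_true` gives each extraction a probability, which is what lets systems in this family trade off precision against recall by thresholding.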
1407.6439
1435924991
Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems, information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user "think about features---not algorithms." We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality.
Here, a probabilistic approach is used, but the MAP or most likely world (which do differ slightly) is selected. Notable examples include the YAGO system @cite_2 , which uses a PageRank-based approach to assign a confidence score. Other examples include SOFIE @cite_18 and Prospera @cite_13 , which use an approach based on constraint satisfaction.
{ "cite_N": [ "@cite_18", "@cite_13", "@cite_2" ], "mid": [ "2167571757", "2045495924", "2006149654" ], "abstract": [ "This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers high-quality output, even from unstructured Internet documents.", "Harvesting relational facts from Web sources has received great attention for automatically constructing large knowledge bases. State-of-the-art approaches combine pattern-based gathering of fact candidates with constraint-based reasoning. However, they still face major challenges regarding the trade-offs between precision, recall, and scalability. Techniques that scale well are susceptible to noisy patterns that degrade precision, while techniques that employ deep reasoning for high precision cannot cope with Web-scale data. This paper presents a scalable system, called PROSPERA, for high-quality knowledge harvesting. We propose a new notion of ngram-itemsets for richer patterns, and use MaxSat-based constraint reasoning on both the quality of patterns and the validity of fact candidates. We compute pattern-occurrence statistics for two benefits: they serve to prune the hypotheses space and to derive informative weights of clauses for the reasoner. 
The paper shows how to incorporate these building blocks into a scalable architecture that can parallelize all phases on a Hadoop-based distributed platform. Our experiments with the ClueWeb09 corpus include comparisons to the recent ReadTheWeb experiment. We substantially outperform these prior results in terms of recall, with the same precision, while having low run-times.", "This paper gives an overview on the YAGO-NAGA approach to information extraction for building a conveniently searchable, large-scale, highly accurate knowledge base of common facts. YAGO harvests infoboxes and category names of Wikipedia for facts about individual entities, and it reconciles these with the taxonomic backbone of WordNet in order to ensure that all entities have proper classes and the class system is consistent. Currently, the YAGO knowledge base contains about 19 million instances of binary relations for about 1.95 million entities. Based on intensive sampling, its accuracy is estimated to be above 95 percent. The paper presents the architecture of the YAGO extractor toolkit, its distinctive approach to consistency checking, its provisions for maintenance and further growth, and the query engine for YAGO, coined NAGA. It also discusses ongoing work on extensions towards integrating fact candidates extracted from natural-language text sources." ] }
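The PageRank-based confidence assignment mentioned for YAGO above can be illustrated with a generic power-iteration sketch (the adjacency representation, damping factor, and dangling-node handling are my assumptions; this is not YAGO's actual algorithm):

```python
def pagerank(graph, damping=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict {node: [out-neighbors]}.
    All edge targets are assumed to appear as keys; dangling nodes (no
    outgoing edges) redistribute their rank uniformly."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            out = graph[v]
            if out:
                share = damping * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:  # dangling node: spread rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank
```

In a confidence-propagation setting, nodes would be fact candidates or sources and the resulting stationary scores serve as relative confidence values; total rank mass stays at 1 across iterations.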
1407.6439
1435924991
Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems, information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user "think about features---not algorithms." We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality.
The classification-based methods ignore the interaction among predictions, and there is a hypothesis that modeling these correlations yields higher-quality systems more quickly. A generic graphical model has been used to model the probabilistic distribution among all possible extractions. For example, @cite_41 used Markov logic networks (MLN) @cite_17 for information extraction. Microsoft's StatisticalSnowBall EntityCube @cite_20 also uses an MLN-based approach. A key challenge in these systems is scalability; for example, one such system was limited to 1.5K citations. Our relational database-driven algorithms for MLN-based systems are dramatically more scalable @cite_32 .
{ "cite_N": [ "@cite_41", "@cite_32", "@cite_20", "@cite_17" ], "mid": [ "1599188306", "2952047749", "2129629757", "" ], "abstract": [ "The goal of information extraction is to extract database records from text or semi-structured sources. Traditionally, information extraction proceeds by first segmenting each candidate record separately, and then merging records that refer to the same entities. While computationally efficient, this approach is suboptimal, because it ignores the fact that segmenting one candidate record can help to segment similar ones. For example, resolving a well-segmented field with a less-clear one can disambiguate the latter's boundaries. In this paper we propose a joint approach to information extraction, where segmentation of all records and entity resolution are performed together in a single integrated inference process. While a number of previous authors have taken steps in this direction (eg., (2003), (2004)), to our knowledge this is the first fully joint approach. In experiments on the CiteSeer and Cora citation matching datasets, joint inference improved accuracy, and our approach outperformed previous ones. Further, by using Markov logic and the existing algorithms for it, our solution consisted mainly of writing the appropriate logical formulas, and required much less engineering than previous ones.", "Markov Logic Networks (MLNs) have emerged as a powerful framework that combines statistical and logical reasoning; they have been applied to many data intensive problems including information extraction, entity resolution, and text mining. Current implementations of MLNs do not scale to large real-world data sets, which is preventing their wide-spread adoption. 
We present Tuffy that achieves scalability via three novel contributions: (1) a bottom-up approach to grounding that allows us to leverage the full power of the relational optimizer, (2) a novel hybrid architecture that allows us to perform AI-style local search efficiently using an RDBMS, and (3) a theoretical insight that shows when one can (exponentially) improve the efficiency of stochastic local search. We leverage (3) to build novel partitioning, loading, and parallel algorithms. We show that our approach outperforms state-of-the-art implementations in both quality and speed on several publicly available datasets.", "Traditional relation extraction methods require pre-specified relations and relation-specific human-tagged examples. Bootstrapping systems significantly reduce the number of training examples, but they usually apply heuristic-based methods to combine a set of strict hard rules, which limit the ability to generalize and thus generate a low recall. Furthermore, existing bootstrapping methods do not perform open information extraction (Open IE), which can identify various types of relations without requiring pre-specifications. In this paper, we propose a statistical extraction framework called Statistical Snowball (StatSnowball), which is a bootstrapping system and can perform both traditional relation extraction and Open IE. StatSnowball uses the discriminative Markov logic networks (MLNs) and softens hard rules by learning their weights in a maximum likelihood estimate sense. MLN is a general model, and can be configured to perform different levels of relation extraction. In StatSnwoball, pattern selection is performed by solving an l1-norm penalized maximum likelihood estimation, which enjoys well-founded theories and efficient solvers. We extensively evaluate the performance of StatSnowball in different configurations on both a small but fully labeled data set and large-scale Web data. 
Empirical results show that StatSnowball can achieve a significantly higher recall without sacrificing the high precision during iterations with a small number of seeds, and the joint inference of MLN can improve the performance. Finally, StatSnowball is efficient and we have developed a working entity relation search engine called Renlifang based on it.", "" ] }
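The MLN-based systems discussed in this record all score possible worlds by summing the weights of satisfied formula groundings. A minimal brute-force sketch of that scoring rule follows; the two-person smoking/cancer domain, the single rule, and its weight are invented for illustration and are not taken from any cited system:

```python
import itertools
import math

PEOPLE = ["a", "b"]
RULE_WEIGHT = 1.5  # hypothetical weight for the rule Smokes(x) => Cancer(x)

def world_weight(world):
    # An MLN gives a world the unnormalized weight exp(sum of weights of
    # satisfied formula groundings); here the one rule is grounded per person.
    total = 0.0
    for x in PEOPLE:
        if (not world[("Smokes", x)]) or world[("Cancer", x)]:
            total += RULE_WEIGHT
    return math.exp(total)

# Enumerate all truth assignments to the four ground atoms.
atoms = [(p, x) for p in ("Smokes", "Cancer") for x in PEOPLE]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

# Conditional query P(Cancer(a) | Smokes(a)) by exhaustive summation.
num = sum(world_weight(w) for w in worlds if w[("Smokes", "a")] and w[("Cancer", "a")])
den = sum(world_weight(w) for w in worlds if w[("Smokes", "a")])
print(num / den)  # ~0.82: the soft rule makes Cancer(a) likely given Smokes(a)
```

Real systems such as Tuffy avoid this exponential enumeration; the scalability work cited above is precisely about replacing it with relational grounding and local search.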
1407.6174
1576407929
The Bag-of-Words (BoW) representation is widely used in computer vision. The size of the codebook impacts the time and space complexity of the applications that use BoW. Thus, given a training set for a particular computer vision task, a key problem is pruning a large codebook to select only a subset of visual words. Evaluating possible selections of words to be included in the pruned codebook can be computationally prohibitive; in a brute-force scheme, evaluating each pruned codebook requires re-coding of all features extracted from training images to words in the candidate codebook and then re-pooling the words to obtain a representation of each image, e.g., histogram of visual word frequencies. In this paper, a method is proposed that selects and evaluates a subset of words from an initially large codebook, without the need for re-coding or re-pooling. Formulations are proposed for two commonly-used schemes: hard and soft (kernel) coding of visual words with average-pooling. The effectiveness of these formulations is evaluated on the 15 Scenes and Caltech 10 benchmarks.
Borrowing ideas from the document retrieval domain @cite_11 , traditional codeword selection methods use criteria such as the term frequency, @math statistic, mutual information, and learned SVM weights to select the most discriminative codewords @cite_4 @cite_13 . Winn and Minka @cite_2 propose to merge visual words (textons) with respect to a probabilistic measure defined on the altered representations. In doing so, they aim to find dimensions in the original representation to merge that presumably correspond to the same textures but are captured under different lighting or viewing angles. Similarly, @cite_17 merges pairs of visual words based on a mutual information measure. Wang @cite_7 employs a boosting mechanism where each weak classifier is associated with a codeword, and the selection of weak classifiers in the procedure naturally results in the selection of the most discriminative codewords. Zhang et al. @cite_20 consider an unsupervised scheme in which the visual words are selected by constructing a ridge regression formulation.
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_7", "@cite_2", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2435251607", "2036718463", "2105126798", "2141303268", "", "1976906132", "2151768982" ], "abstract": [ "", "Based on keypoints extracted as salient image patches, an image can be described as a \"bag of visual words\" and this representation has been used in scene classification. The choice of dimension, selection, and weighting of visual words in this representation is crucial to the classification performance but has not been thoroughly studied in previous work. Given the analogy between this representation and the bag-of-words representation of text documents, we apply techniques used in text categorization, including term weighting, stop word removal, feature selection, to generate image representations that differ in the dimension, selection, and weighting of visual words. The impact of these representation choices to scene classification is studied through extensive experiments on the TRECVID and PASCAL collection. This study provides an empirical basis for designing visual-word representations that are likely to produce superior classification performance.", "In patch-based object recognition, there are two important issues on the codebook generation: (I) resolution: a coarse codebook lacks sufficient discriminative power, and an over-fine one is sensitive to noise; (2) codeword selection: non-discriminative codewords not only increase the codebook size, but also can hurt the recognition performance. To achieve a discriminative codebook for better recognition, this paper argues that these two issues are strongly related and should be solved as a whole. In this paper, a multi-resolution codebook is first designed via hierarchical clustering. With a reasonable size, it includes all of the codewords which cross a large number of resolution levels. More importantly, it forms a diverse candidate codeword set that is critical to codeword selection. 
A Boosting feature selection approach is modified to select the discriminative codewords from this multi-resolution codebook. By doing so, the obtained codebook is composed of the most discriminative codewords culled from different levels of resolution. Experimental study demonstrates the better recognition performance attained by this codebook.", "This paper presents a new algorithm for the automatic recognition of object classes from images (categorization). Compact and yet discriminative appearance-based object class models are automatically learned from a set of training images. The method is simple and extremely fast, making it suitable for many applications such as semantic image retrieval, Web search, and interactive image editing. It classifies a region according to the proportions of different visual words (clusters in feature space). The specific visual words and the typical proportions in each object are learned from a segmented training set. The main contribution of this paper is twofold: i) an optimally compact visual dictionary is learned by pair-wise merging of visual words from an initially large dictionary. The final visual words are described by GMMs. ii) A novel statistical measure of discrimination is proposed which is optimized by each merge operation. High classification accuracy is demonstrated for nine object classes on photographs of real objects viewed under general lighting conditions, poses and viewpoints. The set of test images used for validation comprises: i) photographs acquired by us, ii) images from the Web and iii) images from the recently released Pascal dataset. The proposed algorithm performs well on both texture-rich objects (e.g. grass, sky, trees) and structure-rich ones (e.g. cars, bikes, planes)", "", "Bag of features (BoF) representation has attracted an increasing amount of attention in large scale image processing systems.
BoF representation treats images as loose collections of local invariant descriptors extracted from them. The visual codebook is generally constructed by using an unsupervised algorithm such as K-means to quantize the local descriptors into clusters. Images are then represented by the frequency histograms of the codewords contained in them. To build a compact and discriminative codebook, codeword selection has become an indispensable tool. However, most of the existing codeword selection algorithms are supervised and the human labeling may be very expensive. In this paper, we consider the problem of unsupervised codeword selection, and propose a novel algorithm called Discriminative Codeword Selection (DCS). Motivated from recent studies on discriminative clustering, the central idea of our proposed algorithm is to select those codewords so that the cluster structure of the image database can be best respected. Specifically, a multi-output linear function is fitted to model the relationship between the data matrix after codeword selection and the indicator matrix. The most discriminative codewords are thus defined as those leading to minimal fitting error. Experiments on image retrieval and clustering have demonstrated the effectiveness of the proposed method.", "We present an approach to determine the category and location of objects in images. It performs very fast categorization of each pixel in an image, a brute-force approach made feasible by three key developments: First, our method reduces the size of a large generic dictionary (on the order of ten thousand words) to the low hundreds while increasing classification performance compared to k-means. This is achieved by creating a discriminative dictionary tailored to the task by following the information bottleneck principle. Second, we perform feature-based categorization efficiently on a dense grid by extending the concept of integral images to the computation of local histograms. 
Third, we compute SIFT descriptors densely in linear time. We compare our method to the state of the art and find that it excels in accuracy and simplicity, performing better while assuming less." ] }
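For the hard-coding/average-pooling case this record describes, the reason codebook pruning can be evaluated cheaply is that the pooled representation is just a word-frequency vector, so a candidate pruned codebook can be scored (approximately) by slicing and renormalizing existing histograms instead of re-coding every feature. The sketch below illustrates that idea with made-up 2-D data; it is a simplified approximation, not the paper's exact formulation:

```python
import numpy as np

def hard_code_histogram(features, codebook):
    # Hard coding: assign each local descriptor to its nearest codeword,
    # then average-pool the assignments into a normalized frequency histogram.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

def prune_and_renormalize(hist, keep):
    # Approximate evaluation of a pruned codebook: drop the removed
    # dimensions of an already-pooled histogram and renormalize.
    h = hist[keep]
    s = h.sum()
    return h / s if s > 0 else h

codebook = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])  # 4 codewords
features = np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 0.9],
                     [1.0, 1.0], [0.05, 0.05], [0.95, 0.0]])
hist = hard_code_histogram(features, codebook)     # [1/3, 1/3, 1/6, 1/6]
pruned = prune_and_renormalize(hist, keep=[0, 2])  # [2/3, 1/3]
```

True re-coding would reassign descriptors whose nearest word was pruned to the next-nearest kept word; the slice-and-renormalize shortcut skips that pass at the cost of discarding those counts.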
1407.6125
748833766
Sequence discovery tools play a central role in several fields of computational biology. In the framework of Transcription Factor binding studies, motif finding algorithms of increasingly high performance are required to process the big datasets produced by new high-throughput sequencing technologies. Most existing algorithms are computationally demanding and often cannot support the large size of new experimental data. We present a new motif discovery algorithm that is built on a recent machine learning technique, referred to as Method of Moments. Based on spectral decompositions, this method is robust under model misspecification and is not prone to locally optimal solutions. We obtain an algorithm that is extremely fast and designed for the analysis of big sequencing data. In a few minutes, we can process datasets of hundreds of thousand sequences and extract motif profiles that match those computed by various state-of-the-art algorithms.
The literature on sequence motif discovery is vast. We refer to @cite_43 @cite_26 @cite_30 @cite_12 for reviews and additional references. There are two main classes of motif finding algorithms, probabilistic and word-based. Probabilistic algorithms search for the most represented un-gapped alignments in the sample to obtain deterministic consensus sequences, PWM models, or more advanced models that take into account multi-base correlations @cite_3 @cite_47 @cite_33 @cite_2 @cite_21 . Word-based algorithms search the dataset for deterministic short words, measure the statistical significance of small variations from a given seed, or transform motif discovery into a kernel feature classification problem @cite_34 @cite_0 @cite_38 . Our method and two of the algorithms we have used for evaluating our results, namely MEME @cite_13 and STEME @cite_1 , belong to the probabilistic class, while the method used in @cite_44 and DREME @cite_6 are word-based algorithms. The latter algorithms can also compute PWM models, so it is of interest to compare algorithms of different classes (See Results section).
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_26", "@cite_33", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_43", "@cite_44", "@cite_2", "@cite_47", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "", "", "", "", "2029201460", "", "2146393977", "2097175728", "2127952290", "2141408320", "", "", "2125810118", "", "", "2031140266" ], "abstract": [ "", "", "", "", "Finding where transcription factors (TFs) bind to the DNA is of key importance to decipher gene regulation at a transcriptional level. Classically, computational prediction of TF binding sites (TFBSs) is based on basic position weight matrices (PWMs) which quantitatively score binding motifs based on the observed nucleotide patterns in a set of TFBSs for the corresponding TF. Such models make the strong assumption that each nucleotide participates independently in the corresponding DNA-protein interaction and do not account for flexible length motifs. We introduce transcription factor flexible models (TFFMs) to represent TF binding properties. Based on hidden Markov models, TFFMs are flexible, and can model both position interdependence within TFBSs and variable length motifs within a single dedicated framework. The availability of thousands of experimentally validated DNA-TF interaction sequences from ChIP-seq allows for the generation of models that perform as well as PWMs for stereotypical TFs and can improve performance for TFs with flexible binding characteristics. We present a new graphical representation of the motifs that convey properties of position interdependence. TFFMs have been assessed on ChIP-seq data sets coming from the ENCODE project, revealing that they can perform better than both PWMs and the dinucleotide weight matrix extension in discriminating ChIP-seq from background sequences. Under the assumption that ChIP-seq signal values are correlated with the affinity of the TF-DNA binding, we find that TFFM scores correlate with ChIP-seq peak signals. 
Moreover, using available TF-DNA affinity measurements for the Max TF, we demonstrate that TFFMs constructed from ChIP-seq data correlate with published experimentally measured DNA-binding affinities. Finally, TFFMs allow for the straightforward computation of an integrated TF occupancy score across a sequence. These results demonstrate the capacity of TFFMs to accurately model DNA-protein interactions, while providing a single unified framework suitable for the next generation of TFBS prediction.", "", "We can determine the effects of many possible sequence variations in transcription factor binding sites using microarray binding experiments. Analysis of wild-type and mutant Zif268 (Egr1) zinc fingers bound to microarrays containing all possible central 3 bp triplet binding sites indicates that the nucleotides of transcription factor binding sites cannot be treated independently. This indicates that the current practice of characterizing transcription factor binding sites by mutating individual positions of binding sites one base pair at a time does not provide a true picture of the sequence specificity. Similarly, current bioinformatic practices using either just a consensus sequence, or even mononucleotide frequency weight matrices to provide more complete descriptions of transcription factor binding sites, are not accurate in depicting the true binding site specificities, since these methods rely upon the assumption that the nucleotides of binding sites exert independent effects on binding affinity. Our results stress the importance of complete reference tables of all possible binding sites for comparing protein binding preferences for various DNA sequences. 
We also show results suggesting that microarray binding data using particular subsets of all possible binding sites can be used to extrapolate the relative binding affinities of all possible full-length binding sites, given a known binding site for use as a starting sequence for site preference refinement.", "Motivation: Transcription factor (TF) ChIP-seq datasets have particular characteristics that provide unique challenges and opportunities for motif discovery. Most existing motif discovery algorithms do not scale well to such large datasets, or fail to report many motifs associated with cofactors of the ChIP-ed TF. Results: We present DREME, a motif discovery algorithm specifically designed to find the short, core DNA-binding motifs of eukaryotic TFs, and optimized to analyze very large ChIP-seq datasets in minutes. Using DREME, we discover the binding motifs of the the ChIP-ed TF and many cofactors in mouse ES cell (mESC), mouse erythrocyte and human cell line ChIP-seq datasets. For example, in mESC ChIP-seq data for the TF Esrrb, we discover the binding motifs for eight cofactor TFs important in the maintenance of pluripotency. Several other commonly used algorithms find at most two cofactor motifs in this same dataset. DREME can also perform discriminative motif discovery, and we use this feature to provide evidence that Sox2 and Oct4 do not bind in mES cells as an obligate heterodimer. DREME is much faster than many commonly used algorithms, scales linearly in dataset size, finds multiple, non-redundant motifs and reports a reliable measure of statistical significance for each motif found. DREME is available as part of the MEME Suite of motif-based sequence analysis tools (http: meme.nbcr.net).", "We describe a hierarchy of motif-based kernels for multiple alignments of biological sequences, particularly suitable to process regulatory regions of genes. 
The kernels incorporate progressively more information, with the most complex kernel accounting for a multiple alignment of orthologous regions, the phylogenetic tree relating the species, and the prior knowledge that relevant sequence patterns occur in conserved motif blocks. These kernels can be used in the presence of a library of known transcription factor binding sites, or de novo by iterating over all k-mers of a given length. In the latter mode, a discriminative classifier built from such a kernel not only recognizes a given class of promoter regions, but as a side effect simultaneously identifies a collection of relevant, discriminative sequence motifs. We demonstrate the utility of the motif-based multiple alignment kernels by using a collection of aligned promoter regions from five yeast species to recognize classes of cell-cycle regulated genes. Supplementary data is available at http: noble.gs.washington.edu proj pkernel.", "The prediction of regulatory elements is a problem where computational methods offer great hope. Over the past few years, numerous tools have become available for this task. The purpose of the current assessment is twofold: to provide some guidance to users regarding the accuracy of currently available tools in various settings, and to provide a benchmark of data sets for assessing future tools.", "", "", "Motivation: The sequence specificity of DNA-binding proteins is typically represented as a position weight matrix in which each base position contributes independently to relative affinity. Assessment of the accuracy and broad applicability of this representation has been limited by the lack of extensive DNA-binding data. However, new microarray techniques, in which preferences for all possible K-mers are measured, enable a broad comparison of both motif representation and methods for motif discovery. 
Here, we consider the problem of accounting for all of the binding data in such experiments, rather than the highest affinity binding data. We introduce the RankMotif++, an algorithm designed for finding motifs whenever sequences are associated with a semi-quantitative measure of protein-DNA-binding affinity. RankMotif++ learns motif models by maximizing the likelihood of a set of binding preferences under a probabilistic model of how sequence binding affinity translates into binding preference observations. Because RankMotif++ makes few assumptions about the relationship between binding affinity and the semi-quantitative readout, it is applicable to a wide variety of experimental assays of DNA-binding preference. Results: By several criteria, RankMotif++ predicts binding affinity better than two widely used motif finding algorithms (MDScan, MatrixREDUCE) or more recently developed algorithms (PREGO, Seed and Wobble), and its performance is comparable to a motif model that separately assigns affinities to 8-mers. Our results validate the PWM model and provide an approximation of the precision and recall that can be expected in a genomic scan. Availability: RankMotif++ is available upon request. Contact: quaid.morris@utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online.", "", "", "A major challenge in molecular biology is reverse-engineering the cis-regulatory logic that plays a major role in the control of gene expression. This program includes searching through DNA sequences to identify “motifs” that serve as the binding sites for transcription factors or, more generally, are predictive of gene expression across cellular conditions. Several approaches have been proposed for de novo motif discovery–searching sequences without prior knowledge of binding sites or nucleotide patterns. However, unbiased validation is not straightforward. 
We consider two approaches to unbiased validation of discovered motifs: testing the statistical significance of a motif using a DNA “background” sequence model to represent the null hypothesis and measuring performance in predicting membership in gene clusters. We demonstrate that the background models typically used are “too null,” resulting in overly optimistic assessments of significance, and argue that performance in predicting TF binding or expression patterns from DNA motifs should be assessed by held-out data, as in predictive learning. Applying this criterion to common motif discovery methods resulted in universally poor performance, although there is a marked improvement when motifs are statistically significant against real background sequences. Moreover, on synthetic data where “ground truth” is known, discriminative performance of all algorithms is far below the theoretical upper bound, with pronounced “over-fitting” in training. A key conclusion from this work is that the failure of de novo discovery approaches to accurately identify motifs is basically due to statistical intractability resulting from the fixed size of co-regulated gene clusters, and thus such failures do not necessarily provide evidence that unfound motifs are not active biologically. Consequently, the use of prior knowledge to enhance motif discovery is not just advantageous but necessary. An implementation of the LR and ALR algorithms is available at http: code.google.com p likelihood-ratio-motifs ." ] }
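The PWM model that both the probabilistic and word-based algorithms in this record ultimately produce can be made concrete with a short, generic sketch: estimate per-position log-odds scores from an ungapped alignment of binding sites, then slide the matrix along a sequence. The toy sites and uniform background below are illustrative assumptions, not any cited tool's implementation:

```python
import math

def pwm_from_sites(sites, pseudocount=0.5, background=0.25):
    # Position weight matrix: per-position log-odds of each base versus
    # a uniform background, estimated from aligned binding sites.
    width = len(sites[0])
    pwm = []
    for i in range(width):
        counts = {b: pseudocount for b in "ACGT"}
        for s in sites:
            counts[s[i]] += 1.0
        total = sum(counts.values())
        pwm.append({b: math.log((counts[b] / total) / background) for b in "ACGT"})
    return pwm

def best_hit(seq, pwm):
    # Scan: score every window of the sequence, return (offset, score).
    width = len(pwm)
    scores = [sum(pwm[i][seq[j + i]] for i in range(width))
              for j in range(len(seq) - width + 1)]
    j = max(range(len(scores)), key=scores.__getitem__)
    return j, scores[j]

sites = ["ACGT", "ACGT", "ACGA", "TCGT"]
pwm = pwm_from_sites(sites)
print(best_hit("GGACGTGG", pwm)[0])  # 2: the motif occurrence starts at offset 2
```

This independence-per-position assumption is exactly what the TFFM and dinucleotide models quoted above are designed to relax.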
1407.6082
1608374838
This paper proposes a new hierarchical MDL-based model for a joint detection and classification of multilingual text lines in images taken by hand-held cameras. The majority of related text detection methods assume alphabet-based writing in a single language, e.g. in Latin. They use simple clustering heuristics specific to such texts: proximity between letters within one line, larger distance between separate lines, etc. We are interested in a significantly more ambiguous problem where images combine alphabet and logographic characters from multiple languages and typographic rules vary a lot (e.g. English, Korean, and Chinese). Complexity of detecting and classifying text lines in multiple languages calls for a more principled approach based on information-theoretic principles. Our new MDL model includes data costs combining geometric errors with classification likelihoods and a hierarchical sparsity term based on label costs. This energy model can be efficiently minimized by fusion moves. We demonstrate robustness of the proposed algorithm on a large new database of multilingual text images collected in the public transit system of Seoul.
There are three major groups of text candidate detection methods: sliding window @cite_12 @cite_27 @cite_11 @cite_6 @cite_13 @cite_17 , edge based @cite_3 @cite_2 @cite_31 , and color based @cite_26 @cite_9 algorithms.
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_17", "@cite_6", "@cite_3", "@cite_27", "@cite_2", "@cite_31", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2061802763", "", "2112586637", "2123979834", "", "2131163834", "2060560731", "2132985810", "", "2049951199", "2076014259" ], "abstract": [ "An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.", "", "Video text carries meaningful contextual information and semantic clues for visual content understanding. In this paper, we propose a novel hybrid algorithm to fast detect video texts even under complex backgrounds. 
We first use an SVM classifier trained by our new StrOke unIt Connection (SOIC) operator to identify seed stroke units. Stroke shape distributions, instead of color or texture features, are extracted and trained in our method. Then the stroke units are tracked and extended into their surroundings to form text lines, obeying seed stroke geometric constraints. Experimental results show that our approach is color and language independent, and robust to video illuminations.", "We propose to use text recognition to aid in visual object class recognition. To this end we first propose a new algorithm for text detection in natural images. The proposed text detection is based on saliency cues and a context fusion step. The algorithm does not need any parameter tuning and can deal with varying imaging conditions. We evaluate three different tasks: 1. Scene text recognition, where we increase the state-of-the-art by 0.17 on the ICDAR 2003 dataset. 2. Saliency based object recognition, where we outperform other state-of-the-art saliency methods for object recognition on the PASCAL VOC 2011 dataset. 3. Object recognition with the aid of recognized text, where we are the first to report multi-modal results on the IMET set. Results show that text helps for object class recognition if the text is not uniquely coupled to individual object instances.", "", "Text detection and localization in natural scene images is important for content-based image analysis. This problem is challenging due to the complex background, the non-uniform illumination, the variations of text font, size and line orientation. In this paper, we present a hybrid approach to robustly detect and localize texts in natural scene images. A text region detector is designed to estimate the text existing confidence and scale information in image pyramid, which help segment candidate text components by local binarization. 
To efficiently filter out the non-text components, a conditional random field (CRF) model considering unary component properties and binary contextual component relationships with supervised parameter learning is proposed. Finally, text components are grouped into text lines words with a learning-based energy minimization method. Since all the three stages are learning-based, there are very few parameters requiring manual tuning. Experimental results evaluated on the ICDAR 2005 competition dataset show that our approach yields higher precision and recall performance compared with state-of-the-art methods. We also evaluated our approach on a multilingual image dataset with promising results.", "We present a fast automatic text detection algorithm devised for a mobile augmented reality (AR) translation system on a mobile phone. In this application, scene text must be detected, recognized, and translated into a desired language, and then the translation is displayed overlaid properly on the real-world scene. In order to offer a fast automatic text detector, we focused our initial search to find a single letter. Detecting one letter provides useful information that is processed with efficient rules to quickly find the reminder of a word. This approach allows for detecting all the contiguous text regions in an image quickly. We also present a method that exploits the redundancy of the information contained in the video stream to remove false alarms. Our experimental results quantify the accuracy and efficiency of the algorithm and show the strengths and weaknesses of the method as well as its speed (about 160 ms on a recent generation smartphone, not optimized). The algorithm is well suited for real-time, real-world applications.", "In this paper, we propose an efficient text detection method based on the Laplacian operator. The maximum gradient difference value is computed for each pixel in the Laplacian-filtered image. 
K-means is then used to classify all the pixels into two clusters: text and non-text. For each candidate text region, the corresponding region in the Sobel edge map of the input image undergoes projection profile analysis to determine the boundary of the text blocks. Finally, we employ empirical rules to eliminate false positives based on geometrical properties. Experimental results show that the proposed method is able to detect text of different fonts, contrast and backgrounds. Moreover, it outperforms three existing methods in terms of detection and false positive rates.", "", "Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections from the image. We build a Conditional Random Field model on these detections to jointly model the strength of the detections and the interactions between them. We impose top-down cues obtained from a lexicon-based prior, i.e. language statistics, on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracies on two challenging public datasets, namely Street View Text (over 15 ) and ICDAR 2003 (nearly 10 ).", "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. 
In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system." ] }
1407.6082
1608374838
This paper proposes a new hierarchical MDL-based model for a joint detection and classification of multilingual text lines in images taken by hand-held cameras. The majority of related text detection methods assume alphabet-based writing in a single language, e.g. in Latin. They use simple clustering heuristics specific to such texts: proximity between letters within one line, larger distance between separate lines, etc. We are interested in a significantly more ambiguous problem where images combine alphabet and logographic characters from multiple languages and typographic rules vary a lot (e.g. English, Korean, and Chinese). Complexity of detecting and classifying text lines in multiple languages calls for a more principled approach based on information-theoretic principles. Our new MDL model includes data costs combining geometric errors with classification likelihoods and a hierarchical sparsity term based on label costs. This energy model can be efficiently minimized by fusion moves. We demonstrate robustness of the proposed algorithm on a large new database of multilingual text images collected in the public transit system of Seoul.
Edge-based methods retrieve an edge map (Sobel, Canny, Laplacian), perform connected component (CC) analysis, and output blobs. The stroke width transform (SWT) @cite_18 @cite_23 likewise aims to find blobs with a consistent stroke width. Color-based methods, such as MSER and ER (inspired by MSER), assume that a text character's color is homogeneous. MSER was used by the winner of the ICDAR 2011 Robust Reading Competition.
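The CC analysis step shared by these edge- and color-based detectors can be sketched as a flood fill over a binary map. The following is a minimal illustrative labeler, not the implementation of any cited method; in practice one would use an optimized routine such as OpenCV's `connectedComponents`.

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected components in a binary map (list of lists of 0/1).

    Returns a label map (0 = background, 1..n = blob ids) and the blob count n.
    """
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                current += 1          # start a new blob and flood-fill it
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

Each resulting blob is a text-character candidate to be filtered in the later classification stage.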
{ "cite_N": [ "@cite_18", "@cite_23" ], "mid": [ "2142159465", "1972065312" ], "abstract": [ "We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.", "With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes." ] }
1407.6082
1608374838
This paper proposes a new hierarchical MDL-based model for a joint detection and classification of multilingual text lines in images taken by hand-held cameras. The majority of related text detection methods assume alphabet-based writing in a single language, e.g. in Latin. They use simple clustering heuristics specific to such texts: proximity between letters within one line, larger distance between separate lines, etc. We are interested in a significantly more ambiguous problem where images combine alphabet and logographic characters from multiple languages and typographic rules vary a lot (e.g. English, Korean, and Chinese). Complexity of detecting and classifying text lines in multiple languages calls for a more principled approach based on information-theoretic principles. Our new MDL model includes data costs combining geometric errors with classification likelihoods and a hierarchical sparsity term based on label costs. This energy model can be efficiently minimized by fusion moves. We demonstrate robustness of the proposed algorithm on a large new database of multilingual text images collected in the public transit system of Seoul.
After text candidates are detected, non-text blobs must be filtered out. Whether a blob represents text is decided by classification. Popular classifiers are the support vector machine (SVM) @cite_2 @cite_9 @cite_14 , AdaBoost @cite_7 @cite_32 , or their cascades. Popular features for classification are color-based (histogram of intensities, moments of intensity), edge-based (histogram of oriented gradients (HOG), Gabor filters), and geometric (width, height, aspect ratio, number of holes, convex hull, area of background/foreground).
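The geometric features listed above are cheap to compute directly from a blob's pixel set. The sketch below shows a few of them (width, height, aspect ratio, and bounding-box fill ratio); it is an illustrative helper, not taken from any of the cited classifiers, and the outputs would normally be concatenated with color and edge features before being fed to an SVM or AdaBoost model.

```python
def blob_features(pixels):
    """Simple geometric features for a blob given as a set of (x, y) pixels."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return {
        "width": width,
        "height": height,
        "aspect_ratio": width / height,
        # foreground area relative to the bounding box area
        "fill_ratio": len(pixels) / (width * height),
    }
```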
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_32", "@cite_2" ], "mid": [ "2035058553", "2017278151", "", "2131447359", "2060560731" ], "abstract": [ "Text detection and recognition in real images taken in unconstrained environments, such as street view images, remain surprisingly challenging in Computer Vision.", "Detecting text regions in natural scenes is an important part of computer vision. We propose a novel text detection algorithm that extracts six different classes features of text, and uses Modest AdaBoost with multi-scale sequential search. Experiments show that our algorithm can detect text regions with a f= 0.70, from the ICDAR 2003 datasets which include images with text of various fonts, sizes, colors, alphabets and scripts.", "", "Recognition of text in natural scene images is becoming a prominent research area due to the widespread availablity of imaging devices in low-cost consumer products like mobile phones. To evaluate the performance of recent algorithms in detecting and recognizing text from complex images, the ICDAR 2011 Robust Reading Competition was organized. Challenge 2 of the competition dealt specifically with detecting recognizing text in natural scene images. This paper presents an overview of the approaches that the participants used, the evaluation measure, and the dataset used in the Challenge 2 of the contest. We also report the performance of all participating methods for text localization and word recognition tasks and compare their results using standard methods of area precision recall and edit distance.", "We present a fast automatic text detection algorithm devised for a mobile augmented reality (AR) translation system on a mobile phone. In this application, scene text must be detected, recognized, and translated into a desired language, and then the translation is displayed overlaid properly on the real-world scene. 
In order to offer a fast automatic text detector, we focused our initial search to find a single letter. Detecting one letter provides useful information that is processed with efficient rules to quickly find the reminder of a word. This approach allows for detecting all the contiguous text regions in an image quickly. We also present a method that exploits the redundancy of the information contained in the video stream to remove false alarms. Our experimental results quantify the accuracy and efficiency of the algorithm and show the strengths and weaknesses of the method as well as its speed (about 160 ms on a recent generation smartphone, not optimized). The algorithm is well suited for real-time, real-world applications." ] }
1407.6082
1608374838
This paper proposes a new hierarchical MDL-based model for a joint detection and classification of multilingual text lines in images taken by hand-held cameras. The majority of related text detection methods assume alphabet-based writing in a single language, e.g. in Latin. They use simple clustering heuristics specific to such texts: proximity between letters within one line, larger distance between separate lines, etc. We are interested in a significantly more ambiguous problem where images combine alphabet and logographic characters from multiple languages and typographic rules vary a lot (e.g. English, Korean, and Chinese). Complexity of detecting and classifying text lines in multiple languages calls for a more principled approach based on information-theoretic principles. Our new MDL model includes data costs combining geometric errors with classification likelihoods and a hierarchical sparsity term based on label costs. This energy model can be efficiently minimized by fusion moves. We demonstrate robustness of the proposed algorithm on a large new database of multilingual text images collected in the public transit system of Seoul.
Individual text blobs must then be aggregated into text lines. Older approaches are based on the Hough transform @cite_16 @cite_34 . More recent algorithms combine neighbouring blobs into pairs and then perform clustering in an N-dimensional space whose dimensions include stroke width, pair orientation, and the geometric size of the blobs.
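One simple way to realize such pairwise grouping is to link two blobs when their heights are similar, their pair orientation is near-horizontal, and their horizontal gap is small relative to their size, then take connected components of the link graph as text lines. The sketch below does exactly that with union-find; the thresholds are illustrative defaults, not values from any cited method.

```python
import math

def group_into_lines(blobs, max_height_ratio=1.5,
                     max_angle=math.pi / 8, max_gap=2.0):
    """Greedy text-line grouping.

    blobs: list of dicts with 'cx', 'cy' (centroid) and 'h' (height).
    Returns a list of lines, each a list of blob indices.
    """
    n = len(blobs)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            a, b = blobs[i], blobs[j]
            hr = max(a["h"], b["h"]) / min(a["h"], b["h"])
            angle = abs(math.atan2(b["cy"] - a["cy"], b["cx"] - a["cx"]))
            angle = min(angle, math.pi - angle)  # orientation, not direction
            gap = abs(b["cx"] - a["cx"]) / max(a["h"], b["h"])
            if hr <= max_height_ratio and angle <= max_angle and gap <= max_gap:
                parent[find(i)] = find(j)       # link the pair

    lines = {}
    for i in range(n):
        lines.setdefault(find(i), []).append(i)
    return list(lines.values())
```

Blobs linked only transitively (e.g. the first and last characters of a long word) still end up in the same line, which is the point of clustering over pairs rather than thresholding all pairs directly.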
{ "cite_N": [ "@cite_16", "@cite_34" ], "mid": [ "2146248127", "1805701149" ], "abstract": [ "In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle the text in different sizes, orientations, color distributions and backgrounds. We use affine rectification to recover deformation of the text regions caused by an inappropriate camera view angle. The procedure can significantly improve text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features from an intensity image directly. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs as input from a camera, and translate the recognized text into English.", "We propose a new method for extracting character strings which have various directions and sizes. In our method, we regard a character string as a rectangular region crowded with short segments. Then two kinds of the segment crowdedness (local segment crowdedness and global segment crowdedness) are introduced to extract the rectangular regions (i.e. strings). The method was applied to many images which involve strings in various directions and sizes. The result shows that almost all strings were correctly extracted." ] }
1407.6082
1608374838
This paper proposes a new hierarchical MDL-based model for a joint detection and classification of multilingual text lines in images taken by hand-held cameras. The majority of related text detection methods assume alphabet-based writing in a single language, e.g. in Latin. They use simple clustering heuristics specific to such texts: proximity between letters within one line, larger distance between separate lines, etc. We are interested in a significantly more ambiguous problem where images combine alphabet and logographic characters from multiple languages and typographic rules vary a lot (e.g. English, Korean, and Chinese). Complexity of detecting and classifying text lines in multiple languages calls for a more principled approach based on information-theoretic principles. Our new MDL model includes data costs combining geometric errors with classification likelihoods and a hierarchical sparsity term based on label costs. This energy model can be efficiently minimized by fusion moves. We demonstrate robustness of the proposed algorithm on a large new database of multilingual text images collected in the public transit system of Seoul.
Text candidate filtering and text line detection can also be performed jointly by more complex approaches based on a Markov random field (MRF) @cite_12 @cite_13 or a conditional random field (CRF) @cite_4 , as well as by algorithms based on minimal spanning trees.
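In the minimal-spanning-tree variant, an MST is built over blob centroids and unusually long tree edges are cut to separate text lines. A minimal Kruskal sketch of the tree-building step (cutting is omitted) is shown below; it is illustrative, not the algorithm of any cited paper.

```python
def mst_edges(points):
    """Kruskal's MST over 2D points with Euclidean edge weights.

    Returns the tree as a list of (i, j, weight) tuples.
    """
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            edges.append((((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5, i, j))
    edges.sort()

    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j, w))
    return tree
```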
{ "cite_N": [ "@cite_4", "@cite_13", "@cite_12" ], "mid": [ "2108193766", "", "2049951199" ], "abstract": [ "This paper proposes a novel hybrid method to robustly and accurately localize texts in natural scene images. A text region detector is designed to generate a text confidence map, based on which text components can be segmented by local binarization approach. A Conditional Random Field (CRF) model, considering the unary component property as well as binary neighboring component relationship, is then presented to label components as \"text\" or \"non-text\". Last, text components are grouped into text lines with an energy minimization approach. Experimental results show that the proposed method gives promising performance comparing with the existing methods on ICDAR 2003 competition dataset.", "", "Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections from the image. We build a Conditional Random Field model on these detections to jointly model the strength of the detections and the interactions between them. We impose top-down cues obtained from a lexicon-based prior, i.e. language statistics, on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracies on two challenging public datasets, namely Street View Text (over 15 ) and ICDAR 2003 (nearly 10 )." ] }
1407.6178
2039800402
Let @math be a directed graph. A in @math is a maximal vertex set @math with @math such that for each pair of distinct vertices @math , there exist two vertex-disjoint paths from @math to @math and two vertex-disjoint paths from @math to @math in @math . In contrast to the @math -vertex-connected components of @math , the subgraphs induced by the @math -directed blocks may consist of few or no edges. In this paper we present two algorithms for computing the @math -directed blocks of @math in @math time, where @math is the number of the strong articulation points of @math and @math is the number of the strong bridges of @math . Furthermore, we study two related concepts: the @math -strong blocks and the @math -edge blocks of @math . We give two algorithms for computing the @math -strong blocks of @math in @math time and we show that the @math -edge blocks of @math can be computed in @math time. In this paper we also study some optimization problems related to the strong articulation points and the @math -blocks of a directed graph. Given a strongly connected graph @math , find a minimum cardinality set @math such that @math is strongly connected and the strong articulation points of @math coincide with the strong articulation points of @math . This problem is called minimum strongly connected spanning subgraph with the same strong articulation points. We show that there is a linear time @math approximation algorithm for this NP-hard problem. We also consider the problem of finding a minimum strongly connected spanning subgraph with the same @math -blocks in a strongly connected graph @math . We present approximation algorithms for three versions of this problem, depending on the type of @math -blocks.
In independent work, Georgiadis, Italiano, Laura, and Parotsidis @cite_20 have studied @math -edge blocks and have given linear time algorithms for finding them. This is better than our results in Section .
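The defining relation behind 2-edge blocks (two edge-disjoint paths in each direction between a vertex pair) can be checked directly with unit-capacity max-flow. The sketch below is a simple quadratic illustration of the relation itself, not the linear-time algorithm of the cited work.

```python
from collections import deque

def edge_disjoint_paths(adj, s, t):
    """Number of edge-disjoint s->t paths (unit-capacity max flow, BFS augmentation)."""
    cap = {}
    for u, vs in adj.items():
        for v in vs:
            cap[(u, v)] = cap.get((u, v), 0) + 1
            cap.setdefault((v, u), 0)       # residual arc
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        if t not in parent:
            return flow
        v = t                               # push one unit along the path found
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def two_edge_connected(adj, u, v):
    """True iff u and v lie in the same 2-edge block of the digraph adj."""
    return (edge_disjoint_paths(adj, u, v) >= 2
            and edge_disjoint_paths(adj, v, u) >= 2)
```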
{ "cite_N": [ "@cite_20" ], "mid": [ "2953345679" ], "abstract": [ "Edge and vertex connectivity are fundamental concepts in graph theory. While they have been thoroughly studied in the case of undirected graphs, surprisingly not much has been investigated for directed graphs. In this paper we study @math -edge connectivity problems in directed graphs and, in particular, we consider the computation of the following natural relation: We say that two vertices @math and @math are @math -edge-connected if there are two edge-disjoint paths from @math to @math and two edge-disjoint paths from @math to @math . This relation partitions the vertices into blocks such that all vertices in the same block are @math -edge-connected. Differently from the undirected case, those blocks do not correspond to the @math -edge-connected components of the graph. We show how to compute this relation in linear time so that we can report in constant time if two vertices are @math -edge-connected. We also show how to compute in linear time a sparse certificate for this relation, i.e., a subgraph of the input graph that has @math edges and maintains the same @math -edge-connected blocks as the input graph." ] }
1407.6064
2952138063
Deep within the networks of distributed systems, one often finds anomalies that affect their efficiency and performance. These anomalies are difficult to detect because the distributed systems may not have sufficient sensors to monitor the flow of traffic within the interconnected nodes of the networks. Without early detection and making corrections, these anomalies may aggravate over time and could possibly cause disastrous outcomes in the system in the unforeseeable future. Using only coarse-grained information from the two end points of network flows, we propose a network transmission model and a localization algorithm, to detect the location of anomalies and rank them using a proposed metric within distributed systems. We evaluate our approach on passengers' records of an urbanized city's public transportation system and correlate our findings with passengers' postings on social media microblogs. Our experiments show that the metric derived using our localization algorithm gives a better ranking of anomalies as compared to standard deviation measures from statistical models. Our case studies also demonstrate that transportation events reported in social media microblogs matches the locations of our detect anomalies, suggesting that our algorithm performs well in locating the anomalies within distributed systems.
@cite_29 gave a comprehensive survey on the topic of anomaly detection for general scenarios. The survey defines an anomaly as a pattern that does not conform to expected normal behavior, while noting that the notion of expected normal behavior depends on the application domain and the type of input data. It gives a broad overview of the techniques used in anomaly detection: classification-based @cite_7 @cite_24 , nearest-neighbor @cite_5 , clustering-based @cite_8 , statistical (both parametric @cite_12 and non-parametric @cite_17 ), information-theoretic @cite_22 , and spectral anomaly detection techniques @cite_11 . Although our network transmission model can be classified under these methods, the second portion of our paper, which measures the impact of anomalous data, has not been explored before. The kind of input data we use for anomaly detection is also unique and contains inherent difficulties, which our network transmission model addresses.
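As a concrete instance of the statistical parametric family from the survey, the sketch below fits a single Gaussian and flags points by z-score. It is a minimal illustration of that family only, not our transmission model, and the threshold is an assumed default.

```python
def zscore_anomalies(samples, threshold=3.0):
    """Flag indices whose z-score under a fitted Gaussian exceeds the threshold."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = var ** 0.5 or 1.0   # guard against a degenerate, constant sample
    return [i for i, x in enumerate(samples) if abs(x - mean) / std > threshold]
```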
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_7", "@cite_8", "@cite_29", "@cite_24", "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "1695597346", "2139104465", "", "", "2122646361", "", "1966147156", "1543388142", "2134641333" ], "abstract": [ "The formation of secure transportation corridors, where cargoes and shipments from points of entry can be dispatched safely to highly sensitive and secure locations, is a high national priority. One of the key tasks of the program is the detection of anomalous cargo based on sensor readings in truck weigh stations. Due to the high variability, dimensionality, and or noise content of sensor data in transportation corridors, appropriate feature representation is crucial to the success of anomaly detection methods in this domain. In this paper, we empirically investigate the usefulness of manifold embedding methods for feature representation in anomaly detection problems in the domain of transportation corridors. We focus on both linear methods, such as multi-dimensional scaling (MDS), as well as nonlinear methods, such as locally linear embedding (LLE) and isometric feature mapping (ISOMAP). Our study indicates that such embedding methods provide a natural mechanism for keeping anomalous points away from the dense normal regions in the embedding of the data. We illustrate the efficacy of manifold embedding methods for anomaly detection through experiments on simulated data as well as real truck data from weigh stations.", "Identifying atypical objects is one of the traditional topics in machine learning. Recently, novel approaches, e.g., Minority Detection and One-class clustering, have explored further to identify clusters of atypical objects which strongly contrast from the rest of the data in terms of their distribution or density. This paper analyzes such tasks from an information theoretic perspective. 
Based on Information Bottleneck formalization, these tasks interpret to increasing the averaged atypicalness of the clusters while reducing the complexity of the clustering. This formalization yields a unifying view of the new approaches as well as the classic outlier detection. We also present a scalable minimization algorithm which exploits the localized form of the cost function over individual clusters. The proposed algorithm is evaluated using simulated datasets and a text classification benchmark, in comparison with an existing method.", "", "", "Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. 
We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.", "", "Efficiently detecting outliers or anomalies is an important problem in many areas of science, medicine and information technology. Applications range from data cleaning to clinical diagnosis, from detecting anomalous defects in materials to fraud and intrusion detection. Over the past decade, researchers in data mining and statistics have addressed the problem of outlier detection using both parametric and non-parametric approaches in a centralized setting. However, there are still several challenges that must be addressed. First, most approaches to date have focused on detecting outliers in a continuous attribute space. However, almost all real-world data sets contain a mixture of categorical and continuous attributes. Categorical attributes are typically ignored or incorrectly modeled by existing approaches, resulting in a significant loss of information. Second, there have not been any general-purpose distributed outlier detection algorithms. Most distributed detection algorithms are designed with a specific domain (e.g. sensor networks) in mind. Third, the data sets being analyzed may be streaming or otherwise dynamic in nature. Such data sets are prone to concept drift, and models of the data must be dynamic as well. To address these challenges, we present a tunable algorithm for distributed outlier detection in dynamic mixed-attribute data sets.", "", "Network intrusion detection is the problem of detecting anomalous network connections caused by intrusive activities. Many intrusion detection systems proposed before use both normal and intrusion data to build their classifiers. However, intrusion data are usually scarce and difficult to collect. 
We propose to solve this problem using a novelty detection approach. In particular, we propose to take a nonparametric density estimation approach based on Parzen-window estimators with Gaussian kernels to build an intrusion detection system using normal data only. To facilitate comparison, we have tested our system on the KDD Cup 1999 dataset. Our system compares favorably with the KDD Cup winner which is based on an ensemble of decision trees with bagged boosting, as our system uses no intrusion data at all and much less normal data for training." ] }
1407.6064
2952138063
Deep within the networks of distributed systems, one often finds anomalies that affect their efficiency and performance. These anomalies are difficult to detect because the distributed systems may not have sufficient sensors to monitor the flow of traffic within the interconnected nodes of the networks. Without early detection and making corrections, these anomalies may aggravate over time and could possibly cause disastrous outcomes in the system in the unforeseeable future. Using only coarse-grained information from the two end points of network flows, we propose a network transmission model and a localization algorithm, to detect the location of anomalies and rank them using a proposed metric within distributed systems. We evaluate our approach on passengers' records of an urbanized city's public transportation system and correlate our findings with passengers' postings on social media microblogs. Our experiments show that the metric derived using our localization algorithm gives a better ranking of anomalies as compared to standard deviation measures from statistical models. Our case studies also demonstrate that transportation events reported in social media microblogs matches the locations of our detect anomalies, suggesting that our algorithm performs well in locating the anomalies within distributed systems.
@cite_20 addressed the detection of two broad classes of anomalies: work flow execution anomalies and execution low-performance anomalies. Our focus on non-critical anomalies is similar to their definition of execution low-performance anomalies. Their detection method is based on text analysis of logs generated by parallel frameworks such as Hadoop. In contrast, we do not require logs, which are usually difficult to obtain from distributed systems.
{ "cite_N": [ "@cite_20" ], "mid": [ "2102632804" ], "abstract": [ "Detection of execution anomalies is very important for the maintenance, development, and performance refinement of large scale distributed systems. Execution anomalies include both work flow errors and low performance problems. People often use system logs produced by distributed systems for troubleshooting and problem diagnosis. However, manually inspecting system logs to detect anomalies is unfeasible due to the increasing scale and complexity of distributed systems. Therefore, there is a great demand for automatic anomalies detection techniques based on log analysis. In this paper, we propose an unstructured log analysis technique for anomalies detection. In the technique, we propose a novel algorithm to convert free form text messages in log files to log keys without heavily relying on application specific knowledge. The log keys correspond to the log-print statements in the source code which can provide cues of system execution behavior. After converting log messages to log keys, we learn a Finite State Automaton (FSA) from training log sequences to present the normal work flow for each system component. At the same time, a performance measurement model is learned to characterize the normal execution performance based on the log mes-sages’ timing information. With these learned models, we can automatically detect anomalies in newly input log files. Experiments on Hadoop and SILK show that the technique can effectively detect running anomalies." ] }
1407.6064
2952138063
Deep within the networks of distributed systems, one often finds anomalies that affect their efficiency and performance. These anomalies are difficult to detect because the distributed systems may not have sufficient sensors to monitor the flow of traffic within the interconnected nodes of the networks. Without early detection and making corrections, these anomalies may aggravate over time and could possibly cause disastrous outcomes in the system in the unforeseeable future. Using only coarse-grained information from the two end points of network flows, we propose a network transmission model and a localization algorithm, to detect the location of anomalies and rank them using a proposed metric within distributed systems. We evaluate our approach on passengers' records of an urbanized city's public transportation system and correlate our findings with passengers' postings on social media microblogs. Our experiments show that the metric derived using our localization algorithm gives a better ranking of anomalies as compared to standard deviation measures from statistical models. Our case studies also demonstrate that transportation events reported in social media microblogs matches the locations of our detect anomalies, suggesting that our algorithm performs well in locating the anomalies within distributed systems.
@cite_11 proposed a manifold-embedding-based method for detecting anomalies in transportation. Their algorithm takes in high-dimensional feature vectors and reduces them to low-dimensional representations for more efficient anomaly detection. In contrast, the network data we examine in this research consists of low-dimensional, coarse-grained information about network flows in distributed systems. We use this low-dimensional information to reconstruct the high-dimensional information and thereby obtain a better representation of the network flows.
{ "cite_N": [ "@cite_11" ], "mid": [ "1695597346" ], "abstract": [ "The formation of secure transportation corridors, where cargoes and shipments from points of entry can be dispatched safely to highly sensitive and secure locations, is a high national priority. One of the key tasks of the program is the detection of anomalous cargo based on sensor readings in truck weigh stations. Due to the high variability, dimensionality, and or noise content of sensor data in transportation corridors, appropriate feature representation is crucial to the success of anomaly detection methods in this domain. In this paper, we empirically investigate the usefulness of manifold embedding methods for feature representation in anomaly detection problems in the domain of transportation corridors. We focus on both linear methods, such as multi-dimensional scaling (MDS), as well as nonlinear methods, such as locally linear embedding (LLE) and isometric feature mapping (ISOMAP). Our study indicates that such embedding methods provide a natural mechanism for keeping anomalous points away from the dense normal regions in the embedding of the data. We illustrate the efficacy of manifold embedding methods for anomaly detection through experiments on simulated data as well as real truck data from weigh stations." ] }
1407.6064
2952138063
Deep within the networks of distributed systems, one often finds anomalies that affect their efficiency and performance. These anomalies are difficult to detect because the distributed systems may not have sufficient sensors to monitor the flow of traffic within the interconnected nodes of the networks. Without early detection and correction, these anomalies may worsen over time and could cause disastrous outcomes in the system in the future. Using only coarse-grained information from the two end points of network flows, we propose a network transmission model and a localization algorithm to detect the location of anomalies and rank them, using a proposed metric, within distributed systems. We evaluate our approach on passenger records of an urbanized city's public transportation system and correlate our findings with passengers' postings on social media microblogs. Our experiments show that the metric derived using our localization algorithm gives a better ranking of anomalies than standard-deviation measures from statistical models. Our case studies also demonstrate that transportation events reported in social media microblogs match the locations of our detected anomalies, suggesting that our algorithm performs well in locating anomalies within distributed systems.
@cite_13 used data from multiple sensors monitoring different variables of the road conditions. The multiple sensors provided a high-dimensional, multi-view set of readings that required manifold embedding @cite_11 before the data points could be clustered into two clusters, of which the smaller was taken as the set of anomalous data points. However, their approach requires detailed data, such as readings from multiple sensors. Our algorithm, in contrast, is suitable for distributed systems that do not have access to many sensors.
{ "cite_N": [ "@cite_13", "@cite_11" ], "mid": [ "2097759626", "1695597346" ], "abstract": [ "We focus on detecting anomalous events in transportation systems. In transportation systems, other than normal road situation, anomalous events happen once in a while such as traffic accidents, ambulance car passing, harsh weather conditions, etc. Identifying the anomalous traffic events is essential because the events can lead to critical conditions where immediate investigation and recovery may be necessary. We propose an anomaly detection method for transportation systems where we create a police report automatically after detecting anomalies. Unlike the traditional police report, in this case, some quantitative analysis shall be done as well to provide experts with an advanced, precise and professional description of the anomalous event. For instance, we can provide the moment, the location as well as how severe the accident occurs in the upstream and downstream routes. We present an anomaly detection approach based on view association given multiple feature views on the transportation data if the views are more or less independent from each other. For each single view, anomalies are detected based on a manifold learning and hierarchical clustering procedures and anomalies from different views are associated and detected as anomalies with high confidence. We study two well-known ITS datasets which include the data from Mobile Century project and the PeMS dataset, and we evaluate the proposed method by comparing the automatically generated report and real report from police during the related period.", "The formation of secure transportation corridors, where cargoes and shipments from points of entry can be dispatched safely to highly sensitive and secure locations, is a high national priority. One of the key tasks of the program is the detection of anomalous cargo based on sensor readings in truck weigh stations. 
Due to the high variability, dimensionality, and or noise content of sensor data in transportation corridors, appropriate feature representation is crucial to the success of anomaly detection methods in this domain. In this paper, we empirically investigate the usefulness of manifold embedding methods for feature representation in anomaly detection problems in the domain of transportation corridors. We focus on both linear methods, such as multi-dimensional scaling (MDS), as well as nonlinear methods, such as locally linear embedding (LLE) and isometric feature mapping (ISOMAP). Our study indicates that such embedding methods provide a natural mechanism for keeping anomalous points away from the dense normal regions in the embedding of the data. We illustrate the efficacy of manifold embedding methods for anomaly detection through experiments on simulated data as well as real truck data from weigh stations." ] }
1407.6064
2952138063
Deep within the networks of distributed systems, one often finds anomalies that affect their efficiency and performance. These anomalies are difficult to detect because the distributed systems may not have sufficient sensors to monitor the flow of traffic within the interconnected nodes of the networks. Without early detection and correction, these anomalies may worsen over time and could cause disastrous outcomes in the system in the future. Using only coarse-grained information from the two end points of network flows, we propose a network transmission model and a localization algorithm to detect the location of anomalies and rank them, using a proposed metric, within distributed systems. We evaluate our approach on passenger records of an urbanized city's public transportation system and correlate our findings with passengers' postings on social media microblogs. Our experiments show that the metric derived using our localization algorithm gives a better ranking of anomalies than standard-deviation measures from statistical models. Our case studies also demonstrate that transportation events reported in social media microblogs match the locations of our detected anomalies, suggesting that our algorithm performs well in locating anomalies within distributed systems.
@cite_23 address the problem of detecting and describing traffic anomalies using crowd sensing with Beijing taxis' GPS data and social media data from China's Weibo, a microblogging service that resembles Twitter but caters to the Chinese-speaking population. An anomaly here refers to a deviation in traffic volume on segments of road during special events. The proposed detection algorithm is straightforward because GPS data is available at regular, fine-grained time intervals; their focus is on two special indexing data structures that improve the algorithm's efficiency.
{ "cite_N": [ "@cite_23" ], "mid": [ "2164920786" ], "abstract": [ "Smart card transactions capture rich information of human mobility and urban dynamics, therefore are of particular interest to urban planners and location-based service providers. However, since most transaction systems are only designated for billing purpose, typically, fine-grained location information, such as the exact boarding and alighting stops of a bus trip, is only partially or not available at all, which blocks deep exploitation of this rich and valuable data at individual level. This paper presents a \"space alignment\" framework to reconstruct individual mobility history from a large-scale smart card transaction dataset pertaining to a metropolitan city. Specifically, we show that by delicately aligning the monetary space and geospatial space with the temporal space, we are able to extrapolate a series of critical domain specific constraints. Later, these constraints are naturally incorporated into a semi-supervised conditional random field to infer the exact boarding and alighting stops of all transit routes with a surprisingly high accuracy, e.g., given only 10 trips with known alighting boarding stops, we successfully inferred more than 78 alighting and boarding stops from all unlabeled trips. In addition, we demonstrated that the smart card data enriched by the proposed approach dramatically improved the performance of a conventional method for identifying users' home and work places (with 88 improvement on home detection and 35 improvement on work place detection). The proposed method offers the possibility to mine individual mobility from common public transit transactions, and showcases how uncertain data can be leveraged with domain knowledge and constraints, to support cross-application data mining tasks." ] }
1407.6064
2952138063
Deep within the networks of distributed systems, one often finds anomalies that affect their efficiency and performance. These anomalies are difficult to detect because the distributed systems may not have sufficient sensors to monitor the flow of traffic within the interconnected nodes of the networks. Without early detection and correction, these anomalies may worsen over time and could cause disastrous outcomes in the system in the future. Using only coarse-grained information from the two end points of network flows, we propose a network transmission model and a localization algorithm to detect the location of anomalies and rank them, using a proposed metric, within distributed systems. We evaluate our approach on passenger records of an urbanized city's public transportation system and correlate our findings with passengers' postings on social media microblogs. Our experiments show that the metric derived using our localization algorithm gives a better ranking of anomalies than standard-deviation measures from statistical models. Our case studies also demonstrate that transportation events reported in social media microblogs match the locations of our detected anomalies, suggesting that our algorithm performs well in locating anomalies within distributed systems.
@cite_2 @cite_1 @cite_6 analyzed taxi GPS data to detect drivers who overcharge their passengers by deliberately taking a longer route to the destination. The general idea for finding these anomalous routes is to compare the route taken between each pickup and destination pair against the usual routes and measure how much it deviates from them.
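The deviation idea above can be sketched in a few lines. This is a hedged illustration, not the cited papers' algorithms (which use isolation-based scoring and Dempster-Shafer evidence combination): routes between the same pickup and destination pair are modeled as sets of road-grid cells, and a route is flagged when its best overlap with the usual routes is small. All names, cell IDs, and the threshold are illustrative assumptions.

```python
# Hedged sketch of route-deviation scoring for taxi fraud detection.
# Routes are sets of grid-cell IDs; higher deviation = more anomalous.

def jaccard(a, b):
    """Overlap between two routes given as sets of grid cells."""
    return len(a & b) / len(a | b)

def deviation_score(route, historical_routes):
    """1 minus the best overlap with any usual route for this O-D pair."""
    return 1.0 - max(jaccard(route, h) for h in historical_routes)

def is_anomalous(route, historical_routes, threshold=0.5):
    """Flag routes that deviate strongly from every usual route."""
    return deviation_score(route, historical_routes) > threshold

# Toy example: three usual routes and one detour between the same endpoints.
usual = [{1, 2, 3, 4, 9}, {1, 2, 3, 5, 9}, {1, 2, 4, 5, 9}]
detour = {1, 6, 7, 8, 9}  # shares only the endpoint cells with usual routes
print(deviation_score({1, 2, 3, 4, 9}, usual))  # 0.0 -> a usual route
print(is_anomalous(detour, usual))              # True -> flagged as detour
```

In practice the overlap measure would operate on ordered trajectories (e.g. edit distance on road-segment sequences) rather than unordered cell sets, but the flag-by-deviation structure is the same.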
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_2" ], "mid": [ "2136975357", "", "2168884627" ], "abstract": [ "GPS-equipped taxis can be viewed as pervasive sensors and the large-scale digital traces produced allow us to reveal many hidden \"facts\" about the city dynamics and human behaviors. In this paper, we aim to discover anomalous driving patterns from taxi's GPS traces, targeting applications like automatically detecting taxi driving frauds or road network change in modern cites. To achieve the objective, firstly we group all the taxi trajectories crossing the same source destination cell-pair and represent each taxi trajectory as a sequence of symbols. Secondly, we propose an Isolation-Based Anomalous Trajectory (iBAT) detection method and verify with large scale taxi data that iBAT achieves remarkable performance (AUC>0.99, over 90 detection rate at false alarm rate of less than 2 ). Finally, we demonstrate the potential of iBAT in enabling innovative applications by using it for taxi driving fraud detection and road network change detection.", "", "Advances in GPS tracking technology have enabled us to install GPS tracking devices in city taxis to collect a large amount of GPS traces under operational time constraints. These GPS traces provide unparallel opportunities for us to uncover taxi driving fraud activities. In this paper, we develop a taxi driving fraud detection system, which is able to systematically investigate taxi driving fraud. In this system, we first provide functions to find two aspects of evidences: travel route evidence and driving distance evidence. Furthermore, a third function is designed to combine the two aspects of evidences based on Dempster-Shafer theory. To implement the system, we first identify interesting sites from a large amount of taxi GPS logs. Then, we propose a parameter-free method to mine the travel route evidences. 
Also, we introduce route mark to represent a typical driving path from an interesting site to another one. Based on route mark, we exploit a generative statistical model to characterize the distribution of driving distance and identify the driving distance evidences. Finally, we evaluate the taxi driving fraud detection system with large scale real-world taxi GPS logs. In the experiments, we uncover some regularity of driving fraud activities and investigate the motivation of drivers to commit a driving fraud by analyzing the produced taxi fraud data." ] }
1407.5380
2950781478
As a contribution to the challenge of building game-playing AI systems, we develop and analyse a formal language for representing and reasoning about strategies. Our logical language builds on the existing general Game Description Language (GDL) and extends it by a standard modality for linear time along with two dual connectives to express preferences when combining strategies. The semantics of the language is provided by a standard state-transition model. As such, problems that require reasoning about games can be solved by the standard methods for reasoning about actions and change. We also endow the language with a specific semantics by which strategy formulas are understood as move recommendations for a player. To illustrate how our formalism supports automated reasoning about strategies, we demonstrate two example methods of implementation : first, we formalise the semantic interpretation of our language in conjunction with game rules and strategy rules in the Situation Calculus; second, we show how the reasoning problem can be solved with Answer Set Programming.
Another approach to strategy representation and reasoning is to treat a strategy as a program so that PDL-style program connectives can be used to combine strategies @cite_13 @cite_2 @cite_8 . van Benthem proposed a logical framework, named Temporal Forcing Logic (TFL), with a modality @math , meaning that ``player @math applies strategy @math , against any play of the others, to force the game to a state in which @math holds'', where a strategy can be defined as any PDL program. A similar proposal can also be found in @cite_2 . Such an intuitive analogue between programs and strategies provides a close approximation for strategy representation; nevertheless, a strategy differs essentially from a program and requires specific means of composition and reasoning, as we have shown in the previous sections.
{ "cite_N": [ "@cite_8", "@cite_13", "@cite_2" ], "mid": [ "1535051955", "2136794369", "197077171" ], "abstract": [ "In open systems verification, to formally check for reliability, one needs an appropriate formalism to model the interaction between open entities and express that the system is correct no matter how the environment behaves. An important contribution in this context is given by the modal logics for strategic ability, in the setting of multi-agent games, such as Atl, Atl (^ * ), and the like. Recently, Chatterjee, Henzinger, and Piterman introduced Strategy Logic, which we denote here by CHP-Sl, with the aim of getting a powerful framework for reasoning explicitly about strategies. CHP-Sl is obtained by using first-order quantifications over strategies and it has been investigated in the specific setting of two-agents turned-based game structures where a non-elementary model-checking algorithm has been provided. While CHP-Sl is a very expressive logic, we claim that it does not fully capture the strategic aspects of multi-agent systems. In this work, we introduce and study a more general strategy logic, denoted Sl, for reasoning about strategies in multi-agent concurrent systems. We prove that Sl strictly includes CHP-Sl, while maintaining a decidable model-checking problem. Indeed, we show that it is 2ExpTime-complete under a reasonable semantics, thus not harder than that for Atl (^ * ) and a remarkable improvement of the same problem for CHP-Sl. We also consider the satisfiability problem and show that it is undecidable already for the sub-logic CHP-Sl under the concurrent game semantics.", "The author discusses games of both perfect and imperfect information at two levels of structural detail: players' local actions, and their global powers for determining outcomes of the game. Matching logical languages are proposed for both. 
In particular, at the \"action level\", imperfect information games naturally model a combined \"dynamic-epistemic language\"--and correspondences are found between special axioms in this language and particular modes of playing games with their information dynamics. At the \"outcome level\", the paper presents suitable notions of game equivalence, and some simple representation results. Copyright 2001 by Blackwell Publishing Ltd and the Board of Trustees of the Bulletin of Economic Research", "We consider a prepositional dynamic logic whose programs are regular expressions over game - strategy pairs. At the atomic level, these are finite extensive form game trees with structured strategy specifications, whereby a player's strategy may depend on properties of the opponent's strategy. The advantage of imposing structure not merely on games or on strategies but on game - strategy pairs, is that we can speak of a composite game g followed by g′ whereby if the opponent played a strategy s in g, the player responds with s′ in g′ to ensure a certain outcome. In the presence of iteration, a player has significant ability to strategise taking into account the explicit structure of games. We present a complete axiomatization of the logic and prove its decidability. The tools used combine techniques from PDL, CTL and game logics." ] }
1407.5754
2952347824
The (MAP) assignment for general structure Markov random fields (MRFs) is computationally intractable. In this paper, we exploit tree-based methods to efficiently address this problem. Our novel method, named Tree-based Iterated Local Search (T-ILS) takes advantage of the tractability of tree-structures embedded within MRFs to derive strong local search in an ILS framework. The method efficiently explores exponentially large neighborhood and does so with limited memory without any requirement on the cost functions. We evaluate the T-ILS in a simulation of Ising model and two real-world problems in computer vision: stereo matching, image denoising. Experimental results demonstrate that our methods are competitive against state-of-the-art rivals with a significant computational gain.
The MAP assignment for Markov random fields (MRFs), viewed as a combinatorial search problem, has attracted a great amount of research over the past several decades, especially in computer vision @cite_20 and probabilistic artificial intelligence @cite_29 . The problem is known to be NP-hard @cite_15 . For example, when labeling an image of size @math , the search space contains @math configurations, where @math is the number of possible labels per pixel.
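The combinatorial blow-up stated above is easy to check directly: each of the M x N pixels independently takes one of K labels, so the joint assignments multiply. The sizes below are illustrative only.

```python
# Arithmetic check of the MAP search-space size K**(M*N):
# K labels per pixel, M x N pixels, every pixel choice multiplies
# the number of joint label assignments.
def search_space(K, M, N):
    return K ** (M * N)

print(search_space(2, 4, 4))    # binary labels, 4x4 image: 65536 states
print(search_space(3, 10, 10))  # ~5.15e47 states already for a 10x10 image
```

Even a tiny 10x10 image with 3 labels is far beyond exhaustive enumeration, which is why the heuristic and approximate methods surveyed below are needed.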
{ "cite_N": [ "@cite_29", "@cite_20", "@cite_15" ], "mid": [ "2159080219", "", "2130178369" ], "abstract": [ "From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete andaccessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty—and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition—in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. 
The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.", "", "Given a probabilistic world model, an important problem is to find the maximum a-posteriori probability (MAP) instantiation of all the random variables given the evidence. Numerous researchers using such models employ some graph representation for the distributions, such as a Bayesian belief network. This representation simplifies the complexity of specifying the distributions from exponential in n, the number of variables in the model, to linear in n, in many interesting cases. We show, however, that finding the MAP is NP-hard in the general case when these representations are used, even if the size of the representation happens to be linear in n. Furthermore, minor modifications to the proof show that the problem remains NP-hard for various restrictions of the topology of the graphs. The same technique can be applied to the results of a related paper (by Cooper), to further restrict belief network topology in the proof that probabilistic inference is NP-hard." ] }
1407.5754
2952347824
The (MAP) assignment for general structure Markov random fields (MRFs) is computationally intractable. In this paper, we exploit tree-based methods to efficiently address this problem. Our novel method, named Tree-based Iterated Local Search (T-ILS) takes advantage of the tractability of tree-structures embedded within MRFs to derive strong local search in an ILS framework. The method efficiently explores exponentially large neighborhood and does so with limited memory without any requirement on the cost functions. We evaluate the T-ILS in a simulation of Ising model and two real-world problems in computer vision: stereo matching, image denoising. Experimental results demonstrate that our methods are competitive against state-of-the-art rivals with a significant computational gain.
Techniques for solving the MAP assignment can be broadly classified into stochastic and deterministic classes. In the early days, the first stochastic algorithms were based on simulated annealing (SA) @cite_31 . The first application of SA to Markov random fields (MRFs) with provable convergence was perhaps the work of @cite_18 . The main drawback of this method is slow convergence toward good solutions @cite_61 . Nature-inspired algorithms were also popular, especially the family of genetic algorithms @cite_60 @cite_40 @cite_45 @cite_19 @cite_3 . Attempts using ant colony optimization and tabu search have also been made @cite_17 @cite_14 .
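The SA family discussed above can be sketched minimally on a 2D Ising model, the benchmark used in the paper's own experiments. This is an illustration only, not a reproduction of any cited algorithm; the Metropolis acceptance rule is standard, but the geometric cooling schedule and all parameter values are assumptions.

```python
# Minimal simulated-annealing sketch for MAP assignment in a 2D Ising
# model with energy E = -sum of s_i * s_j over 4-neighbor pairs.
import math
import random

def ising_energy(spins):
    """Total energy over all horizontal and vertical neighbor pairs."""
    n, m = len(spins), len(spins[0])
    e = 0
    for i in range(n):
        for j in range(m):
            if i + 1 < n:
                e -= spins[i][j] * spins[i + 1][j]
            if j + 1 < m:
                e -= spins[i][j] * spins[i][j + 1]
    return e

def flip_delta(spins, i, j):
    """Energy change from flipping spin (i, j), computed in O(1)."""
    n, m = len(spins), len(spins[0])
    nb = sum(spins[a][b]
             for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
             if 0 <= a < n and 0 <= b < m)
    return 2 * spins[i][j] * nb

def anneal(spins, t0=2.0, cooling=0.999, steps=20000, seed=0):
    rng = random.Random(seed)
    t = t0
    n, m = len(spins), len(spins[0])
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(m)
        d = flip_delta(spins, i, j)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-d / t), which shrinks as t cools.
        if d <= 0 or rng.random() < math.exp(-d / t):
            spins[i][j] = -spins[i][j]
        t *= cooling
    return spins

random.seed(1)
grid = [[random.choice([-1, 1]) for _ in range(8)] for _ in range(8)]
e0 = ising_energy(grid)
anneal(grid)
print(ising_energy(grid) < e0)  # annealing reaches a lower-energy labeling
```

The slow-convergence drawback noted above shows up here directly: a provably convergent schedule must cool logarithmically, so practical geometric schedules like this one trade the convergence guarantee for speed.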
{ "cite_N": [ "@cite_61", "@cite_18", "@cite_14", "@cite_60", "@cite_3", "@cite_19", "@cite_40", "@cite_45", "@cite_31", "@cite_17" ], "mid": [ "2107884096", "2020999234", "", "", "", "2158987937", "2070002167", "2104371080", "2024060531", "1521026312" ], "abstract": [ "Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods graph cuts, LBP, and tree-reweighted message passing in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http: vision.middlebury.edu MRF .", "We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. 
Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.",
A Markov random field (MRF) model is used which is robust to degradation. Since this is computationally intensive, a hierarchical distributed genetic algorithm (HDGA) is used which is unsupervised and parallel. Experimental results show that the proposed method is effective at segmenting real images.", "Many vision problems have been formulated as energy minimization problems and there have been significant advances in energy minimization algorithms. The most widely-used energy minimization algorithms include graph cuts, belief propagation and tree-reweighted message passing. Although they have obtained good results, they are still unsatisfactory when it comes to more difficult MRF problems such as non-submodular energy functions, highly connected MRFs, and high-order clique potentials. There have also been other approaches, known as stochastic sampling-based algorithms, which include simulated annealing, Markov chain Monte Carlo and population based Markov chain Monte Carlo. They are applicable to any general energy models but they are usually slower than deterministic methods. In this paper, we propose new algorithms which elegantly combine stochastic and deterministic methods. Sampling-based methods are boosted by deterministic methods so that they can rapidly move to lower energy states and easily jump over energy barriers. In different point of view, the sampling-based method prevents deterministic methods from getting stuck at local minima. Consequently, a combination of both approaches substantially increases the quality of the solutions. We present a thorough analysis of the proposed methods in synthetic MRF problems by controlling the hardness of the problems. 
We also demonstrate experimental results for the photomontage problem which is the most difficult one among the standard MRF benchmark problems.", "There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.", "In this paper, we propose a novel method for image segmentation that we call ACS-MRF method. ACS-MRF is a hybrid ant colony system coupled with a local search. We show how a colony of cooperating ants are able to estimate the labels field and minimize the MAP estimate. Cooperation between ants is performed by exchanging information through pheromone updating. The obtained results show the efficiency of the new algorithm, which is able to compete with other stochastic optimization methods like Simulated annealing and Genetic algorithm in terms of solution quality." ] }
1407.5754
2952347824
The maximum a posteriori (MAP) assignment for general-structure Markov random fields (MRFs) is computationally intractable. In this paper, we exploit tree-based methods to efficiently address this problem. Our novel method, named Tree-based Iterated Local Search (T-ILS), takes advantage of the tractability of tree structures embedded within MRFs to derive a strong local search in an ILS framework. The method efficiently explores an exponentially large neighborhood and does so with limited memory and without any requirement on the cost functions. We evaluate T-ILS in a simulation of the Ising model and on two real-world problems in computer vision: stereo matching and image denoising. Experimental results demonstrate that our method is competitive against state-of-the-art rivals with a significant computational gain.
Another powerful class of algorithms in computer vision is graph cuts @cite_48 @cite_61 . They are, nevertheless, designed with specific cost functions in mind (i.e. and ) @cite_7 , and are therefore inapplicable to generic cost functions such as those resulting from learning. Research in graph cuts remains an active area in computer vision @cite_59 @cite_12 @cite_30 @cite_37 @cite_49 @cite_65 . Interestingly, it has recently been proved that graph cuts are in fact equivalent to loopy BP @cite_56 .
{ "cite_N": [ "@cite_61", "@cite_30", "@cite_37", "@cite_7", "@cite_48", "@cite_65", "@cite_56", "@cite_59", "@cite_49", "@cite_12" ], "mid": [ "2107884096", "", "", "", "2143516773", "2162366888", "2403968605", "", "2952853970", "2113137767" ], "abstract": [ "Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods: graph cuts, LBP, and tree-reweighted message passing, in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/ .", "", "", "", "Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. 
The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy.", "Markov Random Fields (MRFs) are ubiquitous in low-level computer vision. In this paper, we propose a new approach to the optimization of multi-labeled MRFs. Similarly to a-expansion it is based on iterative application of binary graph cut. However, the number of binary graph cuts required to compute a labelling grows only logarithmically with the size of label space, instead of linearly. We demonstrate that for applications such as optical flow, image restoration, and high resolution stereo, this gives an order of magnitude speed-up, for comparable energies. Iterations are performed by \"fusion\" of solutions, done with QPBO which is related to graph cut but can deal with non-submodularity. 
At convergence, the method achieves optima on a par with the best competitors, and sometimes even exceeds them.", "The maximum a posteriori (MAP) configuration of binary variable models with sub-modular graph-structured energy functions can be found efficiently and exactly by graph cuts. Max-product belief propagation (MP) has been shown to be suboptimal on this class of energy functions by a canonical counterexample where MP converges to a suboptimal fixed point (Kulesza & Pereira, 2008). In this work, we show that under a particular scheduling and damping scheme, MP is equivalent to graph cuts, and thus optimal. We explain the apparent contradiction by showing that with proper scheduling and damping, MP always converges to an optimal fixed point. Thus, the canonical counterexample only shows the suboptimality of MP with a particular suboptimal choice of schedule and damping. With proper choices, MP is optimal.", "", "We consider the task of obtaining the maximum a posteriori estimate of discrete pairwise random fields with arbitrary unary potentials and semimetric pairwise potentials. For this problem, we propose an accurate hierarchical move making strategy where each move is computed efficiently by solving an st-MINCUT problem. Unlike previous move making approaches, e.g. the widely used a-expansion algorithm, our method obtains the guarantees of the standard linear programming (LP) relaxation for the important special case of metric labeling. Unlike the existing LP relaxation solvers, e.g. interior-point algorithms or tree-reweighted message passing, our method is significantly faster as it uses only the efficient st-MINCUT algorithm in its design. Using both synthetic and real data experiments, we show that our technique outperforms several commonly used algorithms.", "Minimum cut maximum flow algorithms on graphs have emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. 
The combinatorial optimization literature provides many min-cut max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut max flow algorithms for applications in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-Tarjan style \"push-relabel\" methods and algorithms based on Ford-Fulkerson style \"augmenting paths.\" We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and segmentation. In many cases, our new algorithm works several times faster than any of the other methods, making near real-time performance possible. An implementation of our max-flow min-cut algorithm is available upon request for research purposes." ] }
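The tree tractability that the T-ILS abstract above relies on (BP is exact on trees, in contrast to the loopy case discussed in the related work) can be shown with a minimal min-sum dynamic program on a chain MRF. This is an illustrative sketch, not code from any cited work; the cost tables in the usage example are made-up toy values.

```python
import numpy as np

def map_chain(unary, pairwise):
    """Exact MAP for a chain MRF by min-sum (max-product in -log space) DP.

    unary:    (n, k) array, unary[i, x] = cost of node i taking label x.
    pairwise: (k, k) array, pairwise[x, y] = cost of adjacent labels (x, y).
    Returns the minimizing label sequence and its total energy.
    """
    n, k = unary.shape
    msg = np.zeros((n, k))                 # msg[i, x]: best cost of prefix 0..i ending in label x
    back = np.zeros((n, k), dtype=int)     # backpointers for decoding
    msg[0] = unary[0]
    for i in range(1, n):
        # cost of extending every predecessor label to every current label
        cand = msg[i - 1][:, None] + pairwise      # shape (k, k)
        back[i] = cand.argmin(axis=0)
        msg[i] = cand.min(axis=0) + unary[i]
    labels = np.zeros(n, dtype=int)
    labels[-1] = msg[-1].argmin()
    for i in range(n - 1, 0, -1):          # trace back the optimal sequence
        labels[i - 1] = back[i, labels[i]]
    return labels, msg[-1].min()
```

For a 3-node binary chain with Potts pairwise cost 2 and unaries `[[0,1],[1,0],[0,1]]`, the exact MAP is the all-zeros labeling with energy 1.0: the middle node's unary preference is overridden by the smoothness term, exactly the behavior the benchmark problems above probe.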
1407.5754
2952347824
The maximum a posteriori (MAP) assignment for general-structure Markov random fields (MRFs) is computationally intractable. In this paper, we exploit tree-based methods to efficiently address this problem. Our novel method, named Tree-based Iterated Local Search (T-ILS), takes advantage of the tractability of tree structures embedded within MRFs to derive a strong local search in an ILS framework. The method efficiently explores an exponentially large neighborhood and does so with limited memory and without any requirement on the cost functions. We evaluate T-ILS in a simulation of the Ising model and on two real-world problems in computer vision: stereo matching and image denoising. Experimental results demonstrate that our method is competitive against state-of-the-art rivals with a significant computational gain.
It is fair to say that the deterministic approach has become dominant due to its performance and theoretical guarantees for certain classes of problems @cite_2 . However, the problem remains unsolved in general settings, which motivates this paper. Our approach is both deterministic and heuristic in nature. It relies on the deterministic method of BP, which is efficient on trees. The local search is strong because it updates a significant number of sites at once, rather than just one as in other local search methods such as ICM @cite_23 . The neighborhood size in our method is very large @cite_24 . For typical image labeling problems, the size is @math for an image of height @math and width @math and label size of @math . A standard local search such as ICM explores only a neighborhood of size @math at a time. Once a strong local minimum is found, a stochastic procedure based on iterated local search @cite_42 is applied to escape from it and explore a better local minimum.
{ "cite_N": [ "@cite_24", "@cite_42", "@cite_23", "@cite_2" ], "mid": [ "2137849348", "1592150368", "1554544485", "2952604113" ], "abstract": [ "Many optimization problems of practical interest are computationally intractable. Therefore, a practical approach for solving such problems is to employ heuristic (approximation) algorithms that can find nearly optimal solutions within a reasonable amount of computation time. An improvement algorithm is a heuristic algorithm that generally starts with a feasible solution and iteratively tries to obtain a better solution. Neighborhood search algorithms (alternatively called local search algorithms) are a wide class of improvement algorithms where at each iteration an improving solution is found by searching the \"neighborhood\" of the current solution. A critical issue in the design of a neighborhood search algorithm is the choice of the neighborhood structure, that is, the manner in which the neighborhood is defined. As a rule of thumb, the larger the neighborhood, the better is the quality of the locally optimal solutions, and the greater is the accuracy of the final solution that is obtained. At the same time, the larger the neighborhood, the longer it takes to search the neighborhood at each iteration. For this reason, a larger neighborhood does not necessarily produce a more effective heuristic unless one can search the larger neighborhood in a very efficient manner. This paper concentrates on neighborhood search algorithms where the size of the neighborhood is \"very large\" with respect to the size of the input data and in which the neighborhood is searched in an efficient manner. 
We survey three broad classes of very large-scale neighborhood search (VLSN) algorithms: (1) variable-depth methods in which large neighborhoods are searched heuristically, (2) large neighborhoods in which the neighborhoods are searched using network flow techniques or dynamic programming, and (3) large neighborhoods induced by restrictions of the original problem that are solvable in polynomial time.", "Iterated Local Search has many of the desirable features of a metaheuristic: it is simple, easy to implement, robust, and highly effective. The essential idea of Iterated Local Search lies in focusing the search not on the full space of solutions but on a smaller subspace defined by the solutions that are locally optimal for a given optimization engine. The success of Iterated Local Search lies in the biased sampling of this set of local optima. How effective this approach turns out to be depends mainly on the choice of the local search, the perturbations, and the acceptance criterion. So far, in spite of its conceptual simplicity, it has led to a number of state-of-the-art results without the use of too much problem-specific knowledge. But with further work so that the different modules are well adapted to the problem at hand, Iterated Local Search can often become a competitive or even state-of-the-art algorithm. The purpose of this review is both to give a detailed description of this metaheuristic and to show where it stands in terms of performance.", "may 7th, 1986, Professor A. F. M. Smith in the Chair] SUMMARY A continuous two-dimensional region is partitioned into a fine rectangular array of sites or \"pixels\", each pixel having a particular \"colour\" belonging to a prescribed finite set. The true colouring of the region is unknown but, associated with each pixel, there is a possibly multivariate record which conveys imperfect information about its colour according to a known statistical model. 
The aim is to reconstruct the true scene, with the additional knowledge that pixels close together tend to have the same or similar colours. In this paper, it is assumed that the local characteristics of the true scene can be represented by a nondegenerate Markov random field. Such information can be combined with the records by Bayes' theorem and the true scene can be estimated according to standard criteria. However, the computational burden is enormous and the reconstruction may reflect undesirable largescale properties of the random field. Thus, a simple, iterative method of reconstruction is proposed, which does not depend on these large-scale characteristics. The method is illustrated by computer simulations in which the original scene is not directly related to the assumed random field. Some complications, including parameter estimation, are discussed. Potential applications are mentioned briefly.", "published an influential study in 2006 on energy minimization methods for Markov Random Fields (MRF). This study provided valuable insights in choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved changed significantly. Specifically, the models today often include higher order interactions, flexible connectivity structures, large la -bel-spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of 32 state-of-the-art optimization techniques on a corpus of 2,453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of for the types of models they studied. 
However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types." ] }
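The contrast drawn in the related work above, between ICM's size-1 neighborhood and escaping local minima via an iterated-local-search loop, can be sketched on a toy grid Potts model. This is a hypothetical illustration, not the paper's T-ILS (which uses tree-structured BP moves instead of ICM); `beta`, `restarts`, and `flip_frac` are arbitrary toy parameters.

```python
import numpy as np

def icm(unary, beta, labels, iters=50):
    """Iterated Conditional Modes on a 4-connected grid Potts model.
    unary: (H, W, K) float costs; beta: penalty for disagreeing neighbors.
    Greedily re-labels one site at a time until no site changes."""
    H, W, K = unary.shape
    for _ in range(iters):
        changed = False
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        costs += beta * (np.arange(K) != labels[ni, nj])
                new = int(costs.argmin())
                if new != labels[i, j]:
                    labels[i, j] = new
                    changed = True
        if not changed:
            break
    return labels

def energy(unary, beta, labels):
    """Total Potts energy: unary terms plus beta per disagreeing neighbor pair."""
    H, W, K = unary.shape
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    e += beta * (labels[1:, :] != labels[:-1, :]).sum()
    e += beta * (labels[:, 1:] != labels[:, :-1]).sum()
    return e

def iterated_local_search(unary, beta, restarts=10, flip_frac=0.2, seed=0):
    """ILS skeleton: run ICM to a local minimum, perturb a random fraction
    of sites, re-run ICM, and keep the perturbed solution only if it improves."""
    rng = np.random.default_rng(seed)
    H, W, K = unary.shape
    best = icm(unary, beta, unary.argmin(axis=2).astype(int))
    best_e = energy(unary, beta, best)
    for _ in range(restarts):
        cand = best.copy()
        mask = rng.random((H, W)) < flip_frac       # perturbation step
        cand[mask] = rng.integers(0, K, mask.sum())
        cand = icm(unary, beta, cand)               # local search step
        e = energy(unary, beta, cand)
        if e < best_e:                              # acceptance criterion
            best, best_e = cand, e
    return best, best_e
```

The acceptance criterion here is the simplest "improvements only" rule; the ILS review cited above discusses richer choices for the perturbation and acceptance modules.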
1407.5754
2952347824
The maximum a posteriori (MAP) assignment for general-structure Markov random fields (MRFs) is computationally intractable. In this paper, we exploit tree-based methods to efficiently address this problem. Our novel method, named Tree-based Iterated Local Search (T-ILS), takes advantage of the tractability of tree structures embedded within MRFs to derive a strong local search in an ILS framework. The method efficiently explores an exponentially large neighborhood and does so with limited memory and without any requirement on the cost functions. We evaluate T-ILS in a simulation of the Ising model and on two real-world problems in computer vision: stereo matching and image denoising. Experimental results demonstrate that our method is competitive against state-of-the-art rivals with a significant computational gain.
The idea of exploiting trees in MRFs for image analysis is not entirely new. Early work used a spanning tree to approximate the entire MRF @cite_57 @cite_4 . This method is efficient, but approximation quality may suffer because a tree has far fewer edges than the original MRF. Another approach is to build a hierarchical MRF with multiple resolutions @cite_41 , but this is less applicable to flat image labeling problems. Our method differs from these efforts in three ways. First, we use trees embedded in the original graph rather than building an approximate tree. Second, our trees are conditional -- each tree is defined on the values of its leaves. Third, trees are selected on the fly during the search process.
{ "cite_N": [ "@cite_57", "@cite_41", "@cite_4" ], "mid": [ "2163166770", "2139549194", "" ], "abstract": [ "A method is presented to approximate optimally an n -dimensional discrete probability distribution by a product of second-order distributions, or the distribution of the first-order tree dependence. The problem is to find an optimum set of n - 1 first order dependence relationship among the n variables. It is shown that the procedure derived in this paper yields an approximation of a minimum difference in information. It is further shown that when this procedure is applied to empirical observations from an unknown distribution of tree dependence, the procedure is the maximum-likelihood estimate of the distribution.", "Reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts-in particular, making ties to topics such as wavelets and multigrid methods. A third goal is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principle focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework including models for self-similar and 1 f processes. We also illustrate how these methods have been used in practice.", "" ] }
1407.4989
2952878841
There has been a surge of interest in community detection in homogeneous single-relational networks, which contain only one type of node and edge. However, many real-world systems are naturally described as heterogeneous multi-relational networks, which contain multiple types of nodes and edges. In this paper, we propose a new method for detecting communities in such networks. Our method is based on optimizing the composite modularity, a new modularity measure proposed for evaluating partitions of a heterogeneous multi-relational network into communities. Our method is parameter-free, scalable, and suitable for various networks with general structure. We demonstrate that it outperforms state-of-the-art techniques in detecting pre-planted communities in synthetic networks. Applied to a real-world Digg network, it successfully detects meaningful communities.
Optimizing modularity has been proved NP-hard @cite_47 . Researchers have developed various heuristic optimization algorithms @cite_42 @cite_10 @cite_48 @cite_7 @cite_29 @cite_11 @cite_0 @cite_52 @cite_38 @cite_36 @cite_1 @cite_15 @cite_27 . In particular, the simulated annealing algorithm @cite_11 is the most accurate (in terms of the modularity score) @cite_18 . However, it requires a long time to complete and is only suitable for small-scale networks. On the other hand, the label propagation algorithm @cite_36 , which requires only near-linear time, is perhaps the fastest. However, it tends to get stuck in poor local optima @cite_1 . In practice, the Louvain algorithm @cite_7 is widely used, since it strikes a good balance between accuracy and speed.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_7", "@cite_15", "@cite_36", "@cite_48", "@cite_29", "@cite_42", "@cite_1", "@cite_52", "@cite_0", "@cite_27", "@cite_47", "@cite_10", "@cite_11" ], "mid": [ "1971844861", "", "2131681506", "1964333868", "2011843642", "2015953751", "1985625141", "2089458547", "2017588197", "", "", "2037098808", "2621737021", "2047940964", "1982322675" ], "abstract": [ "We have recently introduced a multistep extension of the greedy algorithm for modularity optimization. The extension is based on the idea that merging l pairs of communities (l>1) at each iteration prevents premature condensation into few large communities. Here, an empirical formula is presented for the choice of the step width l that generates partitions with (close to) optimal modularity for 17 real-world and 1100 computer-generated networks. Furthermore, an in-depth analysis of the communities of two real-world networks (the metabolic network of the bacterium E. coli and the graph of coappearing words in the titles of papers coauthored by Martin Karplus) provides evidence that the partition obtained by the multistep greedy algorithm is superior to the one generated by the original greedy algorithm not only with respect to modularity, but also according to objective criteria. In other words, the multistep extension of the greedy algorithm reduces the danger of getting trapped in local optima of modularity and generates more reasonable partitions.", "", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. 
This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.", "Network model recently becomes a popular tool for studying complex systems. Detecting meaningful communities in complex networks, as an important task in network modeling and analysis, has attracted great interests in various research areas. This paper proposes a genetic algorithm with a special encoding schema for community detection in complex networks. The algorithm employs a metric, named modularity Q as the fitness function and applies a special locus-based adjacency encoding schema to represent the community partitions. The encoding schema enables the algorithm to determine the number of communities adaptively and automatically, which provides great flexibility to the detection process. In addition, the schema also significantly reduces the search space. Extensive experiments demonstrate the effectiveness of the proposed algorithm.", "We investigate the recently proposed label-propagation algorithm (LPA) for identifying network communities. We reformulate the LPA as an equivalent optimization problem, giving an objective function whose maxima correspond to community solutions. By considering properties of the objective function, we identify conceptual and practical drawbacks of the label propagation approach, most importantly the disparity between increasing the value of the objective function and improving the quality of communities found. To address the drawbacks, we modify the objective function in the optimization problem, producing a variety of algorithms that propagate labels subject to constraints; of particular interest is a variant that maximizes the modularity measure of community quality. Performance properties and implementation details of the proposed algorithms are discussed. 
Bipartite as well as unipartite networks are considered. PACS numbers: 89.75.Hc", "We consider the problem of detecting communities or modules in networks, groups of vertices with a higher-than-average density of edges connecting them. Previous work indicates that a robust approach to this problem is the maximization of the benefit function known as \"modularity\" over possible divisions of a network. Here we show that this maximization process can be written in terms of the eigenspectrum of a matrix we call the modularity matrix, which plays a role in community detection similar to that played by the graph Laplacian in graph partitioning calculations. This result leads us to a number of possible algorithms for detecting community structure, as well as several other results, including a spectral measure of bipartite structure in networks and a new centrality measure that identifies those vertices that occupy central positions within the communities to which they belong. The algorithms and measures proposed are illustrated with applications to a variety of real-world complex networks.", "The description of the structure of complex networks has been one of the focus of attention of the physicist’s community in the recent years. The levels of description range from the microscopic (degree, clustering coefficient, centrality measures, etc., of individual nodes) to the macroscopic description in terms of statistical properties of the whole network (degree distribution, total clustering coefficient, degree-degree correlations, etc.) [1, 2, 3, 4]. Between these two extremes there is a ”mesoscopic” description of networks that tries to explain its community structure. 
The general notion of community structure in complex networks was first pointed out in the physics literature by Girvan and Newman [5], and refers to the fact that nodes in many real networks appear to group in subgraphs in which the density of internal connections is larger than the connections with the rest of nodes in the network. The community structure has been empirically found in many real technological, biological and social networks [6, 7, 8, 9, 10] and its emergence seems to be at the heart of the network formation process [11]. The existing methods intended to devise the community structure in complex networks have been recently reviewed in [10]. All these methods require a definition of community that imposes the limit up to which a group should be considered a community. However, the concept of community itself is qualitative: nodes must be more connected within its community than with the rest of the network, and its quantification is still a subject of debate. Some quantitative definitions that came from sociology have been used in recent studies [12], but in general, the physics community has widely accepted a recent measure for the community structure based on the concept of modularity Q introduced by Newman and Girvan [13]:", "Many networks display community structure---groups of vertices within which connections are dense but between which they are sparser---and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. 
We give several example applications, including one to a collaboration network of more than 50 000 physicists.", "A modularity-specialized label propagation algorithm (LPAm) for detecting network communities was recently proposed. This promising algorithm offers some desirable qualities. However, LPAm favors community divisions where all communities are similar in total degree and thus it is prone to get stuck in poor local maxima in the modularity space. To escape local maxima, we employ a multistep greedy agglomerative algorithm (MSG) that can merge multiple pairs of communities at a time. Combining LPAm and MSG, we propose an advanced modularity-specialized label propagation algorithm (LPAm+). Experiments show that LPAm+ successfully detects communities with higher modularity values than ever reported in two commonly used real-world networks. Moreover, LPAm+ offers a fair compromise between accuracy and speed.", "", "", "In this paper, we propose a multi-layer ant-based algorithm (MABA), which detects communities from networks by means of locally optimizing modularity using individual ants. The basic version of MABA, namely SABA, combines a self-avoiding label propagation technique with a simulated annealing strategy for ant diffusion in networks. Once the communities are found by SABA, this method can be reapplied to a higher level network where each obtained community is regarded as a new vertex. The aforementioned process is repeated iteratively, and this corresponds to MABA. Thanks to the intrinsic multi-level nature of our algorithm, it possesses the potential ability to unfold multi-scale hierarchical structures. Furthermore, MABA has the ability that mitigates the resolution limit of modularity. The proposed MABA has been evaluated on both computer-generated benchmarks and widely used real-world networks, and has been compared with a set of competitive algorithms. 
Experimental results demonstrate that MABA is both effective and efficient (in near linear time with respect to the size of network) for discovering communities.", "Modularity is a recently introduced quality measure for graph clusterings. It has immediately received considerable attention in several disciplines, and in particular in the complex systems literature, although its properties are not well understood. We here present first results on the computational and analytical properties of modularity. The complexity status of modularity maximization is resolved showing that the corresponding decision version is NP-complete in the strong sense. We also give a formulation as an Integer Linear Program (ILP) to facilitate exact optimization, and provide results on the approximation factor of the commonly used greedy algorithm. Completing our investigation, we characterize clusterings with maximum modularity for several graph families.", "The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. 
We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.", "We present an analysis of community structure in networks based on the application of simulated annealing techniques. In this case we use as “cost function” the already introduced modularity Q (1), which is based on the relative number of links within a commune against the number of links that would correspond in case the links were distributed randomly. We compare the results of our approach against other methodologies based on betweenness analysis and show that in all cases a better community structure can be attained." ] }
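All of the heuristics surveyed above (greedy agglomeration, simulated annealing, label propagation, Louvain) score a partition with the same quantity, the Newman-Girvan modularity Q. A minimal sketch of the score itself, using a made-up toy graph of two triangles joined by a single edge:

```python
import numpy as np

def modularity(adj, comm):
    """Newman-Girvan modularity Q of a partition of an undirected graph.

    adj:  (n, n) symmetric 0/1 adjacency matrix.
    comm: length-n sequence of community labels.
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j).
    """
    adj = np.asarray(adj, dtype=float)
    comm = np.asarray(comm)
    two_m = adj.sum()                        # sum over both directions = 2m
    k = adj.sum(axis=1)                      # node degrees
    same = comm[:, None] == comm[None, :]    # delta(c_i, c_j)
    return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m
```

For the two-triangle graph partitioned into its triangles, Q = 5/14 ≈ 0.357, while merging everything into one community gives Q = 0; the heuristics above differ only in how they search the space of partitions for high Q.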
1407.4989
2952878841
There has been a surge of interest in community detection in homogeneous single-relational networks which contain only one type of nodes and edges. However, many real-world systems are naturally described as heterogeneous multi-relational networks which contain multiple types of nodes and edges. In this paper, we propose a new method for detecting communities in such networks. Our method is based on optimizing the composite modularity, which is a new modularity proposed for evaluating partitions of a heterogeneous multi-relational network into communities. Our method is parameter-free, scalable, and suitable for various networks with general structure. We demonstrate that it outperforms the state-of-the-art techniques in detecting pre-planted communities in synthetic networks. Applied to a real-world Digg network, it successfully detects meaningful communities.
A notable issue of modularity optimization is the resolution limit, which refers to the inability to detect small communities in large-scale networks @cite_53 @cite_22 . Researchers have tried to get around this issue by proposing variants of modularity. For example, one variant modifies modularity by adding a parameter that introduces a self-loop for each node @cite_33 . Reichardt and Bornholdt modified modularity by adding a parameter in front of the null model term @cite_40 . Both parameters can be used to control the resolution level and detect communities at multiple resolutions. However, a recent study by Lancichinetti and Fortunato demonstrated that these methods are intrinsically deficient and still suffer from the resolution limit @cite_23 .
{ "cite_N": [ "@cite_22", "@cite_33", "@cite_53", "@cite_40", "@cite_23" ], "mid": [ "2061099285", "", "2128366083", "2025543856", "2095189226" ], "abstract": [ "Although widely used in practice, the behavior and accuracy of the popular module identification technique called modularity maximization is not well understood in practical contexts. Here, we present a broad characterization of its performance in such situations. First, we revisit and clarify the resolution limit phenomenon for modularity maximization. Second, we show that the modularity function Q exhibits extreme degeneracies: it typically admits an exponential number of distinct high-scoring solutions and typically lacks a clear global maximum. Third, we derive the limiting behavior of the maximum modularity Q(max) for one model of infinitely modular networks, showing that it depends strongly both on the size of the network and on the number of modules it contains. Finally, using three real-world metabolic networks as examples, we show that the degenerate solutions can fundamentally disagree on many, but not all, partition properties such as the composition of the largest modules and the distribution of module sizes. These results imply that the output of any modularity maximization procedure should be interpreted cautiously in scientific contexts. They also explain why many heuristics are often successful at finding high-scoring partitions in practice and why different heuristics can disagree on the modular structure of the same network. We conclude by discussing avenues for mitigating some of these behaviors, such as combining information from many degenerate solutions or using generative models.", "", "Detecting community structure is fundamental for uncovering the links between structure and function in complex networks and for practical applications in many disciplines such as biology and sociology. 
A popular method now widely used relies on the optimization of a quantity called modularity, which is a quality index for a partition of a network into communities. We find that modularity optimization may fail to identify modules smaller than a scale which depends on the total size of the network and on the degree of interconnectedness of the modules, even in cases where modules are unambiguously defined. This finding is confirmed through several examples, both in artificial and in real social, biological, and technological networks, where we show that modularity optimization indeed does not resolve a large number of modules. A check of the modules obtained through modularity optimization is thus necessary, and we provide here key elements for the assessment of the reliability of this community detection method.", "Starting from a general ansatz, we show how community detection can be interpreted as finding the ground state of an infinite range spin glass. Our approach applies to weighted and directed networks alike. It contains the ad hoc introduced quality function from [1] and the modularity Q as defined by Newman and Girvan [2] as special cases. The community structure of the network is interpreted as the spin configuration that minimizes the energy of the spin glass with the spin states being the community indices. We elucidate the properties of the ground state configuration to give a concise definition of communities as cohesive subgroups in networks that is adaptive to the specific class of network under study. Further we show how hierarchies and overlap in the community structure can be detected. Computationally effective local update rules for optimization procedures to find the ground state are given.
We show how the ansatz may be used to discover the community around a given node without detecting all communities in the full network and we give benchmarks for the performance of this extension. Finally, we give expectation values for the modularity of random graphs, which can be used in the assessment of statistical significance of community structure.", "Modularity maximization is the most popular technique for the detection of community structure in graphs. The resolution limit of the method is supposedly solvable with the introduction of modified versions of the measure, with tunable resolution parameters. We show that multiresolution modularity suffers from two opposite coexisting problems: the tendency to merge small subgraphs, which dominates when the resolution is low; the tendency to split large subgraphs, which dominates when the resolution is high. In benchmark networks with heterogeneous distributions of cluster sizes, the simultaneous elimination of both biases is not possible and multiresolution modularity is not capable to recover the planted community structure, not even when it is pronounced and easily detectable by other methods, for any value of the resolution parameter. This holds for other multiresolution techniques and it is likely to be a general problem of methods based on global optimization." ] }
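The variants surveyed above share one mechanism: a tunable weight in front of the null-model term. As a rough illustration (not code from any cited work), the Reichardt-Bornholdt form can be sketched as standard modularity with a resolution parameter gamma; the two-triangle graph below is a toy example chosen for this sketch:

```python
def modularity(adj, communities, gamma=1.0):
    """Newman-Girvan modularity with a Reichardt-Bornholdt style
    resolution parameter; gamma=1.0 recovers the standard Q.

    adj: symmetric 0/1 adjacency matrix as a list of lists.
    communities: communities[i] is the community label of node i.
    """
    m2 = float(sum(sum(row) for row in adj))  # 2m: each edge counted twice
    deg = [sum(row) for row in adj]
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if communities[i] == communities[j]:
                q += adj[i][j] - gamma * deg[i] * deg[j] / m2
    return q / m2

# Two triangles joined by a single bridge edge (nodes 0-2 and 3-5).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = [[0] * 6 for _ in range(6)]
for u, v in edges:
    adj[u][v] = adj[v][u] = 1

q = modularity(adj, [0, 0, 0, 1, 1, 1])  # natural two-community split
```

At gamma = 1 this evaluates to 5/14 for the two-triangle split; pushing gamma below 2/7 makes the single merged community score higher, which is exactly the resolution knob the multi-resolution variants expose (and that @cite_23 argues is not a full fix).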
1407.4989
2952878841
There has been a surge of interest in community detection in homogeneous single-relational networks which contain only one type of nodes and edges. However, many real-world systems are naturally described as heterogeneous multi-relational networks which contain multiple types of nodes and edges. In this paper, we propose a new method for detecting communities in such networks. Our method is based on optimizing the composite modularity, which is a new modularity proposed for evaluating partitions of a heterogeneous multi-relational network into communities. Our method is parameter-free, scalable, and suitable for various networks with general structure. We demonstrate that it outperforms the state-of-the-art techniques in detecting pre-planted communities in synthetic networks. Applied to a real-world Digg network, it successfully detects meaningful communities.
There are studies on community detection in homogeneous multi-relational networks (sometimes called multi-mode, multi-dimensional, or multi-slice networks). For example, researchers developed methods for detecting communities in a particular subclass of such networks, known as signed networks, where each edge has a positive or negative sign @cite_37 @cite_26 @cite_51 . Others proposed a multiplex model for describing a homogeneous multi-relational network and developed a method based on optimizing a generalized modularity known as stability @cite_35 . Moreover, researchers proposed methods based on matrix approximation @cite_4 and spectral analysis @cite_2 .
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_26", "@cite_4", "@cite_2", "@cite_51" ], "mid": [ "2074617510", "2112461976", "2097216034", "2127310925", "2095646393", "2143245657" ], "abstract": [ "Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.", "Detecting communities in complex networks accurately is a prime challenge, preceding further analyses of network characteristics and dynamics. Until now, community detection took into account only positively valued links, while many actual networks also feature negative links. We extend an existing Potts model to incorporate negative links as well, resulting in a method similar to the clustering of signed graphs, as dealt with in social balance theory, but more general. To illustrate our method, we applied it to a network of international alliances and disputes. Using data from 1993-2001, it turns out that the world can be divided into six power blocs similar to Huntington's civilizations, with some notable exceptions.", "Many complex systems in the real world can be modeled as signed social networks that contain both positive and negative relations. 
Algorithms for mining social networks have been developed in the past; however, most of them were designed primarily for networks containing only positive relations and, thus, are not suitable for signed networks. In this work, we propose a new algorithm, called FEC, to mine signed social networks where both positive within-group relations and negative between-group relations are dense. FEC considers both the sign and the density of relations as the clustering attributes, making it effective for not only signed networks but also conventional social networks including only positive relations. Also, FEC adopts an agent-based heuristic that makes the algorithm efficient (in linear time with respect to the size of a network) and capable of giving nearly optimal solutions. FEC depends on only one parameter whose value can easily be set and requires no prior knowledge on hidden community structures. The effectiveness and efficacy of FEC have been demonstrated through a set of rigorous experiments involving both benchmark and randomly generated signed networks.", "A multimode network consists of heterogeneous types of actors with various interactions occurring between them. Identifying communities in a multimode network can help understand the structural properties of the network, address the data shortage and unbalanced problems, and assist tasks like targeted marketing and finding influential actors within or between groups. In general, a network and its group structure often evolve unevenly. In a dynamic multimode network, both group membership and interactions can evolve, posing a challenging problem of identifying these evolving communities. In this work, we try to address this problem by employing the temporal information to analyze a multimode network. A temporally regularized framework and its convergence property are carefully studied. 
We show that the algorithm can be interpreted as an iterative latent semantic analysis process, which allows for extensions to handle networks with actor attributes and within-mode interactions. Experiments on both synthetic data and real-world networks demonstrate the efficacy of our approach and suggest its generality in capturing evolving groups in networks with heterogeneous entities and complex relationships.", "The pervasiveness of Web 2.0 and social networking sites has enabled people to interact with each other easily through various social media. For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment on shared content (bookmarks, photos, videos), and users can tag their favorite content. Users can also connect with one another, and subscribe to or become a fan or a follower of others. These diverse activities result in a multi-dimensional network among actors, forming group structures with group members sharing similar interests or affiliations. This work systematically addresses two challenges. First, it is challenging to effectively integrate interactions over multiple dimensions to discover hidden community structures shared by heterogeneous interactions. We show that representative community detection methods for single-dimensional networks can be presented in a unified view. Based on this unified view, we present and analyze four possible integration strategies to extend community detection from single-dimensional to multi-dimensional networks. In particular, we propose a novel integration scheme based on structural features. Another challenge is the evaluation of different methods without ground truth information about community membership. We employ a novel cross-dimension network validation (CDNV) procedure to compare the performance of different methods. We use synthetic data to deepen our understanding, and real-world data to compare integration strategies as well as baseline methods in a large scale. 
We study further the computational time of different methods, normalization effect during integration, sensitivity to related parameters, and alternative community detection methods for integration.", "We propose a framework for discovery of collaborative community structure in Wiki-based knowledge repositories based on raw-content generation analysis. We leverage topic modelling in order to capture agreement and opposition of contributors and analyze these multi-modal relations to map communities in the contributor base. The key steps of our approach include (i) modeling of pairwise variable-strength contributor interactions that can be both positive and negative, (ii) synthesis of a global network incorporating all pairwise interactions, and (iii) detection and analysis of community structure encoded in such networks. The global community discovery algorithm we propose outperforms existing alternatives in identifying coherent clusters according to objective optimality criteria. Analysis of the discovered community structure reveals coalitions of common interest editors who back each other in promoting some topics and collectively oppose other coalitions or single authors. We couple contributor interactions with content evolution and reveal the global picture of opposing themes within the self-regulated community base for both controversial and featured articles in Wikipedia." ] }
1407.4833
2949817229
In this paper, we present an ontology of mathematical knowledge concepts that covers a wide range of the fields of mathematics and introduces a balanced representation between comprehensive and sensible models. We demonstrate the applications of this representation in information extraction, semantic search, and education. We argue that the ontology can be a core of future integration of math-aware data sets in the Web of Data and, therefore, provide mappings onto relevant datasets, such as DBpedia and ScienceWISE.
To put our research into context, we summarize in this section the most relevant previous work on representing mathematical knowledge. For a more comprehensive overview of services, ontological models and languages for mathematical knowledge management on the Semantic Web and beyond, we refer the interested reader to C. Lange's survey @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "1687148191" ], "abstract": [ "Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas of mathematics, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology. Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge. Mathematical knowledge comprises structures given in a logical language -- formulae, statements (e.g. axioms), and theories --, a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and counter-examples.
Our review of vocabularies for representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections." ] }
1407.4833
2949817229
In this paper, we present an ontology of mathematical knowledge concepts that covers a wide range of the fields of mathematics and introduces a balanced representation between comprehensive and sensible models. We demonstrate the applications of this representation in information extraction, semantic search, and education. We argue that the ontology can be a core of future integration of math-aware data sets in the Web of Data and, therefore, provide mappings onto relevant datasets, such as DBpedia and ScienceWISE.
A SKOS-based adaptation of the Mathematics Subject Classification (http://www.ams.org/msc/) is exposed as a linked dataset @cite_18 . The proposed ontology overlaps with this dataset in modeling the hierarchy of fields, but it is significantly richer in representing terms and their interactions.
{ "cite_N": [ "@cite_18" ], "mid": [ "87802918" ], "abstract": [ "The Mathematics Subject Classification (MSC), maintained by the American Mathematical Society's Mathematical Reviews (MR) and FIZ Karlsruhe's Zentralblatt fur Mathematik (Zbl), is a scheme for classifying publications in mathematics. While it is widely used, its traditional, idiosyncratic conceptualization and representation did not encourage wide reuse on the Web, and it made the scheme hard to maintain. We have reimplemented its current version MSC2010 as a Linked Open Dataset using SKOS, and our focus is concentrated on turning it into the new MSC authority. This paper explains the motivation and details of our design considerations and how we realized them in the implementation, presents use cases, and future applications." ] }
1407.4833
2949817229
In this paper, we present an ontology of mathematical knowledge concepts that covers a wide range of the fields of mathematics and introduces a balanced representation between comprehensive and sensible models. We demonstrate the applications of this representation in information extraction, semantic search, and education. We argue that the ontology can be a core of future integration of math-aware data sets in the Web of Data and, therefore, provide mappings onto relevant datasets, such as DBpedia and ScienceWISE.
Due to lack of space, we do not cover related work on semantic data analysis for mathematical texts, which is given in @cite_13 @cite_2 .
{ "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "2015651633", "2041540188" ], "abstract": [ "A survey of the key approaches to the semantic processing of mathematical texts is presented. A software platform prototype for the electronic storage of mathematical documents, which is based on the linked open-data (LOD) model and uses semantic information for data management, including formula-fragment searching, is proposed. The analysis of mathematical documents and the extraction of semantic information from the latter are carried out based on the electronic collection of the Izv. Vyssh. Uchebn. Zaved., Mat. (1995---2009) using special-purpose ontologies, metadata representation in the RDF (Resource Description Framework) format, and integration with existing LOD sets.", "This paper analyzes two models: semantic annotation of mathematical texts and semantic searching for mathematical texts in a marked-up collection. It also presents the results of a series of experiments that were performed with a semantically annotated collection of scientific publications in the field of mathematics." ] }
1407.4709
319415540
The psychological state of flow has been linked to optimizing human performance. A key condition of flow emergence is a match between the human abilities and complexity of the task. We propose a simple computational model of flow for Artificial Intelligence (AI) agents. The model factors the standard agent-environment state into a self-reflective set of the agent's abilities and a socially learned set of the environmental complexity. Maximizing the flow serves as a meta control for the agent. We show how to apply the meta-control policy to a broad class of AI control policies and illustrate our approach with a specific implementation. Results in a synthetic testbed are promising and open interesting directions for future work.
Meta-control policies have been an important element of AI since its early days. The classic A* algorithm uses a heuristic to control its search at the base level and breaks ties towards higher @math -costs at the meta-control level. Pathfinding algorithms often use heuristic search (e.g., A*) as the base control policy but meta-control it with another search @cite_0 @cite_3 or case-based reasoning @cite_4 . Hierarchical control can also be used to solve MDPs more efficiently @cite_5 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4", "@cite_3" ], "mid": [ "196854640", "1876478031", "2155727572", "1557519121" ], "abstract": [ "From an academic perspective there has been a lot of work on using state abstraction to speed path planning. But, this work often does not directly address the needs of the game development community, specifically for mechanisms that will fit the limited memory budget of most commercial games. In this paper we bring together several related pieces of work on using abstraction for pathfinding, showing how the ideas can be implemented using a minimal amount of memory. Our techniques use about 3% additional storage to compute complete paths up to 100 times faster than A*.", "In this paper, we consider planning in stochastic shortest path (SSP) problems, a subclass of Markov Decision Problems (MDP). We focus on medium-size problems whose state space can be fully enumerated. This problem has numerous important applications, such as navigation and planning under uncertainty. We propose a new approach for constructing a multi-level hierarchy of progressively simpler abstractions of the original problem. Once computed, the hierarchy can be used to speed up planning by first finding a policy for the most abstract level and then recursively refining it into a solution to the original problem. This approach is fully automated and delivers a speed-up of two orders of magnitude over a state-of-the-art MDP solver on sample problems while returning near-optimal solutions. We also prove theoretical bounds on the loss of solution optimality resulting from the use of abstractions.", "Real-time heuristic search algorithms satisfy a constant bound on the amount of planning per action, independent of problem size. As a result, they scale up well as problems become larger. This property would make them well suited for video games where Artificial Intelligence controlled agents must react quickly to user commands and to other agents' actions.
On the downside, real-time search algorithms employ learning methods that frequently lead to poor solution quality and cause the agent to appear irrational by re-visiting the same problem states repeatedly. The situation changed recently with a new algorithm, D LRTA*, which attempted to eliminate learning by automatically selecting subgoals. D LRTA* is well poised for video games, except it has a complex and memory-demanding pre-computation phase during which it builds a database of subgoals. In this paper, we propose a simpler and more memory-efficient way of pre-computing subgoals thereby eliminating the main obstacle to applying state-of-the-art real-time search methods in video games. The new algorithm solves a number of randomly chosen problems off-line, compresses the solutions into a series of subgoals and stores them in a database. When presented with a novel problem on-line, it queries the database for the most similar previously solved case and uses its subgoals to solve the problem. In the domain of pathfinding on four large video game maps, the new algorithm delivers solutions eight times better while using 57 times less memory and requiring 14% less pre-computation time.", "Real-time heuristic search methods are used by situated agents in applications that require the amount of planning per move to be independent of the problem size. Such agents plan only a few actions at a time in a local search space and avoid getting trapped in local minima by improving their heuristic function over time. We extend a wide class of real-time search algorithms with automatically-built state abstraction and prove completeness and convergence of the resulting family of algorithms. We then analyze the impact of abstraction in an extensive empirical study in real-time pathfinding. Abstraction is found to improve efficiency by providing better trade-offs between planning time, learning speed and other negatively correlated performance measures.
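The base/meta-control split attributed to A* in the related-work paragraph above can be sketched concretely: the base control is the heuristic priority f = g + h, and the meta-control is the tie-breaking rule that prefers higher g among equal-f nodes. The grid world below is a made-up toy example, not from the cited papers:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* returning the optimal path cost. The priority tuple
    (f, -g, node) encodes the meta-control rule: among equal-f
    nodes, the one with the higher g-cost is expanded first."""
    frontier = [(h(start), 0, start)]  # (f, -g, node)
    best_g = {start: 0}
    while frontier:
        f, neg_g, node = heapq.heappop(frontier)
        g = -neg_g
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a better path was found later
        for nxt, step in neighbors(node):
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), -ng, nxt))
    return None

# 3x3 grid, unit-cost 4-connected moves, Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 3 and 0 <= y + dy < 3:
            yield (x + dx, y + dy), 1

cost = astar((0, 0), (2, 2), grid_neighbors,
             lambda p: abs(2 - p[0]) + abs(2 - p[1]))
```

With an admissible heuristic the tie-break does not change the returned cost, only the expansion order; preferring deeper nodes tends to reduce the number of expansions near the goal.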
1407.4723
2293332382
This preliminary report presents an unsupervised method for network-based keyword extraction for Croatian from a complex network. We build our approach on a new network measure, the node selectivity, motivated by research on graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand extracted nodes to word-tuples ranked by the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. Obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
@cite_12 extract keywords and keyphrases from co-occurrence networks of words and from noun-phrase collocation networks. Eleven measures (degree, strength, neighbourhood size, coreness, clustering coefficient, structural diversity index, PageRank, HITS hub and authority scores, betweenness, closeness, and eigenvector centrality) are used for keyword extraction from directed, undirected, and weighted networks. Results obtained on 4 data sets suggest that centrality measures outperform the baseline term frequency-inverse document frequency (tf-idf) model, and that simpler measures like degree and strength outperform computationally more expensive centrality measures like coreness and betweenness.
{ "cite_N": [ "@cite_12" ], "mid": [ "1890164900" ], "abstract": [ "Keyword and keyphrase extraction is an important problem in natural language processing, with applications ranging from summarization to semantic search to document clustering. Graph-based approaches to keyword and keyphrase extraction avoid the problem of acquiring a large in-domain training corpus by applying variants of PageRank algorithm on a network of words. Although graph-based approaches are knowledge-lean and easily adoptable in online systems, it remains largely open whether they can benefit from centrality measures other than PageRank. In this paper, we experiment with an array of centrality measures on word and noun phrase collocation networks, and analyze their performance on four benchmark datasets. Not only are there centrality measures that perform as well as or better than PageRank, but they are much simpler (e.g., degree, strength, and neighborhood size). Furthermore, centrality-based methods give results that are competitive with and, in some cases, better than two strong unsupervised baselines." ] }
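The degree and strength measures that @cite_12 find competitive with costlier centralities take only a few lines to compute over a co-occurrence network. The sketch below uses an invented toy sentence and a window size of 2 purely for illustration:

```python
from collections import defaultdict

def cooccurrence_network(tokens, window=2):
    """Undirected weighted co-occurrence network: two words are linked
    if they appear within `window` tokens of each other."""
    weights = defaultdict(int)
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + 1 + window]:
            if v != w:
                weights[frozenset((w, v))] += 1
    return weights

def strength(weights):
    """Node strength: the sum of weights of all incident edges."""
    s = defaultdict(int)
    for edge, w in weights.items():
        for node in edge:
            s[node] += w
    return dict(s)

tokens = "network based keyword extraction from complex network".split()
s = strength(cooccurrence_network(tokens))
top = sorted(s, key=s.get, reverse=True)  # keyword candidates by strength
```

Ranking by degree instead of strength only requires counting incident edges rather than summing their weights, which is the kind of simplification the cited experiments evaluate.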
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure the node selectivity, motivated by the research of the graph based centrality approaches. The node selectivity is defined as the average weight distribution on the links of the single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand extracted nodes to word-tuples ranked with the highest in out selectivity values. Selectivity based extraction does not require linguistic knowledge while it is purely derived from statistical and structural information en-compassed in the source text which is reflected into the structure of the network. Obtained sets are evaluated on a manually annotated keywords: for the set of extracted keyword candidates average F1 score is 24,63 , and average F2 score is 21,19 ; for the exacted words-tuples candidates average F1 score is 25,9 and average F2 score is 24,47 .
Boudin @cite_16 compares various centrality measures for graph-based keyphrase extraction. Experiments on standard datasets of English and French show that simple degree centrality achieves results comparable to the widely used TextRank algorithm, and that closeness centrality obtains the best results on short documents. Undirected and weighted co-occurrence networks are constructed from syntactically parsed (only nouns and adjectives) and lemmatized text using a co-occurrence window. Degree, closeness, betweenness, and eigenvector centrality are compared to PageRank, proposed by Mihalcea in @cite_5 , as a baseline. Degree centrality achieves performance similar to the much more complex TextRank, and closeness centrality outperforms TextRank on short documents (scientific paper abstracts).
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "1525595230", "2251786111" ], "abstract": [ "In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.", "In this paper, we present and compare various centrality measures for graphbased keyphrase extraction. Through experiments carried out on three standard datasets of different languages and domains, we show that simple degree centrality achieve results comparable to the widely used TextRank algorithm, and that closeness centrality obtains the best results on short documents." ] }
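TextRank scores words by running PageRank over a co-occurrence graph, so Boudin's finding that plain degree is competitive can be illustrated with a minimal unweighted power-iteration sketch. The four-node graph below is invented for the example; on it, degree and PageRank happen to induce the same ranking:

```python
def pagerank(adj, d=0.85, iters=100):
    """Plain PageRank power iteration on an undirected graph given as
    {node: set_of_neighbours} (unweighted, as in basic TextRank)."""
    nodes = list(adj)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        pr = {
            v: (1 - d) / n + d * sum(pr[u] / len(adj[u]) for u in adj[v])
            for v in nodes
        }
    return pr

# Toy graph: hub 'a' linked to everything, plus one extra edge b-c.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
pr = pagerank(adj)
by_pagerank = sorted(pr, key=pr.get, reverse=True)
by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
```

On larger graphs the two rankings can diverge, but the experiments cited above suggest the gap is often small in practice, which is what makes the cheap degree ranking attractive.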
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
Litvak and Last @cite_1 compare supervised and unsupervised approaches to keyword identification in the task of extractive summarization. Both approaches are based on a graph-based syntactic representation of text and web documents. On a set of summarized documents, the unsupervised HITS algorithm performed comparably to supervised methods (Naive Bayes, J48, Support Vector Machines). The authors suggest using the simple degree-based ranking obtained after the first iteration of HITS rather than running the algorithm to convergence.
{ "cite_N": [ "@cite_1" ], "mid": [ "2043004216" ], "abstract": [ "In this paper, we introduce and compare between two novel approaches, supervised and unsupervised, for identifying the keywords to be used in extractive summarization of text documents. Both our approaches are based on the graph-based syntactic representation of text and web documents, which enhances the traditional vector-space model by taking into account some structural document features. In the supervised approach, we train classification algorithms on a summarized collection of documents with the purpose of inducing a keyword identification model. In the unsupervised approach, we run the HITS algorithm on document graphs under the assumption that the top-ranked nodes should represent the document keywords. Our experiments on a collection of benchmark summaries show that given a set of summarized training documents, the supervised classification provides the highest keyword identification accuracy, while the highest F-measure is reached with a simple degree-based ranking. In addition, it is sufficient to perform only the first iteration of HITS rather than running it to its convergence." ] }
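Litvak and Last's observation that, on an undirected graph, the first HITS iteration already yields a degree-based ranking can be checked with a short power-iteration sketch; the toy graph and function name are assumptions:

```python
def hits_authority(adj, iters=1):
    """Plain HITS power iteration; `adj` maps node -> set of neighbours.
    On an undirected graph the hub and authority updates are symmetric."""
    hub = {n: 1.0 for n in adj}
    auth = dict(hub)
    for _ in range(iters):
        auth = {n: sum(hub[m] for m in adj[n]) for n in adj}
        s = sum(auth.values())
        auth = {n: v / s for n, v in auth.items()}
        hub = {n: sum(auth[m] for m in adj[n]) for n in adj}
        s = sum(hub.values())
        hub = {n: v / s for n, v in hub.items()}
    return auth

# star with an extra edge: degrees are c=3, x=2, y=2, z=1
adj = {"c": {"x", "y", "z"}, "x": {"c", "y"}, "y": {"c", "x"}, "z": {"c"}}
one_step = hits_authority(adj, iters=1)
```

After a single iteration the normalized authority scores are exactly the node degrees divided by their sum, which is why stopping HITS early reduces to a degree ranking.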
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
@cite_11 use community detection techniques for key term extraction from Wikipedia texts, modelled as graphs of semantic relationships between terms. The results showed that terms related to the main topics of a document tend to form a community, that is, a thematically cohesive group of terms. Community detection allows effective processing of multi-topic documents and efficiently filters out noise. The results were obtained on weighted and directed networks built from semantically linked, morphologically expanded and disambiguated n-grams from article titles. Additionally, to test robustness to noise, they repeated the experiment on various multi-topic web pages (news, blogs, forums, social networks, product reviews), which confirmed that community detection outperforms the tf-idf model.
{ "cite_N": [ "@cite_11" ], "mid": [ "2145049651" ], "abstract": [ "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia. Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages. Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods." ] }
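As a rough stand-in for the community detection step, the sketch below runs a deterministic weighted label propagation on a toy term graph; the tie-breaking rule, weights and names are assumptions, and this is not the algorithm of @cite_11:

```python
def label_propagation(adj, iters=20):
    """Each node repeatedly adopts the label with the largest total edge
    weight among its neighbours; ties keep the current label if possible,
    otherwise take the alphabetically smallest, so the run is deterministic."""
    labels = {n: n for n in adj}
    for _ in range(iters):
        changed = False
        for n in sorted(adj):
            score = {}
            for nb, w in adj[n].items():
                score[labels[nb]] = score.get(labels[nb], 0) + w
            best = max(score.values())
            cands = sorted(l for l, s in score.items() if s == best)
            new = labels[n] if labels[n] in cands else cands[0]
            if new != labels[n]:
                labels[n] = new
                changed = True
        if not changed:
            break
    return labels

# two term cliques (weight 2) joined by one weak bridge c-d (weight 1)
adj = {
    "a": {"b": 2, "c": 2}, "b": {"a": 2, "c": 2},
    "c": {"a": 2, "b": 2, "d": 1},
    "d": {"c": 1, "e": 2, "f": 2},
    "e": {"d": 2, "f": 2}, "f": {"d": 2, "e": 2},
}
labels = label_propagation(adj)
```

The two tightly connected term groups joined by a weak bridge end up with two distinct labels, mirroring the observation that topic terms bunch into cohesive communities while weak cross-topic links are ignored.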
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
Palshikar @cite_19 proposes a hybrid structural and statistical approach to extracting keywords from a single document. An undirected co-occurrence network, whose edge weights are given by a dissimilarity measure between two words derived from the frequency of their co-occurrence in the preprocessed and lemmatized document, was shown to be well suited to centrality-based keyword extraction.
{ "cite_N": [ "@cite_19" ], "mid": [ "1550806611" ], "abstract": [ "Keywords characterize the topics discussed in a document. Extracting a small set of keywords from a single document is an important problem in text mining. We propose a hybrid structural and statistical approach to extract keywords. We represent the given document as an undirected graph, whose vertices are words in the document and the edges are labeled with a dissimilarity measure between two words, derived from the frequency of their co-occurrence in the document. We propose that central vertices in this graph are candidates as keywords. We model importance of a word in terms of its centrality in this graph. Using graph-theoretical notions of vertex centrality, we suggest several algorithms to extract keywords from the given document. We demonstrate the effectiveness of the proposed algorithms on real-life documents." ] }
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
Mihalcea and Tarau @cite_5 report the seminal research that introduced the state-of-the-art TextRank model. TextRank is derived from PageRank and brought graph-based ranking to text processing, keyword extraction and sentence extraction. Abstracts are modelled as undirected or directed weighted co-occurrence networks using a co-occurrence window of variable size (2 to 10). Lexical units are preprocessed: stopwords are removed and words are restricted with POS syntactic filters (open-class words; nouns and adjectives; nouns only). A PageRank-style importance score of a node, derived from the importance of its neighboring nodes, is used for keyword extraction. The obtained TextRank performance compares favorably with a supervised machine-learning n-gram based approach.
{ "cite_N": [ "@cite_5" ], "mid": [ "1525595230" ], "abstract": [ "In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications." ] }
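The PageRank-style node score at the heart of TextRank can be sketched as a plain power iteration over a weighted word graph; the damping factor follows the usual 0.85 convention, while the toy graph and names are assumptions:

```python
def pagerank(adj, d=0.85, iters=50):
    """Weighted PageRank on an undirected graph given as
    node -> {neighbour: weight}; the scores sum to 1."""
    n = len(adj)
    pr = {v: 1.0 / n for v in adj}
    out = {v: sum(adj[v].values()) for v in adj}  # total outgoing weight
    for _ in range(iters):
        pr = {v: (1 - d) / n
                 + d * sum(pr[u] * adj[u][v] / out[u]
                           for u in adj if v in adj[u])
              for v in adj}
    return pr

# tiny word graph: "graph" co-occurs with every other word
adj = {
    "graph": {"rank": 1, "text": 1, "word": 1},
    "rank": {"graph": 1, "text": 1},
    "text": {"graph": 1, "rank": 1},
    "word": {"graph": 1},
}
scores = pagerank(adj)
top = max(scores, key=scores.get)
```

Because every node distributes its full score to its neighbours each step, the scores remain a probability distribution, and the hub word accumulates the highest rank.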
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
The authors of @cite_7 present early research in which a document is represented by an undirected and unweighted co-occurrence network, and study the small-world properties of text. Based on the network topology, they propose an indexing system called KeyWorld, which extracts important terms (pairs of words) by measuring their contribution to the small-world properties of the network. The contribution of a node is based on closeness centrality, calculated as the difference in the small-world properties of the network when the node is temporarily removed, combined with inverse document frequency (idf).
{ "cite_N": [ "@cite_7" ], "mid": [ "88154610" ], "abstract": [ "The small world topology is known widespread in biological, social and man-made systems. This paper shows that the small world structure also exists in documents,such as papers. A document is represented by a network;the nodes represent terms,and the edges represent the co-occurrence of terms. This network is shown to have the characteristics of being a small world,i.e.,nodes are highly clustered yet the path length between them is small. Based on the topology,we develop an indexing system called KeyWorld,which extracts important terms by measuring their contribution to the graph being small world." ] }
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
Erkan and Radev @cite_13 introduce a stochastic graph-based method for computing the relative importance of textual units, applied to the problem of text summarization by extracting the most important sentences. LexRank computes sentence importance based on the concept of eigenvector centrality in a graph representation of sentences: a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the sentence graph. LexRank is shown to be quite insensitive to noise in the data.
{ "cite_N": [ "@cite_13" ], "mid": [ "2110693578" ], "abstract": [ "We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents." ] }
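The cosine-similarity connectivity matrix underlying LexRank can be sketched as follows; for brevity, a thresholded degree count stands in for the full eigenvector computation (the paper's degree-based variants behave similarly), and the sentences, threshold and names are assumptions:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

sentences = [
    "the cat sat on the mat",
    "the cat sat",
    "the mat is new",
]
bags = [Counter(s.split()) for s in sentences]

# connectivity graph: an edge wherever similarity exceeds the threshold
threshold = 0.3
degree = [
    sum(1 for j in range(len(bags))
        if j != i and cosine(bags[i], bags[j]) > threshold)
    for i in range(len(bags))
]
central = degree.index(max(degree))  # most central sentence
```

The first sentence overlaps both others above the threshold and is therefore the most central, the same intuition LexRank formalizes with eigenvector centrality.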
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
Mihalcea @cite_2 presents an extension of the earlier work @cite_5 in which the TextRank algorithm is applied to the text summarization task via sentence extraction. On this task TextRank performed on par with supervised and unsupervised summarization methods, which motivated a new branch of research based on graph-based extraction and ranking algorithms.
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "1525595230", "2144270295" ], "abstract": [ "In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.", "This paper presents an innovative unsupervised method for automatic sentence extraction using graph-based ranking algorithms. We evaluate the method in the context of a text summarization task, and show that the results obtained compare favorably with previously published results on established benchmarks." ] }
1407.4723
2293332382
Preliminary report on network based keyword extraction for Croatian is an unsupervised method for keyword extraction from the complex network. We build our approach with a new network measure, the node selectivity, motivated by the research of graph-based centrality approaches. The node selectivity is defined as the average weight distribution on the links of a single node. We extract nodes (keyword candidates) based on the selectivity value. Furthermore, we expand the extracted nodes to word-tuples ranked with the highest in/out selectivity values. Selectivity-based extraction does not require linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates the average F1 score is 25.9% and the average F2 score is 24.47%.
@cite_9 present SemanticRank, a network-based ranking algorithm for keyword and sentence extraction from text. Semantic relations between linguistic units (keywords or sentences) are based on a computed knowledge-based measure of semantic relatedness. On keyword extraction from the Inspec abstracts, SemanticRank performed favorably against state-of-the-art counterparts: weighted and unweighted variations of PageRank and HITS.
{ "cite_N": [ "@cite_9" ], "mid": [ "1525391233" ], "abstract": [ "The selection of the most descriptive terms or passages from text is crucial for several tasks, such as feature extraction and summarization. In the majority of the cases, research works propose the ranking of all candidate keywords or sentences and then select the top-ranked items as features, or as a text summary respectively. Ranking is usually performed using statistical information from text (i.e., frequency of occurrence, inverse document frequency, co-occurrence information). In this paper we present SemanticRank, a graph-based ranking algorithm for keyword and sentence extraction from text. The algorithm constructs a semantic graph using implicit links, which are based on semantic relatedness between text nodes and consequently ranks nodes using different ranking algorithms. Comparative evaluation against related state of the art methods for keyword and sentence extraction shows that SemanticRank performs favorably in previously used data sets." ] }
1407.4443
2950640409
The stochastic multi-armed bandit model is a simple abstraction that has proven useful in many different contexts in statistics and machine learning. Whereas the achievable limit in terms of regret minimization is now well known, our aim is to contribute to a better understanding of the performance in terms of identifying the m best arms. We introduce generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings. In the fixed-confidence setting, we provide the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m is larger than 1 under general assumptions. In the specific case of two armed-bandits, we derive refined lower bounds in both the fixed-confidence and fixed-budget settings, along with matching algorithms for Gaussian and Bernoulli bandit models. These results show in particular that the complexity of the fixed-budget setting may be smaller than the complexity of the fixed-confidence setting, contradicting the familiar behavior observed when testing fully specified alternatives. In addition, we also provide improved sequential stopping rules that have guaranteed error probabilities and shorter average running times. The proofs rely on two technical results that are of independent interest : a deviation lemma for self-normalized sums (Lemma 19) and a novel change of measure inequality for bandit models (Lemma 1).
The problem of best arm identification has been studied since the 1950s under the name 'ranking and identification problems'. The first advances on this topic are summarized in the monograph by @cite_24 who consider the fixed-confidence setting and strategies based on uniform sampling. In the fixed confidence setting, @cite_13 first introduces a sampling strategy based on eliminations for single best arm identification: the arms are successively discarded, the remaining arms being sampled uniformly. This idea was later used for example by @cite_3 @cite_9 or by @cite_6 in the context of bounded bandit models, in which each arm @math is a probability distribution on @math . @math best arm identification with @math was considered for example by @cite_22 , in the context of reinforcement learning. @cite_20 later proposed the LUCB (for Lower and Upper Confidence Bounds) algorithm, whose sampling strategy is no longer based on eliminations, still for bounded bandit models. Bounded distributions are in fact particular examples of distributions with subgaussian tails, to which the proposed algorithms can be easily generalized. A relevant quantity introduced in the analysis of algorithms for bounded (or subgaussian) bandit models is the 'complexity term'
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_6", "@cite_3", "@cite_24", "@cite_13", "@cite_20" ], "mid": [ "2059654640", "1585839838", "2147967768", "1489105500", "", "2027073833", "2168810201" ], "abstract": [ "Uncertainty arises in reinforcement learning from various sources, and therefore it is necessary to consider statistics based on several roll-outs for evaluating behavioral policies. We add an adaptive uncertainty handling based on Hoeffding and empirical Bernstein races to the CMA-ES, a variable metric evolution strategy proposed for direct policy search. The uncertainty handling adjusts individually the number of episodes considered for the evaluation of a policy. The performance estimation is kept just accurate enough for a sufficiently good ranking of candidate policies, which is in turn sufficient for the CMA-ES to find better solutions. This increases the learning speed as well as the robustness of the algorithm.", "Given a set of models and some training data, we would like to find the model that best describes the data. Finding the model with the lowest generalization error is a computationally expensive process, especially if the number of testing points is high or if the number of models is large. Optimization techniques such as hill climbing or genetic algorithms are helpful but can end up with a model that is arbitrarily worse than the best one or cannot be used because there is no distance metric on the space of discrete models. In this paper we develop a technique called \"racing\" that tests the set of models in parallel, quickly discards those models that are clearly inferior and concentrates the computational effort on differentiating among the better models. Racing is especially suitable for selecting among lazy learners since training requires negligible expense, and incremental testing using leave-one-out cross validation is efficient. 
We use racing to select among various lazy learning algorithms and to find relevant features in applications ranging from robot juggling to lesion detection in MRI scans.", "We incorporate statistical confidence intervals in both the multi-armed bandit and the reinforcement learning problems. In the bandit problem we show that given n arms, it suffices to pull the arms a total of O((n/ε²)log(1/δ)) times to find an ε-optimal arm with probability of at least 1-δ. This bound matches the lower bound of Mannor and Tsitsiklis (2004) up to constants. We also devise action elimination procedures in reinforcement learning algorithms. We describe a framework that is based on learning the confidence interval around the value function or the Q-function and eliminating actions that are not optimal (with high probability). We provide a model-based and a model-free variants of the elimination method. We further derive stopping conditions guaranteeing that the learned policy is approximately optimal with high probability. Simulations demonstrate a considerable speedup and added robustness over ε-greedy Q-learning.", "Publisher Summary This chapter presents asymptotically optimal procedures for sequential adaptive selection of the best of several normal means. It is shown that for a sequential procedure based on elimination, if k, δ, μi and σ2 are fixed and P* →1, then there is a sharp asymptotic lower bound for the natural measure of efficiency. The chapter describes the class of elimination procedures with adaptive sampling, which do solve the selection problem. It also presents some Monte Carlo simulations to illustrate the potential savings in sample size that can be achieved by using fairly simple adaptive sampling rules.", "", "", "We consider the problem of selecting, from among the arms of a stochastic n-armed bandit, a subset of size m of those arms with the highest expected rewards, based on efficiently sampling the arms. 
This \"subset selection\" problem finds application in a variety of areas. In the authors' previous work (Kalyanakrishnan & Stone, 2010), this problem is framed under a PAC setting (denoted \"Explore-m\"), and corresponding sampling algorithms are analyzed. Whereas the formal analysis therein is restricted to the worst case sample complexity of algorithms, in this paper, we design and analyze an algorithm (\"LUCB\") with improved expected sample complexity. Interestingly LUCB bears a close resemblance to the well-known UCB algorithm for regret minimization. The expected sample complexity bound we show for LUCB is novel even for single-arm selection (Explore-1). We also give a lower bound on the worst case sample complexity of PAC algorithms for Explore-m." ] }
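The elimination strategy recurring in the works above (sample every remaining arm uniformly, then discard arms whose confidence interval falls below the empirical leader's) can be sketched with a Hoeffding radius on Bernoulli arms; the arm means, confidence level and radius constant are illustrative assumptions:

```python
import math
import random

def successive_elimination(means, delta=0.05, seed=0, max_rounds=5000):
    """Uniformly sample every active arm each round; eliminate an arm once
    its upper confidence bound drops below the leader's lower bound."""
    rng = random.Random(seed)
    k = len(means)
    active = list(range(k))
    pulls, rewards = [0] * k, [0.0] * k
    for t in range(1, max_rounds + 1):
        for a in active:
            rewards[a] += 1.0 if rng.random() < means[a] else 0.0
            pulls[a] += 1
        # Hoeffding-style radius shrinking with the number of rounds
        radius = math.sqrt(math.log(4 * k * t * t / delta) / (2 * t))
        mu = {a: rewards[a] / pulls[a] for a in active}
        leader = max(active, key=mu.get)
        active = [a for a in active if mu[a] + radius >= mu[leader] - radius]
        if len(active) == 1:
            return active[0], t
    return leader, max_rounds

best, rounds = successive_elimination([0.1, 0.5, 0.9])
```

Arms with larger gaps to the best mean are eliminated earlier, so the total number of samples concentrates on the hardest comparisons, the behavior the complexity term above quantifies.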
1407.4443
2950640409
The stochastic multi-armed bandit model is a simple abstraction that has proven useful in many different contexts in statistics and machine learning. Whereas the achievable limit in terms of regret minimization is now well known, our aim is to contribute to a better understanding of the performance in terms of identifying the m best arms. We introduce generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings. In the fixed-confidence setting, we provide the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m is larger than 1 under general assumptions. In the specific case of two armed-bandits, we derive refined lower bounds in both the fixed-confidence and fixed-budget settings, along with matching algorithms for Gaussian and Bernoulli bandit models. These results show in particular that the complexity of the fixed-budget setting may be smaller than the complexity of the fixed-confidence setting, contradicting the familiar behavior observed when testing fully specified alternatives. In addition, we also provide improved sequential stopping rules that have guaranteed error probabilities and shorter average running times. The proofs rely on two technical results that are of independent interest : a deviation lemma for self-normalized sums (Lemma 19) and a novel change of measure inequality for bandit models (Lemma 1).
The upper bound on the sample complexity of the LUCB algorithm of @cite_20 implies in particular that @math . Some of the existing works on the fixed-confidence setting do not bound @math in expectation but rather show that @math . These results are not directly comparable with the complexity @math , although no significant gap is to be observed yet.
{ "cite_N": [ "@cite_20" ], "mid": [ "2168810201" ], "abstract": [ "We consider the problem of selecting, from among the arms of a stochastic n-armed bandit, a subset of size m of those arms with the highest expected rewards, based on efficiently sampling the arms. This \"subset selection\" problem finds application in a variety of areas. In the authors' previous work (Kalyanakrishnan & Stone, 2010), this problem is framed under a PAC setting (denoted \"Explore-m\"), and corresponding sampling algorithms are analyzed. Whereas the formal analysis therein is restricted to the worst case sample complexity of algorithms, in this paper, we design and analyze an algorithm (\"LUCB\") with improved expected sample complexity. Interestingly LUCB bears a close resemblance to the well-known UCB algorithm for regret minimization. The expected sample complexity bound we show for LUCB is novel even for single-arm selection (Explore-1). We also give a lower bound on the worst case sample complexity of PAC algorithms for Explore-m." ] }
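LUCB's sampling rule, pulling the empirical best arm together with the challenger that has the highest upper confidence bound and stopping once their intervals separate, can be sketched as follows. The exploration rate mimics the sqrt(log(5Kt^4/(4 delta))/(2u)) shape used in the LUCB analysis, but the Bernoulli arms and constants here are illustrative assumptions:

```python
import math
import random

def lucb(means, delta=0.05, seed=1, max_pulls=50000):
    """At each step sample the empirical best arm and the challenger with
    the highest upper confidence bound; stop when the best arm's lower
    bound clears the challenger's upper bound."""
    rng = random.Random(seed)
    k = len(means)
    pulls = [1] * k
    rewards = [1.0 if rng.random() < means[a] else 0.0 for a in range(k)]
    t = k

    def mu(a):
        return rewards[a] / pulls[a]

    def beta(a):
        # confidence radius with an exploration rate growing in t
        return math.sqrt(math.log(5 * k * t ** 4 / (4 * delta)) / (2 * pulls[a]))

    while t < max_pulls:
        t += 1
        best = max(range(k), key=mu)
        challenger = max((a for a in range(k) if a != best),
                         key=lambda a: mu(a) + beta(a))
        if mu(best) - beta(best) >= mu(challenger) + beta(challenger):
            return best, t
        for a in (best, challenger):
            rewards[a] += 1.0 if rng.random() < means[a] else 0.0
            pulls[a] += 1
    return best, t

arm, total_pulls = lucb([0.2, 0.8])
```

Unlike elimination, no arm is permanently discarded: the stopping rule alone certifies the answer, which is what makes the expected sample complexity (rather than only a high-probability bound) tractable for LUCB.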
1407.4179
2949320875
Biometric key generation techniques are used to reliably generate cryptographic material from biometric signals. Existing constructions require users to perform a particular activity (e.g., type or say a password, or provide a handwritten signature), and are therefore not suitable for generating keys continuously. In this paper we present a new technique for biometric key generation from free-text keystroke dynamics. This is the first technique suitable for continuous key generation. Our approach is based on a scaled parity code for key generation (and subsequent key reconstruction), and can be augmented with the use of population data to improve security and reduce key reconstruction error. In particular, we rely on linear discriminant analysis (LDA) to obtain a better representation of discriminable biometric signals. To update the LDA matrix without disclosing user's biometric information, we design a provably secure privacy-preserving protocol (PP-LDA) based on homomorphic encryption. Our biometric key generation with PP-LDA was evaluated on a dataset of 486 users. We report equal error rate around 5 when using LDA, and below 7 without LDA.
Monrose et al. @cite_12 evaluate the performance of BKG based on spoken passwords using data from 50 users. They report a false-negative rate of 4%. Handwritten signature is another behavioral modality for which biometric key generation has been studied. Multiple papers, for example by Freire et al. @cite_10 , Feng et al. @cite_13 and more recently Scheuermann et al. @cite_18 , evaluate the performance. The dataset sizes for the first three papers are 330, 25 and 144 users; the last paper does not include the number of users. The false-accept/false-reject rates presented are 57%. Physical biometrics have also been used for biometric key generation, evaluated on fingerprints by Clancy et al. @cite_14 , Uludag et al. @cite_11 , Sy and Krishnan @cite_15 and others. BKG on iris was studied by Rathgeb and Uhl @cite_2 @cite_16 and Wu et al. @cite_20 , and on face images by Chen et al. @cite_17 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_10", "@cite_20", "@cite_17", "@cite_2", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2407528347", "2165367855", "2026973959", "2110486492", "2127002079", "1596070156", "1512396983", "2130483126", "2007527634", "2131183039", "" ], "abstract": [ "", "In this paper, the fundamental insecurities hampering a scalable, wide-spread deployment of biometric authentication are examined, and a cryptosystem capable of using fingerprint data as its key is presented. For our application, we focus on situations where a private key stored on a smartcard is used for authentication in a networked environment, and we assume an attacker can launch off-line attacks against a stolen card. Juels and Sudan's fuzzy vault is used as a starting point for building and analyzing a secure authentication scheme using fingerprints and smartcards called a fingerprint vault. Fingerprint minutiae coordinates mi are encoded as elements in a finite field F and the secret key is encoded in a polynomial f(x) over F[x]. The polynomial is evaluated at the minutiae locations, and the pairs (mi, f(mi)) are stored along with random (ci, di) chaff points such that di ≠ f(ci). Given a matching fingerprint, a valid user can separate out enough true points from the chaff points to reconstruct f(x), and hence the original secret key. The parameters of the vault are selected such that the attacker's vault unlocking complexity is maximized, subject to zero unlocking complexity with a matching fingerprint and a reasonable amount of error. For a feature location measurement variance of 9 pixels, the optimal vault is 2^69 times more difficult to unlock for an attacker compared to a user possessing a matching fingerprint, along with approximately a 30% chance of unlocking failure.", "Based on recent works showing the feasibility of key generation using biometrics, we study the application of handwritten signature to cryptography. 
Our signature-based key generation scheme implements the cryptographic construction named fuzzy vault. The use of distinctive signature features suited for the fuzzy vault is discussed and evaluated. Experimental results are reported, including error rates to unlock the secret data by using both random and skilled forgeries from the MCYT database.", "Biometric cryptography is a technique using biometric features to encrypt data, which can improve the security of the encrypted data and overcome the shortcomings of the traditional cryptography. This paper proposes a novel biometric cryptosystem based on the most accurate biometric feature - iris. In encryption phase, a quantified 256-dimension textural feature vector is firstly extracted from the preprocessed iris image using a set of 2-D Gabor filters. At the same time, an error-correct-code (ECC) is generated using Reed-Solomon algorithm. Then the feature vector is translated to a cipher key using Hash function. Some general encryption algorithms use this cipher key to encrypt the secret information. In decryption phase, a feature vector extracted from the input iris is firstly corrected using the ECC. Then it is translated to the cipher key using the same Hash function. Finally, the corresponding general decryption algorithms use the key to decrypt the information. Experimental results demonstrate the feasibility of the proposed system.", "Existing asymmetric encryption algorithms require the storage of the secret private key. Stored keys are often protected by poorly selected user passwords that can either be guessed or obtained through brute force attacks. This is a weak link in the overall encryption system and can potentially compromise the integrity of sensitive data. Combining biometrics with cryptography is seen as a possible solution but any biometric cryptosystem must be able to overcome small variations present between different acquisitions of the same biometric in order to produce consistent keys. 
This paper discusses a new method which uses an entropy based feature extraction process coupled with Reed-Solomon error correcting codes that can generate deterministic bit-sequences from the output of an iterative one-way transform. The technique is evaluated using 3D face data and is shown to reliably produce keys of suitable length for 128-bit Advanced Encryption Standard (AES).", "In this work we present a new technique for generating cryptographic keys out of iris textures implementing a key-generation scheme. In contrast to existing approaches to iris-biometric cryptosystems the proposed scheme does not store any biometric data, neither in raw nor in encrypted form, providing high secrecy in terms of template protection. The proposed approach is tested on a widely used database revealing key generation rates above 95%.", "Cryptographic approach, on the other hand, ties data protection mathematically to the Key that is utilized to protect it. This allows a data owner to have complete control over one’s personal information without relying on, or relinquishing control to, a third party authority. The protection of personal sensitive information is also not tied to complex software and hardware systems that may need constant patches.
Performance results show that the approach pays off.", "In recent years, public key infrastructure (PKI) has emerged as co‐existent with the increasing demand for digital security. A digital signature is created using existing public key cryptography technology. This technology will permit commercial transactions to be carried out across insecure networks without fear of tampering or forgery. The relative strength of digital signatures relies on the access control over the individual’s private key. The private key storage, which is usually password‐protected, has long been a weak link in the security chain. In this paper, we describe a novel and feasible system – BioPKI cryptosystem – that dynamically generates private keys from users’ on‐line handwritten signatures. The BioPKI cryptosystem eliminates the need of private key storage. The system is secure, reliable, convenient and non‐invasive. In addition, it ensures non‐repudiation to be addressed on the maker of the transaction instead of the computer where the transaction occurs.", "We propose a technique to reliably generate a cryptographic key from a user's voice while speaking a password. The key resists cryptanalysis even against an attacker who captures all system information related to generating or verifying the cryptographic key. Moreover, the technique is sufficiently robust to enable the user to reliably regenerate the key by uttering her password again. We describe an empirical evaluation of this technique using 250 utterances recorded from 50 users.", "" ] }
1407.4075
2951673886
Iterative compilation is a widely adopted technique to optimize programs for different constraints such as performance, code size and power consumption in rapidly evolving hardware and software environments. However, in case of statically compiled programs, it is often restricted to optimizations for a specific dataset and may not be applicable to applications that exhibit different run-time behavior across program phases, multiple datasets or when executed in heterogeneous, reconfigurable and virtual environments. Several frameworks have been recently introduced to tackle these problems and enable run-time optimization and adaptation for statically compiled programs based on static function multiversioning and monitoring of online program behavior. In this article, we present a novel technique to select a minimal set of representative optimization variants (function versions) for such frameworks while avoiding performance loss across available datasets and code-size explosion. We developed a novel mapping mechanism using popular decision tree or rule induction based machine learning techniques to rapidly select best code versions at run-time based on dataset features and minimize selection overhead. These techniques enable creation of self-tuning static binaries or libraries adaptable to changing behavior and environments at run-time using staged compilation that do not require complex recompilation frameworks while effectively outperforming traditional single-version non-adaptable code.
Iterative compilation is usually applied to optimize a program for a single dataset, which is not practical. This is demonstrated in @cite_35 , where the influence of multiple datasets on iterative compilation has been studied using a number of programs from the MiBench benchmark. Hybrid static/dynamic approaches have been introduced to tackle such problems. They are used in well-known library generators such as ATLAS @cite_37 , FFTW @cite_1 and SPIRAL @cite_25 to identify different optimization variants for different inputs and improve overall execution time. Some general approaches have also been introduced in @cite_30 @cite_3 @cite_22 @cite_18 to make static programs adaptable to changes in run-time behavior by generating different code versions for different contexts. However, most of these frameworks are limited to simple optimizations or need complex run-time recompilation frameworks. None of them provide techniques to select a representative set of optimization variants.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_18", "@cite_22", "@cite_1", "@cite_3", "@cite_25" ], "mid": [ "188832472", "", "2135653967", "2055817654", "2116210226", "2096070062", "2044280736", "1556378383" ], "abstract": [ "", "", "This paper describes an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. The production of such software for machines ranging from desktop workstations to embedded processors can be a tedious and time consuming process. The work described here can help in automating much of this process. We will concentrate our efforts on the widely used linear algebra kernels called the Basic Linear Algebra Subroutines (BLAS). In particular, the work presented here is for general matrix multiply, DGEMM. However much of the technology and approach developed here can be applied to the other Level 3 BLAS and the general strategy can have an impact on basic linear algebra operations in general and may be extended to other important kernel operations.", "As hardware complexity increases and virtualization is added at more layers of the execution stack, predicting the performance impact of optimizations becomes increasingly difficult. Production compilers and virtual machines invest substantial development effort in performance tuning to achieve good performance for a range of benchmarks. Although optimizations typically perform well on average, they often have unpredictable impact on running time, sometimes degrading performance significantly. 
Today's VMs perform sophisticated feedback-directed optimizations, but these techniques do not address performance degradations, and they actually make the situation worse by making the system more unpredictable. This paper presents an online framework for evaluating the effectiveness of optimizations, enabling an online system to automatically identify and correct performance anomalies that occur at runtime. This work opens the door for a fundamental shift in the way optimizations are developed and tuned for online systems, and may allow the body of work in offline empirical optimization search to be applied automatically at runtime. We present our implementation and evaluation of this system in a product Java VM.", "Compile-time optimization is often limited by a lack of target machine and input data set knowledge. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. In order to cope with this lack of information at compile-time, adaptive and dynamic systems can be used to perform optimization at runtime when complete knowledge of input and machine parameters is available. This paper presents a compiler-supported high-level adaptive optimization system. Users describe, in a domain specific language, optimizations performed by stand-alone optimization tools and backend compiler flags, as well as heuristics for applying these optimizations dynamically at runtime. The ADAPT compiler reads these descriptions and generates application-specific runtime systems to apply the heuristics. To facilitate the usage of existing tools and compilers, overheads are minimized by decoupling optimization from execution. Our system, ADAPT, supports a range of paradigms proposed recently, including dynamic compilation, parameterization and runtime sampling. We demonstrate our system by applying several optimization techniques to a suite of benchmarks on two target machines. 
ADAPT is shown to consistently outperform statically generated executables, improving performance by as much as 70%.", "FFT literature has been mostly concerned with minimizing the number of floating-point operations performed by an algorithm. Unfortunately, on present-day microprocessors this measure is far less important than it used to be, and interactions with the processor pipeline and the memory hierarchy have a larger impact on performance. Consequently, one must know the details of a computer architecture in order to design a fast algorithm. In this paper, we propose an adaptive FFT program that tunes the computation automatically for any particular hardware. We compared our program, called FFTW, with over 40 implementations of the FFT on 7 machines. Our tests show that FFTW's self-optimizing approach usually yields significantly better performance than all other publicly available software. FFTW also compares favorably with machine-specific, vendor-optimized libraries.", "This paper presents dynamic feedback, a technique that enables computations to adapt dynamically to different execution environments. A compiler that uses dynamic feedback produces several different versions of the same source code; each version uses a different optimization policy. The generated code alternately performs sampling phases and production phases. Each sampling phase measures the overhead of each version in the current environment. Each production phase uses the version with the least overhead in the previous sampling phase. The computation periodically resamples to adjust dynamically to changes in the environment. We have implemented dynamic feedback in the context of a parallelizing compiler for object-based programs. The generated code uses dynamic feedback to automatically choose the best synchronization optimization policy. 
Our experimental results show that the synchronization optimization policy has a significant impact on the overall performance of the computation, that the best policy varies from program to program, that the compiler is unable to statically choose the best policy, and that dynamic feedback enables the generated code to exhibit performance that is comparable to that of code that has been manually tuned to use the best policy. We have also performed a theoretical analysis which provides, under certain assumptions, a guaranteed optimality bound for dynamic feedback relative to a hypothetical (and unrealizable) optimal algorithm that uses the best policy at every point during the execution.", "" ] }
1407.3896
2950439211
We develop a model of abduction in abstract argumentation, where changes to an argumentation framework act as hypotheses to explain the support of an observation. We present dialogical proof theories for the main decision problems (i.e., finding hypotheses that explain skeptical/credulous support) and we show that our model can be instantiated on the basis of abductive logic programs.
We already discussed Sakama's @cite_1 model of abduction in argumentation and mentioned some differences. Our approach is more general because we consider a hypothesis to be a change to the AF that is applied as a whole, instead of a set of independently selectable abducible arguments. On the other hand, Sakama's method supports a larger range of semantics, including (semi-)stable and skeptical preferred semantics. Furthermore, Sakama also considers observations leading to the rejection of arguments, which we do not.
{ "cite_N": [ "@cite_1" ], "mid": [ "2206132849" ], "abstract": [ "This paper studies an abduction problem in formal argumentation frameworks. Given an argument, an agent verifies whether the argument is justified or not in its argumentation framework. If the argument is not justified, the agent seeks conditions to explain the argument in its argumentation framework. We formulate such abductive reasoning in argumentation semantics and provide its computation in logic programming. Next we apply abduction in argumentation frameworks to reasoning by players in debate games. In debate games, two players have their own argumentation frameworks and each player builds claims to refute the opponent. A player may provide false or inaccurate arguments as a tactic to win the game. We show that abduction is used not only for seeking counter-claims but also for building dishonest claims in debate games." ] }
1407.3896
2950439211
We develop a model of abduction in abstract argumentation, where changes to an argumentation framework act as hypotheses to explain the support of an observation. We present dialogical proof theories for the main decision problems (i.e., finding hypotheses that explain skeptical/credulous support) and we show that our model can be instantiated on the basis of abductive logic programs.
@cite_11 use term rewriting logic to compute changes to an abstract AF with the goal of changing the status of an argument. There are two similarities to our work: (1) our production rules to generate dialogues can be seen as a kind of term rewriting rules; (2) their approach amounts to rewriting goals into statements to the effect that certain attacks in the AF are enabled or disabled. These statements resemble the moves @math and @math in our system. However, they treat attacks as entities that can be enabled or disabled independently. As discussed, different arguments (or, in this case, attacks associated with arguments) cannot be regarded as independent entities if the abstract model is instantiated.
{ "cite_N": [ "@cite_11" ], "mid": [ "74587951" ], "abstract": [ "When several agents are engaged in an argumentation process, they are faced with the problem of deciding how to contribute to the current state of the debate in order to satisfy their own goal, ie. to make an argument under a given semantics accepted or not. In this paper, we study the minimal changes or target sets on the current state of the debate that are required to achieve such a goal, where changes are the addition and or deletion of attacks among arguments. We study some properties of these target sets, and propose a Maude specification of rewriting rules which allow to compute all the target sets for some types of goals." ] }
1407.3896
2950439211
We develop a model of abduction in abstract argumentation, where changes to an argumentation framework act as hypotheses to explain the support of an observation. We present dialogical proof theories for the main decision problems (i.e., finding hypotheses that explain skeptical/credulous support) and we show that our model can be instantiated on the basis of abductive logic programs.
Goal-oriented change of AFs is also studied by Baumann @cite_19 , Baumann and Brewka @cite_0 , @cite_7 and @cite_20 . Furthermore, @cite_10 and Coste- @cite_15 frame it as a problem of revision. Other studies in which changes to AFs are considered include @cite_18 @cite_12 @cite_16 @cite_17 .
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_0", "@cite_19", "@cite_15", "@cite_16", "@cite_10", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "", "", "119034740", "2072643736", "2142438850", "", "", "", "1814404551", "2129730018" ], "abstract": [ "", "", "This paper addresses the problem of revising a Dung-style argumentation framework by adding finitely many new arguments which may interact with old ones. We study the behavior of the extensions of the augmented argumentation frameworks, taking also into account possible changes of the underlying semantics (which may be interpreted as corresponding changes of proof standards). We show both possibility and impossibility results related to the problem of enforcing a desired set of arguments. Furthermore, we prove some monotonicity results for a special class of expansions with respect to the cardinality of the set of extensions and the justification state.", "Given a semantics @s, two argumentation frameworks (AFs) F and G are said to be standard equivalent if they possess the same extensions and strongly equivalent if, for any AF H, F conjoined with H and G conjoined with H are standard equivalent. Argumentation is a dynamic process and, in general, new arguments occur in response to a former argument or, more precisely, attack a former argument. For this reason, rather than considering arbitrary expansions we focus here on expansions where new arguments and attacks may be added but the attacks among the old arguments remain unchanged. We define and characterize two new notions of equivalence between AFs (which lie in-between standard and strong equivalence), namely normal and strong expansion equivalence. Furthermore, using the characterization theorems proved in this paper, we draw the connections between all mentioned notions of equivalence including further equivalence relations, so-called weak and local expansion equivalence.", "In this paper, we investigate the revision of argumentation systems a la Dung. 
We focus on revision as minimal change of the arguments status. Contrarily to most of the previous works on the topic, the addition of new arguments is not allowed in the revision process, so that the revised system has to be obtained by modifying the attack relation only. We introduce a language of revision formulae which is expressive enough for enabling the representation of complex conditions on the acceptability of arguments in the revised system. We show how AGM belief revision postulates can be translated to the case of argumentation systems. We provide a corresponding representation theorem in terms of minimal change of the arguments statuses. Several distance-based revision operators satisfying the postulates are also pointed out, along with some methods to build revised argumentation systems. We also discuss some computational aspects of those methods.", "", "", "", "Agents engage in dialogues having as goals to make some arguments acceptable or unacceptable. To do so they may put forward arguments, adding them to the argumentation framework. Argumentation semantics can relate a change in the framework to the resulting extensions but it is not clear, given an argumentation framework and a desired acceptance state for a given set of arguments, which further arguments should be added in order to achieve those justification statuses. Our methodology, called conditional labelling, is based on argument labelling and assigns to each argument three propositional formulae. These formulae describe which arguments should be attacked by the agent in order to get a particular argument in, out, or undecided, respectively. 
Given a conditional labelling, the agents have a full knowledge about the consequences of the attacks they may raise on the acceptability of each argument without having to recompute the overall labelling of the framework for each possible set of attack they may raise.", "Since argumentation is an inherently dynamic process, it is of great importance to understand the effect of incorporating new information into given argumentation frameworks. In this work, we address this issue by analyzing equivalence between argumentation frameworks under the assumption that the frameworks in question are incomplete, i.e. further information might be added later to both frameworks simultaneously. In other words, instead of the standard notion of equivalence (which holds between two frameworks, if they possess the same extensions), we require here that frameworks F and G are also equivalent when conjoined with any further framework H. Due to the nonmonotonicity of argumentation semantics, this concept is different to (but obviously implies) the standard notion of equivalence. We thus call our new notion strong equivalence and study how strong equivalence can be decided with respect to the most important semantics for abstract argumentation frameworks. We also consider variants of strong equivalence in which we define equivalence with respect to the sets of arguments credulously (or skeptically) accepted, and restrict strong equivalence to augmentations H where no new arguments are raised." ] }
1407.3832
341128909
This paper develops a Reasoning about Actions and Change framework integrated with Default Reasoning, suitable as a Knowledge Representation and Reasoning framework for Story Comprehension. The proposed framework, which is guided strongly by existing know-how from the Psychology of Reading and Comprehension, is based on the theory of argumentation from AI. It uses argumentation to capture appropriate solutions to the frame, ramification and qualification problems and generalizations of these problems required for text comprehension. In this first part of the study the work concentrates on the central problem of integration (or elaboration) of the explicit information from the narrative in the text with the implicit (in the reader's mind) common sense world knowledge pertaining to the topic(s) of the story given in the text. We also report on our empirical efforts to gather background common sense world knowledge used by humans when reading a story and to evaluate, through a prototype system, the ability of our approach to capture both the majority and the variability of understanding of a story by the human readers in the experiments.
Many other authors have emphasized the importance of commonsense knowledge and reasoning in story comprehension @cite_33 @cite_32 @cite_2 @cite_7 @cite_40 @cite_4 @cite_37 @cite_19 , and indeed how it can offer a basis for story comprehension tasks beyond question answering @cite_10 .
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_33", "@cite_7", "@cite_32", "@cite_19", "@cite_40", "@cite_2", "@cite_10" ], "mid": [ "", "", "2035585180", "1998398906", "1538989194", "1890624015", "", "145423907", "" ], "abstract": [ "", "", "levels, such as Petőfi's text grammar and Rumelhart's schemata for stories. By defining permissible interrelations between constituents of larger texts at all levels of abstraction, these create meaning structures organized on both hierarchical and associative principles, rather than on a sequential basis. The automated \"understanding\" process consists of instantiating an appropriate set of event templates by associating a given text with the ERGO inventory of event templates and constructing a network of instantiated templates based on intertemplate relational rules and constraints. The network of instantiated templates thus represents the information content of the unstructured original text, and provides the structured input for the event record data base. Details of the automated understanding process presented here in the abstract are given in the following section. 3.2 Processing Principles. The process of data base generation involves two major functions: (1) content analysis of the incoming text; (2) event record synthesis (or production). The first involves constructing a meaning representation of the text and the second the extraction of relevant information and its storage in a data base record. The major focus of ERGO is on reports of a particular class of events which describe aircraft movements. The unit of analysis is therefore the report, a textual unit consisting of one or more paragraphs, each containing one or more sentences. The first step in the analytical process involves a lexical lookup: a lexical entry contains morphological, syntactic and semantic information, on the lines of Sager (1973) and (1973). 
Each sentence is then subjected to a syntactic analysis by means of an Augmented Transition Network (ATN) parser (Woods, 1970). Since event templates are based on propositional structures, the analytical", "This paper investigates the use of commonsense reasoning to understand texts involving stereotypical activities or scripts. We present a system that understands news stories involving four terrorism scripts. The system (1) builds a commonsense reasoning problem given an information extraction template representing a terrorist incident, and (2) uses commonsense reasoning and a commonsense knowledge base to build a model of the terrorist incident. The reasoning problem, commonsense knowledge base, and model are expressed in the classical logic event calculus. The system was developed using the MUC3 and MUC4 development data set. We present the results of running the system on the MUC3 and MUC4 test data sets, using manually generated answer key templates and templates generated automatically by two MUC4 information extraction systems. We present a detailed analysis of the models produced by the system given automatically generated templates. We present methods for answering questions based on the models produced by our system. We assess the portability of the system by extending it to handle 10 scripts frequent in Project Gutenberg American literature texts.", "The reader of a text actively constructs a rich picture of the objects, events, and situation described. The text is a vague, insufficient, and ambiguous indicator of the world that the writer intends to depict. The reader draws upon world knowledge to disambiguate and clarify the text, selecting the most plausible interpretation from among the (infinitely) many possible ones. In principle, any world knowledge whatsoever in the reader's mind can affect the choice of an interpretation. Is there a level of knowledge that is general and common to many speakers of a natural language? 
Can this level be the basis of an explanation of text interpretation? Can it be identified in a principled, projectable way? Can this level be represented for use in computational text understanding? We claim that there is such a level, called naive semantics (NS), which is commonsense knowledge associated with words. Naive semantics identifies words with concepts, which vary in type. Nominal concepts are categorizations of objects based upon naive theories concerning the nature and typical description of conceptualized objects. Verbal concepts are naive theories of the implications of conceptualized events and states. Concepts are considered naive because they are not always objectively true, and bear only a distant relation to scientific theories. An informal example of a naive nominal concept is the following description of the typical lawyer.
Natural language understanding systems that use conceptual knowledge structures [SA77, Cul78, Wil78, Car79, Leh81, Kol83] typically rely on enormous amounts of manual knowledge engineering. While much of the work on conceptual knowledge structures has been hailed as pioneering research in cognitive modeling and narrative understanding, from a practical perspective it has also been viewed with skepticism because of the underlying knowledge engineering bottleneck. The thought of building a large-scale conceptual natural language processing (NLP) system that can understand open-ended text is daunting even to the most ardent enthusiasts. So must we grit our collective teeth and assume that story understanding will be limited to prototype systems in the foreseeable future? Or will conceptual natural language processing ultimately depend on a massive, broad-scale manual knowledge engineering effort, such as CYC [LPS86]?", "" ] }
1407.3190
2185183319
We consider three different schemes for signal routing on a tree. The vertices of the tree represent transceivers that can transmit and receive signals, and are equipped with i.i.d. weights representing the strength of the transceivers. The edges of the tree are also equipped with i.i.d. weights, representing the costs for passing the edges. For each one of our schemes, we derive sharp conditions on the distributions of the vertex weights and the edge weights that determine when the root can transmit a signal over arbitrarily large distances.
Probability on trees has been a very active field of research for the last few decades; see e.g. @cite_6 for an introduction and @cite_10 for a recent account. The work here is closely related to first-passage percolation on trees and tree-indexed Markov chains, see e.g. @cite_5 @cite_8 . We also rely on results and techniques for branching random walks, see @cite_9 . Transceiver networks have previously been analyzed in the probability literature in the context of spatial Poisson processes, see @cite_3 , but the setup there is quite different from ours.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_6", "@cite_3", "@cite_5", "@cite_10" ], "mid": [ "", "2144517909", "2486019214", "2031121449", "2038983716", "121824094" ], "abstract": [ "", "These notes provide an elementary and self-contained introduction to branching ran- dom walks. Section 1 gives a brief overview of Galton-Watson trees, whereas Section 2 presents the classical law of large numbers for branching random walks. These two short sections are not exactly in- dispensable, but they introduce the idea of using size-biased trees, thus giving motivations and an avant-gout to the main part, Section 3, where branching random walks are studied from a deeper point of view, and are connected to the model of directed polymers on a tree. Tree-related random processes form a rich and exciting research subject. These notes cover only special topics. For a general account, we refer to the St-Flour lecture notes of Peres (47) and to the forthcoming book of Lyons and Peres (42), as well as to Duquesne and Le Gall (23) and Le Gall (37) for continuous random trees.", "1. Preface 2. Basic Definitions and a Few Highlights 3. Galton-Watson Trees 4. General percolation on a connected graph 5. The First-Moment Method 6. Quasi-independent Percolation 7. The Second Moment Method 8. Electrical Networks 9. Infinite Networks 10. The Method of Random Paths 11. Transience of Percolation Clusters 12. Subperiodic Trees 13. The Random Walks ( RW _ ) 14. Capacity 15. Intersection-Equivalence 16. Reconstruction for the Ising Model on a Tree 17. Unpredictable Paths in Z and EIT inZ2 18. Tree-Indexed Processes 19. Recurrence for Tree-Indexed Markov Chains 20. Dynamical Percolation 21. Stochastic Domination Between Trees", "Consider randomly scattered radio transceivers in ℝ d , each of which can transmit signals to all transceivers in a given randomly chosen region about itself. 
If a signal is retransmitted by every transceiver that receives it, under what circumstances will a signal propagate to a large distance from its starting point? Put more formally, place points x i in ℝ d according to a Poisson process with intensity 1. Then, independently for each x i , choose a bounded region A xi from some fixed distribution and let G be the random directed graph with vertex set x i and edges x i → x j whenever x j ∈ x i + A xi . We show that, for any η > 0, G will almost surely have an infinite directed path, provided the expected number of transceivers that can receive a signal directly from x i is at least 1 + η, and the regions x i + A xi do not overlap too much (in a sense that we shall make precise). One example where these conditions hold, and so gives rise to percolation, is in ℝ d , with each A xi a ball of volume 1 + η centred at x i , where η → 0 as d → ∞. Another example is in two dimensions, where the A xi are sectors of angle εθ and area 1 + η, uniformly randomly oriented within a fixed angle (1 + ε)θ. In this case we can let η → 0 as ε → 0 and still obtain percolation. The result is already known for the annulus, i.e. that the critical area tends to 1 as the ratio of the radii tends to 1, while it is known to be false for the square (ℓ ∞ ) annulus. Our results show that it does however hold for the randomly oriented square annulus.", "Suppose that i.i.d. random variables are attached to the edges of an infinite tree. When the tree is large enough, the partial sums S σ along some of its infinite paths will exhibit behavior atypical for an ordinary random walk. This principle has appeared in works on branching random walks, first-passage percolation, and RWRE on trees. We establish further quantitative versions of this principle, which are applicable in these settings. In particular, different notions of speed for such a tree-indexed walk correspond to different dimension notions for trees. 
Finally, if the labeling variables take values in a group, then properties of the group (e.g., polynomial growth or a nontrivial Poisson boundary) are reflected in the sample-path behavior of the resulting tree-indexed walk.", "Starting around the late 1950s, several research communities began relating the geometry of graphs to stochastic processes on these graphs. This book, twenty years in the making, ties together research in the field, encompassing work on percolation, isoperimetric inequalities, eigenvalues, transition probabilities, and random walks. Written by two leading researchers, the text emphasizes intuition, while giving complete proofs and more than 850 exercises. Many recent developments, in which the authors have played a leading role, are discussed, including percolation on trees and Cayley graphs, uniform spanning forests, the mass-transport technique, and connections on random walks on graphs to embedding in Hilbert space. This state-of-the-art account of probability on networks will be indispensable for graduate students and researchers alike." ] }
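The record above only summarizes the three routing schemes, so here is a small simulation of one toy scheme on a binary tree, an assumption for illustration rather than one of the paper's actual schemes: a vertex relays the signal along an edge when its i.i.d. strength exceeds the i.i.d. edge cost. The function name and the strength/cost samplers are invented here.

```python
import random

def max_signal_depth(depth, strength, cost, rng):
    """Simulate signal spread from the root of a binary tree truncated at
    `depth`: each vertex draws one i.i.d. strength (shared by both of its
    outgoing edges), each edge draws an i.i.d. cost, and the signal crosses
    an edge iff strength > cost.  Returns the deepest level reached."""
    best = 0
    stack = [0]  # depths of vertices that have received the signal
    while stack:
        d = stack.pop()
        best = max(best, d)
        if d == depth:
            continue
        s = strength(rng)              # this vertex's transceiver strength
        for _ in range(2):             # two children, independent edge costs
            if s > cost(rng):
                stack.append(d + 1)
    return best

# Illustrative supercritical case: Exp(1) strengths vs Exp(2) costs, so the
# signal crosses any single edge with probability 2/3 (mean offspring 4/3 > 1).
rng = random.Random(0)
reaches = [max_signal_depth(12, lambda r: r.expovariate(1.0),
                            lambda r: r.expovariate(2.0), rng)
           for _ in range(100)]
```

Whether the root can transmit arbitrarily far is, in this toy scheme, a survival question for a branching process whose offspring counts are correlated (both edges share the parent's strength), which is one reason the branching-random-walk toolkit cited in the record is relevant.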
1407.3247
2951884201
We study computational aspects of three prominent voting rules that use approval ballots to elect multiple winners. These rules are satisfaction approval voting, proportional approval voting, and reweighted approval voting. We first show that computing the winner for proportional approval voting is NP-hard, closing a long standing open problem. As none of the rules are strategyproof, even for dichotomous preferences, we study various strategic aspects of the rules. In particular, we examine the computational complexity of computing a best response for both a single agent and a group of agents. In many settings, we show that it is NP-hard for an agent or agents to compute how best to vote given a fixed set of approval ballots from the other agents.
The Handbook of Approval Voting discusses various approval-based multi-winner rules including @math , @math and @math . Another prominent multi-winner rule in the Handbook is @cite_15 . Each agent's approval ballot and the winning set can be seen as a binary vector. Minimax approval voting selects the set of @math candidates that minimizes the maximum Hamming distance from the submitted ballots. Although minimax approval voting is a natural and elegant rule, it has been shown that computing the winner set is unfortunately NP-hard. Strategic issues and approximation questions for minimax approval voting are covered in @cite_19 and @cite_12 where the problem is known as the ``closest string problem''.
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_12" ], "mid": [ "132136377", "2017492304", "1979149941" ], "abstract": [ "We consider approval voting elections in which each voter votes for a (possibly empty) set of candidates and the outcome consists of a set of k candidates for some parameter k, e.g., committee elections. We are interested in the min-imax approval voting rule in which the outcome represents a compromise among the voters, in the sense that the maximum distance between the preference of any voter and the outcome is as small as possible. This voting rule has two main drawbacks. First, computing an outcome that minimizes the maximum distance is computationally hard. Furthermore, any algorithm that always returns such an outcome provides incentives to voters to misreport their true preferences. In order to circumvent these drawbacks, we consider approximation algorithms, i.e., algorithms that produce an outcome that approximates the minimax distance for any given instance. Such algorithms can be considered as alternative voting rules. We present a polynomial-time 2-approximation algorithm that uses a natural linear programming relaxation for the underlying optimization problem and deterministically rounds the fractional solution in order to compute the outcome; this result improves upon the previously best known algorithm that has an approximation ratio of 3. We are furthermore interested in approximation algorithms that are resistant to manipulation by (coalitions of) voters, i.e., algorithms that do not motivate voters to misreport their true preferences in order to improve their distance from the outcome. We complement previous results in the literature with new upper and lower bounds on strategyproof and group-strategyproof algorithms.", "A new voting procedure for electing committees, called the minimax procedure, is described. 
Based on approval balloting, it chooses the committee that minimizes the maximum Hamming distance to voters’ ballots, where these ballots are weighted by their proximity to other voters’ ballots. This minimax outcome may be diametrically opposed to the outcome obtained by aggregating approval votes in the usual manner, which minimizes the sum of the Hamming distances and is called the minisum outcome. The manipulability of these procedures, and their applicability when election outcomes are restricted in various ways, are also investigated.", "CLOSEST STRING is one of the core problems in the field of consensus word analysis with particular importance for computational biology. Given k strings of the same length and a nonnegative integer d , find a center string'' s such that none of the given strings has the Hamming distance greater than d from s . CLOSEST STRING is NP-complete. In biological applications, however, d is usually very small. We show how to solve CLOSEST STRING in linear time for fixed d —the exponential growth in d is bounded by O(dd) . We extend this result to the closely related problems d -MISMATCH and DISTINGUISHING STRING SELECTION. Moreover, we also show that CLOSEST STRING is solvable in linear time when k is fixed and d is arbitrary. In summary, this means that CLOSEST STRING is fixed-parameter tractable with respect to parameter d and with respect to parameter k . Finally, the practical usefulness of our findings is substantiated by some experimental results." ] }
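The minimax rule described in this record can be brute-forced on small instances. This sketch is the unweighted variant (the record notes the full procedure also weights ballots by their proximity to other ballots); the function name is chosen here, and the exponential enumeration is in line with winner determination being NP-hard.

```python
from itertools import combinations

def minimax_committee(ballots, k):
    """Brute-force (unweighted) minimax approval voting: over all k-subsets
    of candidates, minimize the maximum Hamming distance to any ballot.
    Viewing committee and ballot as 0/1 indicator vectors, their Hamming
    distance is the size of the symmetric difference of the two sets."""
    candidates = sorted(set().union(*ballots))
    return min((frozenset(c) for c in combinations(candidates, k)),
               key=lambda com: max(len(com ^ b) for b in ballots))

# Three nested ballots; the singleton {a} is within distance 2 of them all.
ballots = [frozenset("a"), frozenset("ab"), frozenset("abc")]
winner = minimax_committee(ballots, 1)
```

Replacing `max` by `sum` in the key recovers the minisum outcome that the record contrasts with minimax.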
1407.3247
2951884201
We study computational aspects of three prominent voting rules that use approval ballots to elect multiple winners. These rules are satisfaction approval voting, proportional approval voting, and reweighted approval voting. We first show that computing the winner for proportional approval voting is NP-hard, closing a long standing open problem. As none of the rules are strategyproof, even for dichotomous preferences, we study various strategic aspects of the rules. In particular, we examine the computational complexity of computing a best response for both a single agent and a group of agents. In many settings, we show that it is NP-hard for an agent or agents to compute how best to vote given a fixed set of approval ballots from the other agents.
The area of multi-winner approval voting is closely related to the study of proportional representation when selecting a committee (SFS13a, SFS13b). Ideas from committee selection have therefore been used in computational social choice to ensure diversity when selecting a collection of objects @cite_13 . Understanding approval voting schemes which select multiple winners, as the rules we consider often do, is an important area in social choice with applications in a variety of settings, from committee selection to multi-product recommendation @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_13" ], "mid": [ "2949245628", "1238745702" ], "abstract": [ "The goal of this paper is to propose and study properties of multiwinner voting rules which can be consider as generalisations of single-winner scoring voting rules. We consider SNTV, Bloc, k-Borda, STV, and several variants of Chamberlin--Courant's and Monroe's rules and their approximations. We identify two broad natural classes of multiwinner score-based rules, and show that many of the existing rules can be captured by one or both of these approaches. We then formulate a number of desirable properties of multiwinner rules, and evaluate the rules we consider with respect to these properties.", "We develop a general framework for social choice problems in which a limited number of alternatives can be recommended to an agent population. In our budgeted social choice model, this limit is determined by a budget, capturing problems that arise naturally in a variety of contexts, and spanning the continuum from pure consensus decision making (i.e., standard social choice) to fully personalized recommendation. Our approach applies a form of segmentation to social choice problems-- requiring the selection of diverse options tailored to different agent types--and generalizes certain multiwinner election schemes. We show that standard rank aggregation methods perform poorly, and that optimization in our model is NP-complete; but we develop fast greedy algorithms with some theoretical guarantees. Experiments on real-world datasets demonstrate the effectiveness of our algorithms." ] }
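The NP-hardness result for proportional approval voting in the abstract above concerns exactly the computation sketched here: a voter approving j of the seated candidates contributes 1 + 1/2 + ... + 1/j, and the winning committee maximizes the total. For small instances this can be brute-forced; the function name and the tie-reporting convention are choices made for this sketch.

```python
from itertools import combinations

def pav_winners(ballots, k):
    """Brute-force proportional approval voting: score each k-subset by the
    harmonic satisfaction of every voter and return all subsets attaining
    the maximum (ties reported together)."""
    candidates = sorted(set().union(*ballots))
    harmonic = [0.0]
    for j in range(1, k + 1):
        harmonic.append(harmonic[-1] + 1.0 / j)
    def score(committee):
        return sum(harmonic[len(committee & b)] for b in ballots)
    scores = {c: score(set(c)) for c in combinations(candidates, k)}
    best = max(scores.values())
    return [set(c) for c, s in scores.items() if abs(s - best) < 1e-9]

# Plain approval voting would seat {a, b} (approval scores a=4, b=4, c=3);
# PAV instead gives the three c-voters a representative.
ballots = [{"a", "b"}] * 4 + [{"c"}] * 3
winners = pav_winners(ballots, 2)
```

Setting every increment in `harmonic` to 1 instead of 1/j turns the score into the plain approval score of the committee, illustrating how PAV's diminishing weights are what buy proportionality.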
1407.3686
2952397361
Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. Analogous reasoning applies to image sequences. If there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location is likely to receive high classification scores over several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose to use two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the response of such base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset, acquired from a car, to evaluate our proposal at different frame rates. We also test on a well-known dataset: Caltech. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy the most in the most dangerous situations, i.e., when a pedestrian is close to the camera.
Finally, we would like to clarify that our SSL proposal is not a substitute for the NMS and tracking post-classification stages. Rather, we expect it to allow these stages to produce more accurate results by increasing the accuracy of the classification stage. For instance, tracking must be used for predicting pedestrian intentions @cite_18 ; thus, if fewer false positives reach the tracker, we can reasonably expect to obtain more reliable pedestrian trajectories and so to guess intentions in the very short time in which this information is required (i.e., around a quarter of a second before a potential collision).
{ "cite_N": [ "@cite_18" ], "mid": [ "2286744228" ], "abstract": [ "In the context of intelligent vehicles, we perform a comparative study on recursive Bayesian filters for pedestrian path prediction at short time horizons (< 2s). We consider Extended Kalman Filters (EKF) based on single dynamical models and Interacting Multiple Models (IMM) combining several such basic models (constant velocity acceleration turn). These are applied to four typical pedestrian motion types (crossing, stopping, bending in, starting). Position measurements are provided by an external state-of-the-art stereo vision-based pedestrian detector. We investigate the accuracy of position estimation and path prediction, and the benefit of the IMMs vs. the simpler single dynamical models. Special care is given to the proper sensor modeling and parameter optimization. The dataset and evaluation framework are made public to facilitate benchmarking." ] }
1407.2697
2950793614
A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity inducing priors using submodular functions, and we use their Lovász extension to obtain a convex relaxation. For tractable classes such as Gaussian graphical models, this leads to a convex optimization problem that can be efficiently solved. We show that our method results in an improvement in the accuracy of reconstructed networks for synthetic data. We also show how our prior encourages scale-free reconstructions on a bioinformatics dataset.
The reweighted @math @cite_4 aspect refers to the method of optimization applied. A double loop method is used, in the same class as EM methods and difference of convex programming, where each @math inner problem gives a monotonically improving lower bound on the true solution.
{ "cite_N": [ "@cite_4" ], "mid": [ "2107861471" ], "abstract": [ "It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained l1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms l1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted l1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations—not by reweighting the l1 norm of the coefficient sequence as is common, but by reweighting the l1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as Compressive Sensing." ] }
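The double-loop reweighted ℓ1 scheme described in this record can be sketched in a few lines. As an assumption for compactness, the inner weighted ℓ1 problem is approximated by plain ISTA on a lasso surrogate rather than solved exactly as in the cited work, and the constants (`lam`, `eps`, iteration counts) are illustrative only.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reweighted_l1(A, b, lam=0.01, eps=0.1, outer=5, inner=500):
    """Double-loop reweighted l1: each outer pass approximately solves the
    weighted lasso  min 0.5*||Ax-b||^2 + lam*sum(w_i*|x_i|)  by ISTA, then
    sets w_i = 1/(|x_i| + eps) so that small coefficients are penalized
    more heavily on the next pass."""
    n = A.shape[1]
    x = np.zeros(n)
    w = np.ones(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth term
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - step * grad, step * lam * w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Underdetermined system with 1-sparse ground truth x* = (0, 1, 0).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = reweighted_l1(A, b)
```

Each reweighting pass yields a weighted ℓ1 objective whose solution can only improve a concave log-sum surrogate, which is the sense in which the record describes monotonically improving bounds.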
1407.2674
2949611647
We compare the sample complexity of private learning [ 2008] and sanitization [ 2008] under pure @math -differential privacy [ TCC 2006] and approximate @math -differential privacy [ Eurocrypt 2006]. We show that the sample complexity of these tasks under approximate differential privacy can be significantly lower than that under pure differential privacy. We define a family of optimization problems, which we call Quasi-Concave Promise Problems, that generalizes some of our considered tasks. We observe that a quasi-concave promise problem can be privately approximated using a solution to a smaller instance of a quasi-concave promise problem. This allows us to construct an efficient recursive algorithm solving such problems privately. Specifically, we construct private learners for point functions, threshold functions, and axis-aligned rectangles in high dimension. Similarly, we construct sanitizers for point functions and threshold functions. We also examine the sample complexity of label-private learners, a relaxation of private learning where the learner is required to only protect the privacy of the labels in the sample. We show that the VC dimension completely characterizes the sample complexity of such learners, that is, the sample complexity of learning with label privacy is equal (up to constants) to learning without privacy.
Another interesting gap between pure and approximate differential privacy is the following. @cite_13 have given a generic construction of pure-private sanitizers, in which the sample complexity grows as @math (where @math is the approximation parameter). Following that, Hardt and Rothblum @cite_1 showed that with approximate privacy, the sample complexity can be reduced to grow as @math . Currently, it is unknown whether this gap is essential.
{ "cite_N": [ "@cite_1", "@cite_13" ], "mid": [ "1985310469", "2042469398" ], "abstract": [ "We consider statistical data analysis in the interactive setting. In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacy-preserving answers to queries as they arrive. Our primary contribution is a new differentially private multiplicative weights mechanism for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen. This is the first mechanism with worst-case accuracy guarantees that can answer large numbers of interactive queries and is efficient (in terms of the runtime's dependence on the data universe size). The error is asymptotically in its dependence on the number of participants, and depends only logarithmically on the number of queries being answered. The running time is nearly linear in the size of the data universe. As a further contribution, when we relax the utility requirement and require accuracy only for databases drawn from a rich class of databases, we obtain exponential improvements in running time. Even in this relaxed setting we continue to guarantee privacy for any input database. Only the utility requirement is relaxed. Specifically, we show that when the input database is drawn from a smooth distribution — a distribution that does not place too much weight on any single data item — accuracy remains as above, and the running time becomes poly-logarithmic in the data universe size. 
The main technical contributions are the application of multiplicative weights techniques to the differential privacy setting, a new privacy analysis for the interactive setting, and a technique for reducing data dimensionality for databases drawn from smooth distributions.", "In this article, we demonstrate that, ignoring computational constraints, it is possible to release synthetic databases that are useful for accurately answering large classes of queries while preserving differential privacy. Specifically, we give a mechanism that privately releases synthetic data useful for answering a class of queries over a discrete domain with error that grows as a function of the size of the smallest net approximately representing the answers to that class of queries. We show that this in particular implies a mechanism for counting queries that gives error guarantees that grow only with the VC-dimension of the class of queries, which itself grows at most logarithmically with the size of the query class. We also show that it is not possible to release even simple classes of queries (such as intervals and their generalizations) over continuous domains with worst-case utility guarantees while preserving differential privacy. In response to this, we consider a relaxation of the utility guarantee and give a privacy preserving polynomial time algorithm that for any halfspace query will provide an answer that is accurate for some small perturbation of the query. This algorithm does not release synthetic data, but instead another data structure capable of representing an answer for each query. We also give an efficient algorithm for releasing synthetic data for the class of interval queries and axis-aligned rectangles of constant dimension over discrete domains." ] }
1407.3130
2950684935
How do we allocate scarce resources? How do we fairly allocate costs? These are two pressing challenges facing society today. I discuss two recent projects at NICTA concerning resource and cost allocation. In the first, we have been working with FoodBank Local, a social startup working in collaboration with food bank charities around the world to optimise the logistics of collecting and distributing donated food. Before we can distribute this food, we must decide how to allocate it to different charities and food kitchens. This gives rise to a fair division problem with several new dimensions, rarely considered in the literature. In the second, we have been looking at cost allocation within the distribution network of a large multinational company. This also has several new dimensions rarely considered in the literature.
A number of complex markets that use money have been developed to allocate resources. For example, in a combinatorial auction, agents express prices over bundles of items @cite_11 . Our two projects, however, only consider allocation problems where money is not transferred. Nevertheless, there are ideas from domains like combinatorial auctions which we may be able to borrow. For example, we expect the bidding languages proposed for combinatorial auctions may be useful for compactly specifying complex, real-world preferences even when money is not being transferred. As a second example, as occurs in some course allocation mechanisms used in practice, we can give agents ``virtual money'' with which to bid and thus apply an auction-based mechanism @cite_16 @cite_12 .
{ "cite_N": [ "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2116181612", "2788710522", "" ], "abstract": [ "Mechanisms that rely on course bidding are widely used at business schools in order to allocate seats at oversubscribed courses. Bids play two key roles under these mechanisms: to infer student preferences and to determine who have bigger claims on course seats. We show that these two roles may easily conflict, and preferences induced from bids may significantly differ from the true preferences. Therefore, these mechanisms, which are promoted as market mechanisms, do not necessarily yield market outcomes. We introduce a Pareto-dominant market mechanism that can be implemented by asking students for their preferences in addition to their bids over courses.", "We use theory and field data to study the draft mechanism used to allocate courses at Harvard Business School. We show that the draft is manipulable in theory, manipulated in practice, and that these manipulations cause significant welfare loss. Nevertheless, we find that welfare is higher than under its widely studied strategyproof alternative. We identify a new link between fairness and welfare that explains why the draft performs well despite the costs of strategic behavior, and then design a new draft that reduces these costs. We draw several broader lessons for market design, regarding Pareto efficiency, fairness, and strategyproofness. (JEL D63, D82, I23)", "" ] }
1407.3130
2950684935
How do we allocate scarce resources? How do we fairly allocate costs? These are two pressing challenges facing society today. I discuss two recent projects at NICTA concerning resource and cost allocation. In the first, we have been working with FoodBank Local, a social startup working in collaboration with food bank charities around the world to optimise the logistics of collecting and distributing donated food. Before we can distribute this food, we must decide how to allocate it to different charities and food kitchens. This gives rise to a fair division problem with several new dimensions, rarely considered in the literature. In the second, we have been looking at cost allocation within the distribution network of a large multinational company. This also has several new dimensions rarely considered in the literature.
Finally, computational phase transitions have been observed in a number of related areas including constraint satisfaction @cite_47 @cite_23 @cite_13 @cite_21 @cite_50 , number partitioning @cite_38 @cite_15 , TSP @cite_8 , social choice @cite_39 @cite_46 @cite_4 , and elsewhere @cite_48 @cite_44 @cite_24 @cite_51 @cite_25 . We predict that a similar analysis of phase transitions will provide insight into the precise relationship between equitability and efficiency in allocation problems.
{ "cite_N": [ "@cite_38", "@cite_4", "@cite_15", "@cite_8", "@cite_46", "@cite_48", "@cite_21", "@cite_39", "@cite_44", "@cite_24", "@cite_50", "@cite_23", "@cite_51", "@cite_47", "@cite_13", "@cite_25" ], "mid": [ "1520154906", "2133138908", "2099081893", "2096041053", "2144794045", "1551913526", "2122856906", "2952439423", "", "1495092270", "2071706116", "", "1530121390", "1820930518", "116937202", "" ], "abstract": [ "", "Voting is a simple mechanism to combine together the preferences of multiple agents. Unfortunately, agents may try to manipulate the result by mis-reporting their preferences. One barrier that might exist to such manipulation is computational complexity. In particular, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We consider two settings which represent the two types of complexity results that have been identified in this area: manipulation with unweighted votes by a single agent, and manipulation with weighted votes by a coalition of agents. In the first case, we consider Single Transferable Voting (STV), and in the second case, we consider veto voting. STV is one of the few voting rules used in practice where it is NP-hard to compute how a single agent can manipulate the result when votes are unweighted. It also appears one of the harder voting rules to manipulate since it involves multiple rounds. On the other hand, veto voting is one of the simplest representatives of voting rules where it is NP-hard to compute how a coalition of weighted agents can manipulate the result. In our experiments, we sample a number of distributions of votes including uniform, correlated and real world elections. 
In many of the elections in our experiments, it was easy to compute how to manipulate the result or to prove that manipulation was impossible. Even when we were able to identify a situation in which manipulation was hard to compute (e.g. when votes are highly correlated and the election is \"hung\"), we found that the computational difficulty of computing manipulations was somewhat precarious (e.g. with such \"hung\" elections, even a single uncorrelated voter was enough to make manipulation easy to compute).", "We illustrate the use of phase transition behavior in the study of heuristics. Using an “annealed” theory, we define a parameter that measures the “constrainedness” of an ensemble of number partitioning problems. We identify a phase transition at a critical value of constrainedness. We then show that constrainedness can be used to analyze and compare algorithms and heuristics for number partitioning in a precise and quantitative manner. For example, we demonstrate that on uniform random problems both the Karmarkar–Karp and greedy heuristics minimize the constrainedness, but that the decisions made by the Karmarkar–Karp heuristic are superior at reducing constrainedness. This supports the better performance observed experimentally for the Karmarkar–Karp heuristic. Our results refute a conjecture of Fu that phase transition behavior does not occur in number partitioning. Additionally, they demonstrate that phase transition behavior is useful for more than just simple benchmarking. It can, for instance, be used to analyze heuristics, and to compare the quality of heuristic solutions.", "The traveling salesman problem is one of the most famous combinatorial problems. We identify a natural parameter for the two-dimensional Euclidean traveling salesman problem. We show that for random problems there is a rapid transition between soluble and insoluble instances of the decision problem at a critical value of this parameter. 
Hard instances of the traveling salesman problem are associated with this transition. Similar results are seen both with randomly generated problems and benchmark problems using geographical data. Surprisingly, finite-size scaling methods developed in statistical mechanics describe the behaviour around the critical value in random problems. Such phase transition phenomena appear to be ubiquitous. Indeed, we have yet to find an NP-complete problem which lacks a similar phase transition.", "Voting is a simple mechanism to combine together the preferences of multiple agents. Agents may try to manipulate the result of voting by mis-reporting their preferences. One barrier that might exist to such manipulation is computational complexity. In particular, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. In this paper, we study empirically the manipulability of single transferable voting (STV) to determine if computational complexity is really a barrier to manipulation. STV was one of the first voting rules shown to be NP-hard. It also appears one of the harder voting rules to manipulate. We sample a number of distributions of votes including uniform and real world elections. In almost every election in our experiments, it was easy to compute how a single agent could manipulate the election or to prove that manipulation by a single agent was impossible.", "In a graph with a \"small world\" topology, nodes are highly clustered yet the path length between them is small. Such a topology can make search problems very difficult since local decisions quickly propagate globally. We show that graphs associated with many different search problems have a small world topology, and that the cost of solving such search problems can have a heavy-tailed distribution. 
The strategy of randomization and restarts appears to eliminate these heavy tails. A novel restart schedule in which the cutoff bound is increased geometrically appears particularly effective.", "We introduce a parameter that measures the \"constrainedness\" of an ensemble of combinatorial problems. If problems are over-constrained, they are likely to be insoluble. If problems are under-constrained, they are likely to be soluble. This constrainedness parameter generalizes a number of parameters previously used in different NP-complete problem classes. Phase transitions in different NP classes can thus be directly compared. This parameter can also be used in a heuristic to guide search. The heuristic captures the intuition of making the most constrained choice first, since it is often useful to branch into the least constrained subproblem. Many widely disparate heuristics can be seen as minimizing constrainedness.", "Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may only be in the worst-case since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard. This is when votes are highly correlated and the election is \"hung\". 
We show, however, that even a single uncorrelated voter is enough to make manipulation easy again.", "", "We introduce a mechanism called \"morphing\" for introducing structure or randomness into a wide variety of problems. We illustrate the usefulness of morphing by performing several different experimental studies. These studies identify the impact of a \"small-world\" topology on the cost of coloring graphs, of asymmetry on the cost of finding the optimal TSP tour, and of the dimensionality of space on the cost of finding the optimal TSP tour. We predict that morphing will find many other uses.", "A recent theoretical result by shows that many models of random binary constraint satisfaction problems become trivially insoluble as problem size increases. This insolubility is partly due to the presence of ‘flawed variables,’ variables whose values are all ‘flawed’ (or unsupported). In this paper, we analyse how seriously existing work has been affected. We survey the literature to identify experimental studies that use models and parameters that may have been affected by flaws. We then estimate theoretically and measure experimentally the size at which flawed variables can be expected to occur. To eliminate flawed values and variables in the models currently used, we introduce a ‘flawless’ generator which puts a limited amount of structure into the conflict matrix. We prove that such flawless problems are not trivially insoluble for constraint tightnesses up to 1 2. We also prove that the standard models B and C do not suffer from flaws when the constraint tightness is less than the reciprocal of domain size. We consider introducing types of structure into the constraint graph which are rare in random graphs and present experimental results with such structured graphs.", "", "We show that nodes of high degree tend to occur infrequently in random graphs but frequently in a wide variety of graphs associated with real world search problems. 
We then study some alternative models for randomly generating graphs which have been proposed to give more realistic topologies. For example, we show that Watts and Strogatz's small world model has a narrow distribution of node degree. On the other hand, Barabasi and Albert's power law model, gives graphs with both nodes of high degree and a small world topology. These graphs may therefore be useful for benchmarking. We then measure the impact of nodes of high degree and a small world topology on the cost of coloring graphs. The long tail in search costs observed with small world graphs disappears when these graphs are also constructed to contain nodes of high degree. We conjecture that this is a result of the small size of their \"backbone\", pairs of edges that are frozen to be the same color.", "Phase transitions in constraint satisfaction problems (CSP's) are the subject of intense study. We identify a control parameter for random binary CSP's. There is a rapid transition in the probability of a CSP having a solution at a critical value of this parameter. This parameter allows different phase transition behaviour to be compared in an uniform manner, for example CSP's generated under different regimes. We then show that within classes, the scaling of behaviour can be modelled by a technique called “finite size scaling”. This applies not only to probability of solubility, as has been observed before in other NP-problems, but also to search cost. Furthermore, the technique applies with equal validity to several different methods of varying problem size. As well as contributing to the understanding of phase transitions, we contribute by allowing much finer grained comparison of algorithms, and for accurate empirical extrapolations of behaviour.", "We show that a rescaled constrainedness parameter provides the basis for accurate numerical models of search cost for both backtracking and local search algorithms. 
In the past, the scaling of performance has been restricted to critically constrained problems at the phase transition. Here, we show how to extend models of search cost to the full width of the phase transition. This enables the direct comparison of algorithms on both under-constrained and over-constrained problems. We illustrate the generality of the approach using three different problem domains (satisfiability, constraint satisfaction and travelling salesperson problems) with both backtracking algorithms like the Davis-Putnam procedure and local search algorithms like GSAT. As well as modelling data from experiments, we give accurate predictions for results beyond the range of the experiments.", "" ] }
1407.2899
2408716304
Data replication and deployment of local SPARQL endpoints improve scalability and availability of public SPARQL endpoints, making the consumption of Linked Data a reality. This solution requires synchronization and specific query processing strategies to take advantage of replication. However, existing replication aware techniques in federations of SPARQL endpoints do not consider data dynamicity. We propose Fedra, an approach for querying federations of endpoints that benefits from replication. Participants in Fedra federations can copy fragments of data from several datasets, and describe them using provenance and views. These descriptions enable Fedra to reduce the number of selected endpoints while satisfying user divergence requirements. Experiments on real-world datasets suggest savings of up to three orders of magnitude.
The Semantic Web community has proposed different approaches to consume Linked Data from federations of endpoints @cite_0 @cite_9 @cite_1 @cite_6 . Although source selection and query processing techniques have been successfully implemented, none of these approaches is able to exploit information about data replication to enhance performance and answer completeness. DAW @cite_12 has recently been proposed as a source selection technique that relies on data summarization to describe RDF replicas and thus reduce the number of selected endpoints. For each triple pattern in a SPARQL query, DAW exploits information encoded in source summaries to rank relevant sources in terms of how much they can contribute to the answer. Source summaries are expressed as min-wise independent permutation vectors (MIPs) that index all the predicates in a source. Although properties of MIPs are exploited to efficiently estimate the overlap of two sources, Linked Data can change frequently, so DAW source summaries may need to be regularly recomputed to avoid obsolete answers. To overcome this limitation, Fedra provides a more abstract description of the sources that is less sensitive to data changes; data provenance and timestamps are stored to control divergence.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_12" ], "mid": [ "1808100928", "2265585838", "1484056211", "2248646379", "2210169352" ], "abstract": [ "Traditionally Semantic Web applications either included a web crawler or relied on external services to gain access to the Web of Data. Recent efforts, have enabled applications to query the entire Semantic Web for up-to-date results. Such approaches are based on either centralized indexing of semantically annotated metadata or link traversal and URI dereferencing as in the case of Linked Open Data. They pose a number of limiting assumptions, thus breaking the openness principle of the Web. In this demo we present a novel technique called Avalanche, designed to allow a data surfer to query the Semantic Web transparently. The technique makes no prior assumptions about data distribution. Specifically, Avalanche can perform \"live\" queries over the Web of Data. First, it gets on-line statistical information about the data distribution, as well as bandwidth availability. Then, it plans and executes the query in a distributed manner trying to quickly provide first answers.", "In order to leverage the full potential of the Semantic Web it is necessary to transparently query distributed RDF data sources in the same way as it has been possible with federated databases for ages. However, there are significant differences between the Web of (linked) Data and the traditional database approaches. Hence, it is not straightforward to adapt successful database techniques for RDF federation. Reasons are the missing cooperation between SPARQL end-points and the need for detailed data statistics for estimating the costs of query execution plans. 
We have implemented SPLENDID, a query optimization strategy for federating SPARQL endpoints based on statistical data obtained from voiD descriptions.", "Motivated by the ongoing success of Linked Data and the growing amount of semantic data sources available on theWeb, new challenges to query processing are emerging. Especially in distributed settings that require joining data provided by multiple sources, sophisticated optimization techniques are necessary for efficient query processing. We propose novel join processing and grouping techniques to minimize the number of remote requests, and develop an effective solution for source selection in the absence of preprocessed metadata. We present FedX, a practical framework that enables efficient SPARQL query processing on heterogeneous, virtually integrated Linked Data sources. In experiments, we demonstrate the practicability and efficiency of our framework on a set of real-world queries and data sources from the Linked Open Data cloud. With FedX we achieve a significant improvement in query performance over state-of-the-art federated query engines.", "Following the design rules of Linked Data, the number of available SPARQL endpoints that support remote query processing is quickly growing; however, because of the lack of adaptivity, query executions may frequently be unsuccessful. First, fixed plans identified following the traditional optimize-thenexecute paradigm, may timeout as a consequence of endpoint availability. Second, because blocking operators are usually implemented, endpoint query engines are not able to incrementally produce results, and may become blocked if data sources stop sending data. We present ANAPSID, an adaptive query engine for SPARQL endpoints that adapts query execution schedulers to data availability and run-time conditions. 
ANAPSID provides physical SPARQL operators that detect when a source becomes blocked or data traffic is bursty, and opportunistically, the operators produce results as quickly as data arrives from the sources. Additionally, ANAPSID operators implement main memory replacement policies to move previously computed matches to secondary memory avoiding duplicates. We compared ANAPSID performance with respect to RDF stores and endpoints, and observed that ANAPSID speeds up execution time, in some cases, in more than one order of magnitude.", "Over the last years the Web of Data has developed into a large compendium of interlinked data sets from multiple domains. Due to the decentralised architecture of this compendium, several of these datasets contain duplicated data. Yet, so far, only little attention has been paid to the effect of duplicated data on federated querying. This work presents DAW, a novel duplicate-aware approach to federated querying over the Web of Data. DAW is based on a combination of min-wise independent permutations and compact data summaries. It can be directly combined with existing federated query engines in order to achieve the same query recall values while querying fewer data sources. We extend three well-known federated query processing engines — DARQ, SPLENDID, and FedX — with DAW and compare our extensions with the original approaches. The comparison shows that DAW can greatly reduce the number of queries sent to the endpoints, while keeping high query recall values. Therefore, it can significantly improve the performance of federated query processing engines. Moreover, DAW provides a source selection mechanism that maximises the query recall, when the query processing is limited to a subset of the sources." ] }
1407.2987
2952456292
We attack the problem of learning face models for public faces from weakly-labelled images collected from web through querying a name. The data is very noisy even after face detection, with several irrelevant faces corresponding to other people. We propose a novel method, Face Association through Model Evolution (FAME), that is able to prune the data in an iterative way, for the face models associated to a name to evolve. The idea is based on capturing discriminativeness and representativeness of each instance and eliminating the outliers. The final models are used to classify faces on novel datasets with possibly different characteristics. On benchmark datasets, our results are comparable to or better than state-of-the-art studies for the task of face identification.
In @cite_19 the face-name association problem is tackled as a multiple instance learning problem over pairs of bags. Detected faces in an image are put into a bag, and names detected in the caption are put into the corresponding set of labels. A pair of bags is labeled as positive if they share at least one label, and negative otherwise. The results are reported on the Labeled Yahoo! News dataset, which is obtained by manually annotating and extending the LFW dataset. In @cite_12 , it is shown that the performance of graph-based and generative approaches for text-based face retrieval and face-name association tasks can be improved with the incorporation of logistic discriminant based metric learning (LDML) @cite_33 .
{ "cite_N": [ "@cite_19", "@cite_33", "@cite_12" ], "mid": [ "2142863987", "", "1993741228" ], "abstract": [ "Metric learning aims at finding a distance that approximates a task-specific notion of semantic similarity. Typically, a Mahalanobis distance is learned from pairs of data labeled as being semantically similar or not. In this paper, we learn such metrics in a weakly supervised setting where \"bags\" of instances are labeled with \"bags\" of labels. We formulate the problem as a multiple instance learning (MIL) problem over pairs of bags. If two bags share at least one label, we label the pair positive, and negative otherwise. We propose to learn a metric using those labeled pairs of bags, leading to MildML, for multiple instance logistic discriminant metric learning. MildML iterates between updates of the metric and selection of putative positive pairs of examples from positive pairs of bags. To evaluate our approach, we introduce a large and challenging data set, Labeled Yahoo! News, which we have manually annotated and contains 31147 detected faces of 5873 different people in 20071 images. We group the faces detected in an image into a bag, and group the names detected in the caption into a corresponding set of labels. When the labels come from manual annotation, we find that MildML using the bag-level annotation performs as well as fully supervised metric learning using instance-level annotation. We also consider performance in the case of automatically extracted labels for the bags, where some of the bag labels do not correspond to any example in the bag. In this case MildML works substantially better than relying on noisy instance-level annotations derived from the bag-level annotation by resolving face-name associations in images with their captions.", "", "In this paper, we present methods for face recognition using a collection of images with captions. 
We consider two tasks: retrieving all faces of a particular person in a data set, and establishing the correct association between the names in the captions and the faces in the images. This is challenging because of the very large appearance variation in the images, as well as the potential mismatch between images and their captions. For both tasks, we compare generative and discriminative probabilistic models, as well as methods that maximize subgraph densities in similarity graphs. We extend them by considering different metric learning techniques to obtain appropriate face representations that reduce intra person variability and increase inter person separation. For the retrieval task, we also study the benefit of query expansion. To evaluate performance, we use a new fully labeled data set of 31147 faces which extends the recent Labeled Faces in the Wild data set. We present extensive experimental results which show that metric learning significantly improves the performance of all approaches on both tasks." ] }
1407.2845
2144941777
Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in Database and Artificial Intelligence research areas for many years. In the past, it was largely investigated especially for classical database models (e.g., E/R schemas, relational databases, etc.). However, in the latest years, the widespread adoption of XML in the most disparate application fields pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aiming at finding semantic matchings between concepts defined in DTDs and XSDs. XML Matchers do not just take well-known techniques originally designed for other data models and apply them on DTDs/XSDs, but they exploit specific XML features (e.g., the hierarchical structure of a DTD/XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs/XSDs impact on the Schema Matching task. Then we introduce a template, called XML Matcher Template, that describes the main components of an XML Matcher, their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template as the baseline for objectively comparing approaches that, at first glance, might appear as unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
Schema Matching techniques have been studied in a large variety of application contexts, such as data integration, e-commerce, Data Warehousing, and distributed query answering, to name a few @cite_35 @cite_49 .
{ "cite_N": [ "@cite_35", "@cite_49" ], "mid": [ "2037344244", "2008896880" ], "abstract": [ "Software design patterns capture tried and successful design solutions [6]. Among different views on design patterns is that they are created to compensate for the design shortfalls in programming languages [5]; that is, design patterns are needed when programming languages cannot do the job in a straightforward way. Based on this view, Coplien and Zhao [5] postulate that there is a causal relationship between language features and design patterns and that relationship is couched in a more fundamental relationship between symmetries and broken symmetries. This article builds on that postulation and provides a further understanding and fresh articulation of patterns, symmetries, and broken symmetries.", "Schema matching is a basic problem in many database application domains, such as data integration, E-business, data warehousing, and semantic query processing. In current implementations, schema matching is typically performed manually, which has significant limitations. On the other hand, previous research papers have proposed many techniques to achieve a partial automation of the match operation for specific application domains. We present a taxonomy that covers many of these existing approaches, and we describe the approaches in some detail. In particular, we distinguish between schema-level and instance-level, element-level and structure-level, and language-based and constraint-based matchers. Based on our classification we review some previous match implementations thereby indicating which part of the solution space they cover. We intend our taxonomy and review of past work to be useful when comparing different approaches to schema matching, when developing a new match algorithm, and when implementing a schema matching component." ] }
1407.2845
2144941777
Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in Database and Artificial Intelligence research areas for many years. In the past, it was largely investigated especially for classical database models (e.g., E/R schemas, relational databases, etc.). However, in the latest years, the widespread adoption of XML in the most disparate application fields pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aiming at finding semantic matchings between concepts defined in DTDs and XSDs. XML Matchers do not just take well-known techniques originally designed for other data models and apply them on DTDs/XSDs, but they exploit specific XML features (e.g., the hierarchical structure of a DTD/XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs/XSDs impact on the Schema Matching task. Then we introduce a template, called XML Matcher Template, that describes the main components of an XML Matcher, their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template as the baseline for objectively comparing approaches that, at first glance, might appear as unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
Up to 2001, Schema Matching was considered an issue functional to a specific application domain. Such a vision was overturned by Rahm and Bernstein @cite_49 . They analyzed the existing literature and recognized relevant similarities among techniques that were originally designed to work in different application domains. As a consequence, they suggested considering Schema Matching as a new research problem that is interesting per se, independently of any particular application domain. The classification criteria illustrated in @cite_49 were (and still are) warmly welcomed by researchers working in the Schema Matching field, and they have been largely exploited to categorize existing approaches.
{ "cite_N": [ "@cite_49" ], "mid": [ "2008896880" ], "abstract": [ "Schema matching is a basic problem in many database application domains, such as data integration, E-business, data warehousing, and semantic query processing. In current implementations, schema matching is typically performed manually, which has significant limitations. On the other hand, previous research papers have proposed many techniques to achieve a partial automation of the match operation for specific application domains. We present a taxonomy that covers many of these existing approaches, and we describe the approaches in some detail. In particular, we distinguish between schema-level and instance-level, element-level and structure-level, and language-based and constraint-based matchers. Based on our classification we review some previous match implementations thereby indicating which part of the solution space they cover. We intend our taxonomy and review of past work to be useful when comparing different approaches to schema matching, when developing a new match algorithm, and when implementing a schema matching component." ] }
1407.2845
2144941777
Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in Database and Artificial Intelligence research areas for many years. In the past, it was largely investigated especially for classical database models (e.g., E/R schemas, relational databases, etc.). However, in the latest years, the widespread adoption of XML in the most disparate application fields pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aiming at finding semantic matchings between concepts defined in DTDs and XSDs. XML Matchers do not just take well-known techniques originally designed for other data models and apply them on DTDs/XSDs, but they exploit specific XML features (e.g., the hierarchical structure of a DTD/XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs/XSDs impact on the Schema Matching task. Then we introduce a template, called XML Matcher Template, that describes the main components of an XML Matcher, their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template as the baseline for objectively comparing approaches that, at first glance, might appear as unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
An update of the work presented in @cite_49 is proposed in @cite_102 . In that paper the authors report the main developments in Schema Matching algorithms in the decade 2001-11 and suggest a list of open research problems and current research directions in the Schema Matching field.
{ "cite_N": [ "@cite_102", "@cite_49" ], "mid": [ "2406114359", "2008896880" ], "abstract": [ "In a paper published in the 2001 VLDB Conference, we proposed treating generic schema matching as an independent problem. We developed a taxonomy of existing techniques, a new schema matching algorithm, and an approach to comparative evaluation. Since then, the field has grown into a major research topic. We briefly summarize the new techniques that have been developed and applications of the techniques in the commercial world. We conclude by discussing future trends and recommendations for further work.", "Schema matching is a basic problem in many database application domains, such as data integration, E-business, data warehousing, and semantic query processing. In current implementations, schema matching is typically performed manually, which has significant limitations. On the other hand, previous research papers have proposed many techniques to achieve a partial automation of the match operation for specific application domains. We present a taxonomy that covers many of these existing approaches, and we describe the approaches in some detail. In particular, we distinguish between schema-level and instance-level, element-level and structure-level, and language-based and constraint-based matchers. Based on our classification we review some previous match implementations thereby indicating which part of the solution space they cover. We intend our taxonomy and review of past work to be useful when comparing different approaches to schema matching, when developing a new match algorithm, and when implementing a schema matching component." ] }
1407.2845
2144941777
Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in Database and Artificial Intelligence research areas for many years. In the past, it was largely investigated especially for classical database models (e.g., E/R schemas, relational databases, etc.). However, in the latest years, the widespread adoption of XML in the most disparate application fields pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aiming at finding semantic matchings between concepts defined in DTDs and XSDs. XML Matchers do not just take well-known techniques originally designed for other data models and apply them on DTDs/XSDs, but they exploit specific XML features (e.g., the hierarchical structure of a DTD/XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs/XSDs impact on the Schema Matching task. Then we introduce a template, called XML Matcher Template, that describes the main components of an XML Matcher, their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template as the baseline for objectively comparing approaches that, at first glance, might appear as unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
In the context of the Semantic Web, Ehrig @cite_19 focused on the problem of Ontology Alignment (also called Ontology Matching), which strongly resembles the Schema Matching problem, and modeled the alignment of ontologies as a six-step process.
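Element-level name matching is a basic building block in both Schema Matching and Ontology Alignment pipelines. The following is an illustrative toy sketch of that step (the schemas, the 0.6 threshold, and the function names are assumptions, not Ehrig's actual algorithm):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Normalized edit-based similarity between two element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_schemas(source, target, threshold=0.6):
    """Return correspondences (source elem, target elem, score) above threshold."""
    matches = []
    for s in source:
        best = max(target, key=lambda t: name_similarity(s, t))
        score = name_similarity(s, best)
        if score >= threshold:
            matches.append((s, best, round(score, 2)))
    return matches

# Two hypothetical element-name lists standing in for DTD/XSD schemas.
src = ["authorName", "bookTitle", "isbn"]
tgt = ["author_name", "title", "ISBN", "publisher"]
print(match_schemas(src, tgt))
```

A real XML Matcher would combine such linguistic scores with structural evidence from the DTD/XSD hierarchy.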
{ "cite_N": [ "@cite_19" ], "mid": [ "1547224514" ], "abstract": [ "This book introduces novel methods and approaches for semantic integration. In addition to developing ground-breaking new methods for ontology alignment, the author provides extensive explanations of up-to-date case studies. It includes a thorough investigation of the foundations and provides pointers to future steps in ontology alignment with conclusion linking this work to the knowledge society." ] }
1407.2845
2144941777
Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in Database and Artificial Intelligence research areas for many years. In the past, it was largely investigated especially for classical database models (e.g., E R schemas, relational databases, etc.). However, in the latest years, the widespread adoption of XML in the most disparate application fields pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aiming at finding semantic matchings between concepts defined in DTDs and XSDs. XML Matchers do not just take well-known techniques originally designed for other data models and apply them on DTDs XSDs, but they exploit specific XML features (e.g., the hierarchical structure of a DTD XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs XSDs impact on the Schema Matching task. Then we introduce a template, called XML Matcher Template, that describes the main components of an XML Matcher, their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template as the baseline for objectively comparing approaches that, at first glance, might appear as unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
A novel and highly relevant trend in Schema Matching research regards the problem of managing uncertainty in the Schema Matching process @cite_86 . An excellent review of uncertainty in Schema Matching is proposed by Gal @cite_86 . In this book, the author presents a framework to classify the various aspects of uncertainty. The book also provides several alternative representations of Schema Matching uncertainty and discusses in depth some strategies that have recently been proposed to deal with this issue.
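One source of uncertainty discussed in this line of work is that several alternative matchings may have similar confidence, which motivates reasoning over top-K matchings rather than a single best one. A brute-force sketch of that idea over a hypothetical similarity matrix (the matrix values and function names are assumptions, not Gal's algorithm):

```python
from itertools import permutations

# Hypothetical similarity matrix: sim[i][j] = confidence that source
# attribute i corresponds to target attribute j.
sim = [
    [0.9, 0.3, 0.1],
    [0.2, 0.8, 0.5],
    [0.1, 0.4, 0.7],
]

def top_k_matchings(sim, k=3):
    """Enumerate all one-to-one matchings, ranked by total similarity."""
    n = len(sim)
    scored = []
    for perm in permutations(range(n)):
        score = sum(sim[i][perm[i]] for i in range(n))
        scored.append((round(score, 2), list(enumerate(perm))))
    scored.sort(key=lambda x: -x[0])
    return scored[:k]

for score, matching in top_k_matchings(sim):
    print(score, matching)
```

Enumerating all n! matchings is only feasible for tiny schemas; practical systems use ranked-assignment algorithms instead, but the ranking principle is the same.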
{ "cite_N": [ "@cite_86" ], "mid": [ "2036073399" ], "abstract": [ "Schema matching is the task of providing correspondences between concepts describing the meaning of data in various heterogeneous, distributed data sources. Schema matching is one of the basic operations required by the process of data and schema integration, and thus has a great effect on its outcomes, whether these involve targeted content delivery, view integration, database integration, query rewriting over heterogeneous sources, duplicate data elimination, or automatic streamlining of workflow activities that involve heterogeneous data sources. Although schema matching research has been ongoing for over 25 years, more recently a realization has emerged that schema matchers are inherently uncertain. Since 2003, work on the uncertainty in schema matching has picked up, along with research on uncertainty in other areas of data management. This lecture presents various aspects of uncertainty in schema matching within a single unified framework. We introduce basic formulations of uncertainty and provide several alternative representations of schema matching uncertainty. Then, we cover two common methods that have been proposed to deal with uncertainty in schema matching, namely ensembles, and top-K matchings, and analyze them in this context. We conclude with a set of real-world applications." ] }
1407.2587
2952756030
We study the dynamics of opinion formation in a network of coupled agents. As the network evolves to a steady state, opinions of agents within the same community converge faster than those of other agents. This framework allows us to study how network topology and network flow, which mediates the transfer of opinions between agents, both affect the formation of communities. In traditional models of opinion dynamics, agents are coupled via conservative flows, which result in one-to-one opinion transfer. However, social interactions are often non-conservative, resulting in one-to-many transfer of opinions. We study opinion formation in networks using one-to-one and one-to-many interactions and show that they lead to different community structure within the same network.
Community detection is an extremely active research area, with a variety of methods proposed, including those based on similarity clustering @cite_3 , spectral clustering @cite_1 , and graph partitioning methods that identify which edges to cut so as to minimize conductance @cite_22 @cite_34 or normalized cut @cite_0 , or to maximize modularity @cite_18 @cite_13 . These methods have been used to reveal the structure of complex networks. @cite_14 found a "core and whiskers" structure in real-world networks using conductance minimization and argued that this method cannot reveal any further structure in the giant core. Song @cite_25 claimed that self-repeating patterns exist in complex networks at all length scales. Our results corroborate these claims and show a repeating "core and whiskers" pattern in online social networks at many different length scales.
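The modularity maximized by the methods above is the Newman-Girvan quantity Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). A minimal sketch computing Q for a toy partition (the graph and partition are illustrative assumptions):

```python
def modularity(adj, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph.
    adj: dict node -> set of neighbours; communities: dict node -> label."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if communities[i] != communities[j]:
                continue  # delta term: only same-community pairs contribute
            a_ij = 1 if j in adj[i] else 0
            q += a_ij - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge (2, 3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(adj, part), 3))  # 0.357
```

Cutting the bridge (the natural two-community split) yields the highest Q among partitions of this graph, which is exactly what modularity-maximizing detectors search for.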
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_1", "@cite_3", "@cite_0", "@cite_34", "@cite_13", "@cite_25" ], "mid": [ "1971421925", "2131717044", "1578099820", "2132914434", "183030605", "2121947440", "2045107949", "2127048411", "1973353128" ], "abstract": [ "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.", "A large body of work has been devoted to identifying community structure in networks. A community is often though of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structural properties of such sets of nodes. We define the network community profile plot, which characterizes the \"best\" possible community - according to the conductance measure - over a wide range of size scales, and we study over 70 large sparse real-world networks taken from a wide range of application domains. 
Our results suggest a significantly more refined picture of community structure in large real-world networks than has been appreciated previously. Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small scales, and at larger size scales, the best possible communities gradually \"blend in\" with the rest of the network and thus become less \"community-like.\" This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. We have found, however, that a generative model, in which new edges are added via an iterative \"forest fire\" burning process, is able to produce graphs exhibiting a network community structure similar to our observations.", "Eigenvalues and the Laplacian of a graph Isoperimetric problems Diameters and eigenvalues Paths, flows, and routing Eigenvalues and quasi-randomness Expanders and explicit constructions Eigenvalues of symmetrical graphs Eigenvalues of subgraphs with boundary conditions Harnack inequalities Heat kernels Sobolev inequalities Advanced techniques for random walks on graphs Bibliography Index.", "In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. On the first glance spectral clustering appears slightly mysterious, and it is not obvious to see why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. 
We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.", "For more than 100 years, sociologists have been concerned with relatively small, cohesive social groups (Tonnies, [1887] 1940; Durkheim [1893] 1933; Spencer 1895-97; Cooley, 1909). The groups that concern sociologists are not simply categories—like redheads or people more than six feet tall. Instead they are social collectivities characterized by interaction and interpersonal ties. Concern with groups of this sort has been—and remains—at the very core of the field.", "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy e in time linear in their number of non-zeros and log (κ f (A) e), where κ f (A) is the condition number of the matrix defining the linear system. 
Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning.", "The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.", "‘Scale-free’ networks, such as linked web pages, people in social groups, or cellular interaction networks show uneven connectivity distributions: there is no typical number of links per node. Many of these networks also exhibit the ‘small-world’ effect, called ‘six degrees of separation’ when applied to sociology. A new analysis of such networks, in which nodes are partitioned into boxes of different sizes, reveals that they share the surprising feature of self-similarity. 
In other words, these networks are constructed of fractal-like self-repeating patterns or degrees of separation. This may help explain how the scale-free property of such networks arises." ] }
1407.2587
2952756030
We study the dynamics of opinion formation in a network of coupled agents. As the network evolves to a steady state, opinions of agents within the same community converge faster than those of other agents. This framework allows us to study how network topology and network flow, which mediates the transfer of opinions between agents, both affect the formation of communities. In traditional models of opinion dynamics, agents are coupled via conservative flows, which result in one-to-one opinion transfer. However, social interactions are often non-conservative, resulting in one-to-many transfer of opinions. We study opinion formation in networks using one-to-one and one-to-many interactions and show that they lead to different community structure within the same network.
Several researchers have explicitly studied how flows impact the measurement of network structure. Borgatti @cite_10 proposed that a node's centrality reflects its participation in the flow taking place on the network, with different flows leading to different notions of centrality. However, he did not directly address the relationship between flows and a network's community structure, although according to his arguments centrality is tied to group cohesiveness in networks @cite_20 . @cite_8 proposed an integrated representation of the structure and dynamics of a network by embedding dynamic flows into the edge weights of the adjacency matrix. While their framework is general and flexible enough to model the flows studied in this paper, they did not use it to find and compare community structure identified by different flows. @cite_6 showed that introducing memory into a random walk, so that the walker avoids nodes it has visited in the past, induces a different community structure on a network than an ordinary random walk does. This paper builds on these works by demonstrating that details of the microscopic dynamics governing flows affect the composition of cohesive groups, or communities, discovered within real-world social networks.
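The effect of memory on a walk, as in @cite_6 , can be illustrated with a second-order (non-backtracking) step rule: the set of admissible next nodes depends on where the walker came from, not only on where it is. A minimal sketch (the graph and function names are assumptions):

```python
# Star graph: hub 0 connected to leaves 1..4.
star_adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

def next_options(adj, prev, cur, memory):
    """Admissible next nodes. With memory=True the walker avoids its
    previous node whenever another neighbour is available (second-order walk)."""
    opts = adj[cur]
    if memory:
        trimmed = [v for v in opts if v != prev]
        if trimmed:
            opts = trimmed
    return sorted(opts)

print(next_options(star_adj, prev=1, cur=0, memory=False))  # [1, 2, 3, 4]
print(next_options(star_adj, prev=1, cur=0, memory=True))   # [2, 3, 4]
```

The memoryless walker at the hub may immediately return to leaf 1 with probability 1/4; the second-order walker never does, so the two dynamics visit node sequences with different statistics, which is what shifts the detected communities.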
{ "cite_N": [ "@cite_10", "@cite_6", "@cite_20", "@cite_8" ], "mid": [ "2127387319", "2407879802", "2127405101", "2169553632" ], "abstract": [ "Centrality measures, or at least popular interpretations of these measures, make implicit assumptions about the manner in which traffic flows through a network. For example, some measures count only geodesic paths, apparently assuming that whatever flows through the network only moves along the shortest possible paths. This paper lays out a typology of network flows based on two dimensions of variation, namely the kinds of trajectories that traffic may follow (geodesics, paths, trails, or walks) and the method of spread (broadcast, serial replication, or transfer). Measures of centrality are then matched to the kinds of flows that they are appropriate for. Simulations are used to examine the relationship between type of flow and the differential importance of nodes with respect to key measurements such as speed of reception of traffic and frequency of receiving traffic. It is shown that the off-the-shelf formulas for centrality measures are fully applicable only for the specific flow processes they are designed for, and that when they are applied to other flow processes they get the “wrong” answer. It is noted that the most commonly used centrality measures are not appropriate for most of the flows we are routinely interested in. A key claim made in this paper is that centrality measures can be regarded as generating expected values for certain kinds of node outcomes (such as speed and frequency of reception) given implicit models of how traffic flows, and that this provides a new and useful way of thinking about centrality. © 2004 Elsevier B.V. All rights reserved.", "Capturing dynamics of the spread of information and disease with random flow on networks is a paradigm. We show that this conventional approach ignores an important feature of the dynamics: where flow moves to depends on where it comes from. 
That is, memory matters. We analyze multi-step pathways from different systems and show that ignoring memory overestimates the number of pathways by up to 400 per step, with profound consequences for community detection and ranking as well as for epidemic spreading. Specifically, memoryless dynamics on networks understate the effect of communities and exaggerate the effect of highly connected nodes. Including memory reveals actual travel patterns in air traffic, ranking that favors specialized journals in scientific communication, and diseases that spread more slowly and persist longer in hospitals.", "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node's involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman's 1979 categorization. At a more substantive level, measures of centrality summarize a node's involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the one-dimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. 
Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network.", "The behavior of complex systems is determined not only by the topological organization of their interconnections but also by the dynamical processes taking place among their constituents. A faithful modeling of the dynamics is essential because different dynamical processes may be affected very differently by network topology. A full characterization of such systems thus requires a formalization that encompasses both aspects simultaneously, rather than relying only on the topological adjacency matrix. To achieve this, we introduce the concept of flow graphs, namely weighted networks where dynamical flows are embedded into the link weights. Flow graphs provide an integrated representation of the structure and dynamics of the system, which can then be analyzed with standard tools from network theory. Conversely, a structural network feature of our choice can also be used as the basis for the construction of a flow graph that will then encompass a dynamics biased by such a feature. We illustrate the ideas by focusing on the mathematical properties of generic linear processes on complex networks that can be represented as biased random walks and their dual consensus dynamics, and show how our framework improves our understanding of these processes." ] }
1407.1963
1981678666
Multi-cloud computing is a promising paradigm to support very large scale, worldwide distributed applications. Multi-cloud computing is the use of multiple, independent cloud environments, which assumes no a priori agreement between cloud providers or a third party. However, multi-cloud computing has to face several key challenges such as portability, provisioning, elasticity, and high availability. Developers will not only have to deploy applications to a specific cloud, but will also have to consider application portability from one cloud to another, and to deploy distributed applications spanning multiple clouds. This article presents soCloud, a service-oriented, component-based Platform as a Service for managing portability, elasticity, provisioning, and high availability across multiple clouds. soCloud is based on the OASIS Service Component Architecture standard in order to address portability. soCloud provides services for managing provisioning, elasticity, and high availability across multiple clouds. soCloud has been deployed and evaluated on top of ten existing cloud providers: Windows Azure, DELL KACE, Amazon EC2, CloudBees, OpenShift, dotCloud, Jelastic, Heroku, Appfog, and an Eucalyptus private cloud.
With respect to the Inter-Cloud architectural taxonomy presented in @cite_16 , soCloud can be classified into the Multi-Cloud service category. This section presents some of the work related to the multi-cloud computing challenges discussed above: portability, provisioning, elasticity, and high availability across multiple clouds.
{ "cite_N": [ "@cite_16" ], "mid": [ "1587464373" ], "abstract": [ "Although Cloud computing itself has many open problems, researchers in the field have already made the leap to envision Inter-Cloud computing. Their goal is to achieve better overall Quality of Service QoS, reliability and cost efficiency by utilizing multiple clouds. Inter-Cloud research is still in its infancy, and the body of knowledge in the area has not been well defined yet. In this work, we propose and motivate taxonomies for Inter-Cloud architectures and application brokering mechanisms. We present a detailed survey of the state of the art in terms of both academic and industry developments 20 projects, and we fit each project onto the discussed taxonomies. We discuss how the current Inter-Cloud environments facilitate brokering of distributed applications across clouds considering their nonfunctional requirements. Finally, we analyse the existing works and identify open challenges and trends in the area of Inter-Cloud application brokering. Copyright © 2012 John Wiley & Sons, Ltd." ] }
1407.1963
1981678666
Multi-cloud computing is a promising paradigm to support very large scale, worldwide distributed applications. Multi-cloud computing is the use of multiple, independent cloud environments, which assumes no a priori agreement between cloud providers or a third party. However, multi-cloud computing has to face several key challenges such as portability, provisioning, elasticity, and high availability. Developers will not only have to deploy applications to a specific cloud, but will also have to consider application portability from one cloud to another, and to deploy distributed applications spanning multiple clouds. This article presents soCloud, a service-oriented, component-based Platform as a Service for managing portability, elasticity, provisioning, and high availability across multiple clouds. soCloud is based on the OASIS Service Component Architecture standard in order to address portability. soCloud provides services for managing provisioning, elasticity, and high availability across multiple clouds. soCloud has been deployed and evaluated on top of ten existing cloud providers: Windows Azure, DELL KACE, Amazon EC2, CloudBees, OpenShift, dotCloud, Jelastic, Heroku, Appfog, and an Eucalyptus private cloud.
Portability approaches can be classified into three categories @cite_31 . The authors of mOSAIC @cite_32 deal with portability at the IaaS and PaaS levels. mOSAIC provides a component-based programming model with asynchronous communication. However, mOSAIC APIs are not standardized and are complex to put to work in practice. Our soCloud solution deals with portability with an API that runs on existing PaaS and IaaS. soCloud supports both the synchronous and asynchronous communications offered by the SCA standard. Moreover, SCA defines an easy-to-use, portable API. The Cloud4SOA @cite_9 project deals with portability between PaaS offerings using a semantic approach. soCloud instead provides portability through an API based on the SCA standard.
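The portability idea behind such provider-neutral APIs can be sketched, in miniature, as an abstract deployment interface that concrete providers implement, so an application moves between clouds unchanged. This is a hypothetical illustration (all class and method names are assumptions), not the SCA or mOSAIC API:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Illustrative provider-neutral deployment interface."""
    @abstractmethod
    def deploy(self, app: str) -> str: ...
    @abstractmethod
    def undeploy(self, app: str) -> None: ...

class FakeProvider(CloudProvider):
    """Toy in-memory provider standing in for a real PaaS backend."""
    def __init__(self):
        self.apps = set()
    def deploy(self, app):
        self.apps.add(app)
        return f"{app} running on fake-cloud"
    def undeploy(self, app):
        self.apps.discard(app)

def migrate(app, src: CloudProvider, dst: CloudProvider):
    """Portability: the same application moves between providers unchanged."""
    src.undeploy(app)
    return dst.deploy(app)
```

The application code depends only on the abstract interface; swapping providers is a matter of passing a different implementation, which is the lock-in-avoidance property these projects pursue.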
{ "cite_N": [ "@cite_9", "@cite_31", "@cite_32" ], "mid": [ "2016096494", "88174524", "1975022484" ], "abstract": [ "Cloud Platform as a Service (PaaS) is a novel paradigm that enables software developers to create (develop or integrate), deploy, execute, and manage business applications, using a service provided by a third party. The diversity and heterogeneity of the existing PaaS offerings raises several interoperability challenges. The actual Platform as a Service market is still quite young, chaotic and highly fragmented, dominated by a few providers which use and promote incompatible standards and formats. This introduces adoption barriers due to the lock-in issues that prevent the portability of data and software from one PaaS to another. Moreover, software developers do not only need to deploy applications into a specific Cloud platform, but also to migrate applications from one Cloud platform to another, and to manage distributed applications spanning multiple PaaS. In this paper, we present a multi-cloud PaaS management as a result of the Cloud4SOA European project that addresses these challenges.", "While the technological basis for cloud services is relatively mature, the development of the market is still at an early stage. There is considerable potential, but also a number of widely held concerns which are inhibiting mainstream adoption of cloud services by business. This paper is based on the outcome of the ETSI TC CLOUD Workshop, \"Grids, Clouds and Service Infrastructures\", an event which brought together key stakeholders of the grid, cloud and telecommunication domains to review the state of the art and current trends. The focus was on areas where standardization has the potential to encourage development of the market, with a particular focus on cloud computing and services. This paper introduces and expands on the conclusions reached. 
It is intended to serve as the basis for future work.", "The adoption of the Cloud computing concept and its market development are nowadays hindered by the problem of application, data and service portability between Clouds. Open application programming interfaces, standards and protocols, as well as their early integration in the software stack of the new technological offers, are the key elements towards a widely accepted solution and the basic requirements for the further development of Cloud applications. An approach for a new set of APIs for Cloud application development is discussed in this paper from the point of view of portability. The first available, proof-of-the-concept, prototype implementation of the proposed API is integrated in a new open-source deployable Cloudware, namely mOSAIC, designed to deal with multiple Cloud usage scenarios and providing further solutions for portability beyond the API." ] }
1407.1963
1981678666
Multi-cloud computing is a promising paradigm to support very large scale, worldwide distributed applications. Multi-cloud computing is the use of multiple, independent cloud environments, which assumes no a priori agreement between cloud providers or a third party. However, multi-cloud computing has to face several key challenges such as portability, provisioning, elasticity, and high availability. Developers will not only have to deploy applications to a specific cloud, but will also have to consider application portability from one cloud to another, and to deploy distributed applications spanning multiple clouds. This article presents soCloud, a service-oriented, component-based Platform as a Service for managing portability, elasticity, provisioning, and high availability across multiple clouds. soCloud is based on the OASIS Service Component Architecture standard in order to address portability. soCloud provides services for managing provisioning, elasticity, and high availability across multiple clouds. soCloud has been deployed and evaluated on top of ten existing cloud providers: Windows Azure, DELL KACE, Amazon EC2, CloudBees, OpenShift, dotCloud, Jelastic, Heroku, Appfog, and an Eucalyptus private cloud.
A great deal of research exists on dynamic resource allocation for physical and virtual machines and for clusters of virtual machines @cite_10 . Work on dynamic provisioning of resources in cloud computing may be classified into two categories. The authors in @cite_23 have addressed the problem of provisioning resources at the granularity of VMs, while other authors @cite_14 have considered provisioning at a finer granularity. In our work, we consider provisioning both at the VM level and at a finer granularity. The authors in @cite_0 have addressed the problem of deploying a cluster of virtual machines with given resource configurations across a set of physical machines, while @cite_41 defines a Java API permitting developers to monitor and manage a cluster of Java VMs and to define resource allocation policies for such clusters. Unlike @cite_0 @cite_41 , soCloud uses both an application-centric and a virtual-machine approach. Using knowledge of application workload and performance goals combined with server usage, soCloud utilizes a more versatile set of automation mechanisms.
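A common form of provisioning automation built on such monitoring is a threshold rule over observed load: scale out when average utilization exceeds a target band, scale in when it falls below it. The following is a hypothetical illustration of such a policy, not soCloud's actual mechanism (all parameter names and values are assumptions):

```python
def scale_decision(cpu_samples, target=0.6, band=0.15,
                   instances=2, max_instances=10):
    """Illustrative threshold rule: scale out when average load exceeds
    target + band, scale in when it drops below target - band."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > target + band and instances < max_instances:
        return instances + 1   # scale out
    if avg < target - band and instances > 1:
        return instances - 1   # scale in
    return instances           # stay within the band

print(scale_decision([0.9, 0.8, 0.85]))  # high load -> scale out
```

Real controllers add cooldown periods and hysteresis so that noisy samples do not cause oscillating scale-out/scale-in decisions, but the target-band structure is the same.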
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_0", "@cite_23", "@cite_10" ], "mid": [ "2064359039", "2159604625", "2105372142", "2150035198", "1993353720" ], "abstract": [ "Internet hosting centers serve multiple service sites from a common hardware base. This paper presents the design and implementation of an architecture for resource management in a hosting center operating system, with an emphasis on energy as a driving resource management issue for large server clusters. The goals are to provision server resources for co-hosted services in a way that automatically adapts to offered load, improve the energy efficiency of server clusters by dynamically resizing the active server set, and respond to power supply disruptions or thermal events by degrading service in accordance with negotiated Service Level Agreements (SLAs).Our system is based on an economic approach to managing shared server resources, in which services \"bid\" for resources as a function of delivered performance. The system continuously monitors load and plans resource allotments by estimating the value of their effects on service performance. A greedy resource allocation algorithm adjusts resource prices to balance supply and demand, allocating resources to their most efficient use. A reconfigurable server switching infrastructure directs request traffic to the servers assigned to each service. Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29 or more for a typical Web workload.", "Enterprise applications are increasingly being built using type-safe programming platforms and deployed over horizontally scalable systems. Horizontal scalability depends crucially on the ability to monitor resource usage and to define and enforce resource management policies capable of guaranteeing a desired service level. 
However, current safe language platforms have very limited support for resource management, and their cluster-enabled versions reflect this deficiency. We describe an architecture of federated Java spl trade virtual machines. Its distinguishing feature is an integrated resource management interface that addresses the above issues. It offers programmatic control over monitoring and controlling the allocation of resources to applications and their components. The scope of each policy can span multiple nodes, realizing finegrained control. New resource types can be defined and integrated into the framework. Remote management of local resources and the notion of cluster-global resources form a powerful combination capable of expressing policies that achieve effective performance isolation for cluster applications.", "A challenging issue facing Grid communities is that while Grids can provide access to many heterogeneous resources, the resources to which access is provided often do not match the needs of a specific application or service. In an environment in which both resource availability and software requirements evolve rapidly, this disconnect can lead to resource underutilization, user frustration, and much wasted effort spent on bridging the gap between applications and resources. We show here how these issues can be overcome by allowing authorized Grid clients to negotiate the creation of virtual clusters made up of virtual machines configured to suit client requirements for software environment and hardware allocation. We introduce descriptions and methods that allow us to deploy flexibly configured virtual cluster workspaces. We describe their configuration, implementation, and evaluate them in the context of a virtual cluster representing the environment in production use by the Open Science Grid. 
Our performance evaluation results show that virtual clusters representing current Grid production environments can be deployed and managed efficiently, and thus can provide an acceptable platform for Grid applications.", "The automatic provisioning of applications is an important task for the success of software as a service (SaaS) providers. Different provisioning engines from different vendors and open source projects with different interfaces have been emerging lately. Additionally, infrastructure providers that provide infrastructure on demand now provide computing resources that can be integrated in a SaaS providerpsilas computing environment. In order to allow SaaS application providers to specify generic installation and maintenance flows independent from the underlying provisioning engines we propose an architecture for a generic provisioning infrastructure based on Web services and workflow technology.", "A systematic study of issues related to suspending, migrating and resuming virtual clusters for data-driven HPC applications is presented. The interest is focused on nontrivial virtual clusters, that is where the running computation is expected to be coordinated and strongly coupled. It is shown that this requires that all cluster level operations, such as start and save, should be performed as synchronously as possible on all nodes, introducing the need of barriers at the virtual cluster computing meta-level. Once a synchronization mechanism is provided, and appropriate transport strategies have been setup, it is possible to suspend, migrate and resume whole virtual clusters composed of ''heavy'' (4 GB RAM, 6 GB disk images) virtual machines in times of the order of few minutes without disrupting parallel computation-albeit of the MapReduce type-running inside them. The approach is intrinsically parallel, and should scale without problems to larger size virtual clusters." ] }
1407.1963
1981678666
Multi-cloud computing is a promising paradigm to support very large scale worldwide distributed applications. Multi-cloud computing is the use of multiple, independent cloud environments, which assumes no a priori agreement between cloud providers or a third party. However, multi-cloud computing has to face several key challenges such as portability, provisioning, elasticity, and high availability. Developers will not only have to deploy applications to a specific cloud, but will also have to consider application portability from one cloud to another, and to deploy distributed applications spanning multiple clouds. This article presents soCloud, a service-oriented, component-based Platform as a Service for managing portability, elasticity, provisioning, and high availability across multiple clouds. soCloud is based on the OASIS Service Component Architecture standard in order to address portability. soCloud provides services for managing provisioning, elasticity, and high availability across multiple clouds. soCloud has been deployed and evaluated on top of ten existing cloud providers: Windows Azure, DELL KACE, Amazon EC2, CloudBees, OpenShift, dotCloud, Jelastic, Heroku, Appfog, and an Eucalyptus private cloud.
Managing elasticity across multiple cloud providers is a challenging issue. Although elasticity managed across multiple clouds would be beneficial when outages occur, few solutions support it. For instance, the authors in @cite_27 present a federated cloud infrastructure approach to provide elasticity for applications; however, they do not take elasticity management during outages into account. Another approach was proposed in @cite_42 , which manages elasticity with both a controller and a load balancer; however, this solution does not address the management of elasticity across multiple cloud providers. The authors in @cite_7 propose a resource manager to handle application elasticity, but their approach is specific to a single cloud provider.
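The controller-plus-load-balancer style of elasticity management discussed above can be sketched as a simple threshold rule: scale out when load exceeds an upper bound, scale in when it drops below a lower bound. The class and parameter names below are illustrative assumptions, not soCloud's actual API.

```python
# Minimal sketch of a threshold-based elasticity controller.
# All thresholds and names are hypothetical, for illustration only.

class ElasticityController:
    def __init__(self, scale_up_at=0.8, scale_down_at=0.3,
                 min_instances=1, max_instances=10):
        self.scale_up_at = scale_up_at      # average load above which we add a VM
        self.scale_down_at = scale_down_at  # average load below which we remove a VM
        self.min_instances = min_instances
        self.max_instances = max_instances

    def decide(self, instances, avg_load):
        """Return the new instance count for an observed average load in [0, 1]."""
        if avg_load > self.scale_up_at and instances < self.max_instances:
            return instances + 1
        if avg_load < self.scale_down_at and instances > self.min_instances:
            return instances - 1
        return instances
```

A real multi-cloud controller would additionally weigh provider health (to keep scaling during an outage), but the decision core is this kind of bounded hysteresis rule.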
{ "cite_N": [ "@cite_27", "@cite_42", "@cite_7" ], "mid": [ "2136674746", "2079967994", "2121636721" ], "abstract": [ "Cloud computing providers have setup several data centers at different geographical locations over the Internet in order to optimally serve needs of their customers around the world However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine optimal location for hosting application services to achieve reasonable QoS levels Further, the Cloud computing providers are unable to predict geographic distribution of users consuming their services, hence the load coordination must happen automatically, and distribution of services must change in response to changes in the load To counter this problem, we advocate creation of federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. 
This paper presents vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments The proposed InterCloud environment supports scaling of applications across multiple vendor clouds We have validated our approach by conducting a set of rigorous performance evaluation study using the CloudSim toolkit The results demonstrate that federated Cloud computing model has immense potential as it offers significant performance gains as regards to response time and cost saving under dynamic workload scenarios.", "Scalability is said to be one of the major advantages brought by the cloud paradigm and, more specifically, the one that makes it different to an \"advanced outsourcing\" solution. However, there are some important pending issues before making the dreamed automated scaling for applications come true. In this paper, the most notable initiatives towards whole application scalability in cloud environments are presented. We present relevant efforts at the edge of state of the art technology, providing an encompassing overview of the trends they each follow. We also highlight pending challenges that will likely be addressed in new research efforts and present an ideal scalable cloud system.", "Infrastructure-as-a-Service (IaaS) cloud computing offers new possibilities to scientific communities. One of the most significant is the ability to elastically provision and relinquish new resources in response to changes in demand. In our work, we develop a model of an “elastic site” that efficiently adapts services provided within a site, such as batch schedulers, storage archives, or Web services to take advantage of elastically provisioned resources. We describe the system architecture along with the issues involved with elastic provisioning, such as security, privacy, and various logistical considerations. 
To avoid over- or under-provisioning the resources we propose three different policies to efficiently schedule resource deployment based on demand. We have implemented a resource manager, built on the Nimbus toolkit to dynamically and securely extend existing physical clusters into the cloud. Our elastic site manager interfaces directly with local resource managers, such as Torque. We have developed and evaluated policies for resource provisioning on a Nimbus-based cloud at the University of Chicago, another at Indiana University, and Amazon EC2. We demonstrate a dynamic and responsive elastic cluster, capable of responding effectively to a variety of job submission patterns. We also demonstrate that we can process 10 times faster by expanding our cluster up to 150 EC2 nodes." ] }
1407.1963
1981678666
Multi-cloud computing is a promising paradigm to support very large scale worldwide distributed applications. Multi-cloud computing is the use of multiple, independent cloud environments, which assumes no a priori agreement between cloud providers or a third party. However, multi-cloud computing has to face several key challenges such as portability, provisioning, elasticity, and high availability. Developers will not only have to deploy applications to a specific cloud, but will also have to consider application portability from one cloud to another, and to deploy distributed applications spanning multiple clouds. This article presents soCloud, a service-oriented, component-based Platform as a Service for managing portability, elasticity, provisioning, and high availability across multiple clouds. soCloud is based on the OASIS Service Component Architecture standard in order to address portability. soCloud provides services for managing provisioning, elasticity, and high availability across multiple clouds. soCloud has been deployed and evaluated on top of ten existing cloud providers: Windows Azure, DELL KACE, Amazon EC2, CloudBees, OpenShift, dotCloud, Jelastic, Heroku, Appfog, and an Eucalyptus private cloud.
Cloud providers such as Amazon EC2, Windows Azure, and Jelastic already provide a load balancer service within a single cloud to distribute load among virtual machines. However, they do not provide load balancing across multiple cloud providers. Different approaches to dynamic load balancing have been proposed in the literature @cite_2 @cite_35 ; however, they do not provide a mechanism to scale the load balancers themselves. The authors in @cite_28 have explored agility as a way to quickly reassign resources; however, their approach does not take a multi-cloud environment into account. Most existing membership protocols @cite_19 employ a consensus algorithm to achieve agreement on the membership. Achieving consensus in an asynchronous distributed system is impossible without the use of timeouts to bound the time within which an action must take place. Even with the use of timeouts, achieving consensus can be relatively costly in the number of messages transmitted and in the delays incurred. To avoid such costs, soCloud uses a novel Leader-Determined Membership Protocol that does not involve the use of a consensus algorithm.
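The leader-determined idea can be sketched as follows: the leader alone mutates the member list and stamps every change with a monotonically increasing view number, so followers simply install the latest view instead of running a multi-round consensus. The class and field names here are hypothetical illustrations, not soCloud's actual protocol.

```python
# Sketch of leader-determined membership: no consensus rounds, the leader
# is the single writer of the membership view. Names are illustrative.

class LeaderMembership:
    def __init__(self, leader):
        self.leader = leader
        self.view = 0              # monotonically increasing view number
        self.members = {leader}

    def join(self, node):
        # Only the leader calls this; followers just install the snapshot.
        self.members.add(node)
        self.view += 1
        return self.snapshot()

    def remove(self, node):
        # E.g. after the leader's failure detector suspects `node`.
        self.members.discard(node)
        self.view += 1
        return self.snapshot()

    def snapshot(self):
        """Immutable view that the leader broadcasts to all members."""
        return {"view": self.view, "members": sorted(self.members)}
```

Because followers accept whichever snapshot carries the highest view number, agreement is by fiat of the leader rather than by message-expensive consensus, which matches the cost argument made above.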
{ "cite_N": [ "@cite_28", "@cite_35", "@cite_19", "@cite_2" ], "mid": [ "2106195964", "2068666958", "1997662655", "1973861090" ], "abstract": [ "Virtual machines have emerged as an attractive approach for utility computing platforms because applications running on VMs are fault- and security- isolated from each other, yet can share physical machines. An important property of a virtualized utility computing platform is how quickly it can react to changing demand. We refer to the capability of a utility computing platform to quickly reassign resources as the agility of the platform. We are targeting hosting utility provider environments where the entire platform is under the control of a single administrative domain and application instances often form application-level clusters. In this work, we examine resource reassignment mechanisms in these environments from the agility perspective and outline a new mechanism that exploits properties of a virtualized utility computing platform. This new mechanism employs ghost virtual machines (VMs), which participate in application clusters, but do not handle client requests until activated by the resource management system. We evaluate this, as well as other, mechanisms on a utility computing testbed. The results show that this ghost VM approach is superior to other approaches in its agility, and allows a new VM to be added to an existing application cluster in a few seconds with negligible overhead. This is a promising result as we develop resource management algorithms for a globally distributed utility computing platform.", "We consider policies for CPU load balancing in networks of workstations. We address the question of whether preemptive migration (migrating active processes) is necessary, or whether remote execution (migrating processes only at the time of birth) is sufficient for load balancing. We show that resolving this issue is strongly tied to understanding the process lifetime distribution. 
Our measurements indicate that the distribution of lifetimes for a UNIX process is Pareto (heavy-tailed), with a consistent functional form over a variety of workloads. We show how to apply this distribution to derive a preemptive migration policy that requires no hand-tuned parameters. We used a trace-driven simulation to show that our preemptive migration strategy is far more effective than remote execution, even when the memory transfer cost is high.", "", "Popular Web sites cannot rely on a single powerful server nor on independent mirrored-servers to support the ever-increasing request load. Distributed Web server architectures that transparently schedule client requests offer a way to meet dynamic scalability and availability requirements. The authors review the state of the art in load balancing techniques on distributed Web-server systems, and analyze the efficiencies and limitations of the various approaches." ] }
1407.2044
2950889756
In this paper we present a number of methods (manual, semi-automatic, and automatic) for tracking individual targets in high-density crowd scenes where thousands of people are gathered. The necessary data about the motion of individuals, along with much other physical information, can be extracted from consecutive image sequences in different ways, including optical flow and block motion estimation. One well-known method for tracking moving objects is block matching. Estimating subject motion this way requires the specification of a comparison window, which determines the scale of the estimate. In this work we present a real-time method for pedestrian recognition and tracking in sequences of high-resolution images obtained by a stationary (high-definition) camera located at different places in the Haram mosque in Mecca. The objective is to estimate pedestrian velocities as a function of the local density. The resulting data from tracking moving pedestrians based on video sequences are presented in the following section. Through the evaluated system, the spatio-temporal coordinates of each pedestrian during the Tawaf ritual are established. The pilgrim velocities as a function of the local densities in the Mataf area (Haram Mosque, Mecca) are illustrated and very precisely documented.
Other approaches use a neural network framework recursively to predict pedestrian motion and trajectories @cite_4 . However, the pedestrian trajectories in this system are calculated with oversimplifications. In particular, only the nearest-neighbour trajectories are considered. The main shortcoming of such an estimation is that the prediction carries no uncertainty; moreover, a comparison of different path predictions shows that assuming all objects will follow exactly the same set of paths is still far from reality.
{ "cite_N": [ "@cite_4" ], "mid": [ "2076381545" ], "abstract": [ "Abstract Rule-based systems employed to model complex object behaviours, do not necessarily provide a realistic portrayal of true behaviour. To capture the real characteristics in a specific environment, a better model may be learnt from observation. This paper presents a novel approach to learning long-term spatio-temporal patterns of objects in image sequences, using a neural network paradigm to predict future behaviour. The results demonstrate the application of our approach to the problem of predicting animal behaviour in response to a predator." ] }
1407.2044
2950889756
In this paper we present a number of methods (manual, semi-automatic, and automatic) for tracking individual targets in high-density crowd scenes where thousands of people are gathered. The necessary data about the motion of individuals, along with much other physical information, can be extracted from consecutive image sequences in different ways, including optical flow and block motion estimation. One well-known method for tracking moving objects is block matching. Estimating subject motion this way requires the specification of a comparison window, which determines the scale of the estimate. In this work we present a real-time method for pedestrian recognition and tracking in sequences of high-resolution images obtained by a stationary (high-definition) camera located at different places in the Haram mosque in Mecca. The objective is to estimate pedestrian velocities as a function of the local density. The resulting data from tracking moving pedestrians based on video sequences are presented in the following section. Through the evaluated system, the spatio-temporal coordinates of each pedestrian during the Tawaf ritual are established. The pilgrim velocities as a function of the local densities in the Mataf area (Haram Mosque, Mecca) are illustrated and very precisely documented.
Heisele and Woehler @cite_12 introduced a method that allows people counting based on video texture synthesis and reproduces motion in a novel way. The method works under the assumption that people can be segmented from the moving background by means of appearance or motion properties. The scene image is clustered based on the color and position (R, G, B, X, Y) of each pixel. The appearance of each pixel in a video frame is modelled as a mixture of Gaussian distributions. An algorithm is used that matches a spherical crust template to the foreground regions of the depth map. Matching is performed by a time-delay neural network for object recognition and motion analysis.
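The per-pixel Gaussian modelling mentioned above can be sketched with a single Gaussian per pixel, a deliberately simplified stand-in for the full mixture: a pixel is flagged as foreground when it deviates from its running mean by more than a few standard deviations. Thresholds and parameter names are illustrative assumptions, not the cited method.

```python
# Simplified per-pixel Gaussian background model (one Gaussian per pixel,
# grayscale frames). Parameters are illustrative, not from the cited work.
import numpy as np

class GaussianBackground:
    def __init__(self, first_frame, alpha=0.05, k=2.5, min_std=2.0):
        self.mean = first_frame.astype(float)                 # running mean per pixel
        self.var = np.full(first_frame.shape, min_std ** 2)   # running variance per pixel
        self.alpha = alpha    # learning rate of the running statistics
        self.k = k            # foreground threshold, in standard deviations
        self.min_std = min_std

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(float)
        std = np.sqrt(self.var)
        foreground = np.abs(frame - self.mean) > self.k * np.maximum(std, self.min_std)
        # Update statistics only where the pixel still looks like background,
        # so moving objects do not get absorbed into the model.
        bg = ~foreground
        d = frame - self.mean
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        return foreground
```

A full mixture-of-Gaussians model keeps several (mean, variance, weight) triples per pixel and picks the best-matching component, but the update rule per component is the same exponential averaging shown here.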
{ "cite_N": [ "@cite_12" ], "mid": [ "2134203912" ], "abstract": [ "In this paper we present an algorithm for recognizing walking pedestrians in sequences of color images taken from a moving camera. The recognition is based on the characteristic motion of the legs of a pedestrian walking parallel to the image plane. Each image is segmented into region-like image parts by clustering pixels in a combined color position feature space. The proposed clustering technique implies matching of corresponding clusters in consecutive frames and therefore allows clusters to be tracked over a sequence of images. Based on the observation of clusters over time a two-stage classifier extracts those clusters which most likely represent the legs of pedestrians. A fast polynomial classifier performs a rough preselection of clusters by evaluating temporal changes of a shape-dependent clusters feature. The final classification is done by a time delay neural network (TDNN) with spatio-temporal receptive fields." ] }
1407.2044
2950889756
In this paper we present a number of methods (manual, semi-automatic and automatic) for tracking individual targets in high density crowd scenes where thousand of people are gathered. The necessary data about the motion of individuals and a lot of other physical information can be extracted from consecutive image sequences in different ways, including optical flow and block motion estimation. One of the famous methods for tracking moving objects is the block matching method. This way to estimate subject motion requires the specification of a comparison window which determines the scale of the estimate. In this work we present a real-time method for pedestrian recognition and tracking in sequences of high resolution images obtained by a stationary (high definition) camera located in different places on the Haram mosque in Mecca. The objective is to estimate pedestrian velocities as a function of the local density.The resulting data of tracking moving pedestrians based on video sequences are presented in the following section. Through the evaluated system the spatio-temporal coordinates of each pedestrian during the Tawaf ritual are established. The pilgrim velocities as function of the local densities in the Mataf area (Haram Mosque Mecca) are illustrated and very precisely documented.
A significant task in video intelligence systems is the extraction of information about moving objects, e.g., detecting a moving crowd. PedCount, a pedestrian counting system using CCTV, was developed by Tsuchikawa et al. @cite_23 . It extracts objects along a single line across the path in the image by background subtraction, producing a space-time (X-T) binary image. The direction of each travelling pedestrian is determined from the orientation of the pedestrian region in the X-T image. The authors report the need for background image reconstruction due to illumination changes, and explain an algorithm that distinguishes moving objects from illumination changes based on the variance of pixel values and frame differences.
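The X-T image construction described above can be sketched as follows: sample one scan line per frame, subtract the corresponding background line, threshold the difference, and stack the binary lines over time. This is a generic background-subtraction sketch with an illustrative threshold, not PedCount's implementation.

```python
# Build a space-time (X-T) binary image from one scan line per frame.
# `threshold` is an illustrative value, not taken from the cited system.
import numpy as np

def xt_image(frames, background, row, threshold=30):
    """frames: iterable of HxW grayscale images; returns a (T, W) boolean X-T image.

    Each output row t holds the thresholded background difference of image
    row `row` in frame t, so a pedestrian crossing the line leaves a slanted
    streak whose orientation encodes walking direction.
    """
    bg_line = background[row].astype(float)
    lines = []
    for frame in frames:
        diff = np.abs(frame[row].astype(float) - bg_line)
        lines.append(diff > threshold)
    return np.stack(lines)
```

Robustness to illumination change, as the cited work discusses, would replace the fixed `background` with a periodically reconstructed one and gate the threshold on local pixel variance.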
{ "cite_N": [ "@cite_23" ], "mid": [ "2137519983" ], "abstract": [ "The paper presents a moving object extraction method robust against illumination level changes for a real-time pedestrian counting system based on an x-t space-time image. This system, called the Ped-Counter, counts the number of pedestrians using TV camera images and image processing. The TV camera takes images of a path or the entrance of a store or an exhibition hall. A practical location for the counting system is under constant illumination conditions, such as under an arcade or in a room. If the system is used in other locations and under different conditions, for example, an outdoor path or an entrance where there is direct sunlight, then it must be robust against illumination level changes. The paper proposes two new processes for extracting moving objects from space-time images using background subtraction. The first is a background image reconstruction process using statistical characteristic analysis of temporal changes of a target pixel. The other is a moving object extraction process using delayed subtraction. These processes make the system robust against illumination level changes. This method is evaluated over a half-year period at an entrance experiencing drastic illumination level changes. We also confirm that this method can extract moving objects, that it is robust against season, weather, and time, and that the Ped-Counter is accurate." ] }
1407.1687
2949371678
Neural network techniques are widely applied to obtain high-quality distributed representations of words, i.e., word embeddings, to address text mining, information retrieval, and natural language processing tasks. Recently, efficient methods have been proposed to learn word embeddings from context that capture both semantic and syntactic relationships between words. However, it is challenging to handle unseen or rare words with insufficient context. In this paper, inspired by the study of the word recognition process in cognitive psychology, we propose to take advantage of seemingly less obvious but essentially important morphological knowledge to address these challenges. In particular, we introduce a novel neural network architecture called KNET that leverages both contextual information and morphological word similarity, built from morphological knowledge, to learn word embeddings. Meanwhile, the learning architecture is also able to refine the pre-defined morphological knowledge and obtain more accurate word similarity. Experiments on an analogical reasoning task and a word similarity task both demonstrate that the proposed KNET framework can greatly enhance the effectiveness of word embeddings.
Representing words as continuous vectors has been studied for a long time @cite_7 . Many different types of models have been proposed for learning continuous representations of words, such as the well-known Latent Semantic Analysis (LSA) @cite_6 and Latent Dirichlet Allocation (LDA) @cite_34 . However, such probabilistic approaches usually suffer from limited scalability. Recently, deep learning methods have been applied to obtain continuous word embeddings for a variety of text mining, information retrieval, and natural language processing tasks @cite_32 @cite_15 @cite_8 @cite_30 @cite_37 @cite_4 @cite_12 @cite_9 @cite_16 @cite_28 @cite_26 . For example, Collobert et al. @cite_32 @cite_16 proposed a unified neural network architecture that learns word representations from large amounts of unlabeled training data to deal with several different natural language processing tasks.
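As a point of reference for the classical approaches mentioned above, LSA-style word vectors can be sketched as a truncated SVD of a word-document count matrix: the top-k left singular vectors, scaled by their singular values, serve as word embeddings. This illustrates the baseline the text contrasts with neural methods; it is not the KNET model, and the tiny corpus below is purely illustrative.

```python
# Minimal LSA sketch: word-document counts -> truncated SVD -> word vectors.
import numpy as np

def lsa_embeddings(docs, k=2):
    """docs: list of whitespace-tokenized strings; returns {word: k-dim vector}."""
    vocab = sorted({w for doc in docs for w in doc.split()})
    index = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(docs)))
    for j, doc in enumerate(docs):
        for w in doc.split():
            counts[index[w], j] += 1
    # Keep the top-k left singular directions as the embedding space.
    u, s, _ = np.linalg.svd(counts, full_matrices=False)
    return {w: u[index[w], :k] * s[:k] for w in vocab}
```

Words appearing in the same documents end up with nearly parallel vectors, which is exactly the distributional signal that neural embedding methods later learn far more scalably.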
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_4", "@cite_7", "@cite_8", "@cite_28", "@cite_9", "@cite_32", "@cite_6", "@cite_16", "@cite_15", "@cite_34", "@cite_12" ], "mid": [ "", "1423339008", "2158139315", "2962769333", "", "", "2131462252", "1978516841", "2117130368", "2953062473", "2158899491", "22861983", "1880262756", "1662133657" ], "abstract": [ "", "Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1 ). The features from the image parse tree outperform Gist descriptors for scene classification by 4 .", "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. 
You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs", "There have been several efforts to extend distributional semantics beyond individual words, to measure the similarity of word pairs, phrases, and sentences (briefly, tuples ; ordered sets of words, contiguous or noncontiguous). One way to extend beyond words is to compare two tuples using a function that combines pairwise similarities between the component words in the tuples. A strength of this approach is that it works with both relational similarity (analogy) and compositional similarity (paraphrase). However, past work required hand-coding the combination function for different tasks. The main contribution of this paper is that combination functions are generated by supervised learning. We achieve state-of-the-art results in measuring relational similarity between word pairs (SAT analogies and SemEval 2012 Task 2) and measuring compositional similarity between noun-modifier phrases and unigrams (multiple-choice paraphrase questions).", "", "", "Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. 
We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models.", "Deep stacking networks (DSN) are a special type of deep model equipped with parallel and scalable learning. We report successful applications of DSN to an information retrieval (IR) task pertaining to relevance prediction for sponsor search after careful regularization methods are incorporated to the previous DSN methods developed for speech and image classification tasks. The DSN-based system significantly outperforms the LambdaRank-based system which represents a recent state-of-the-art for IR in normalized discounted cumulative gain (NDCG) measures, despite the use of mean square error as DSN's training objective. We demonstrate desirable monotonic correlation between NDCG and classification rate in a wide range of IR quality. The weaker correlation and more flat relationship in the high IR-quality region suggest the need for developing new learning objectives and optimization methods.", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. 
We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.", "Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. 
Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. 
There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field." ] }
1407.1687
2949371678
Neural network techniques are widely applied to obtain high-quality distributed representations of words, i.e., word embeddings, to address text mining, information retrieval, and natural language processing tasks. Recently, efficient methods have been proposed to learn word embeddings from context that captures both semantic and syntactic relationships between words. However, it is challenging to handle unseen words or rare words with insufficient context. In this paper, inspired by the study on word recognition process in cognitive psychology, we propose to take advantage of seemingly less obvious but essentially important morphological knowledge to address these challenges. In particular, we introduce a novel neural network architecture called KNET that leverages both contextual information and morphological word similarity built based on morphological knowledge to learn word embeddings. Meanwhile, the learning architecture is also able to refine the pre-defined morphological knowledge and obtain more accurate word similarity. Experiments on an analogical reasoning task and a word similarity task both demonstrate that the proposed KNET framework can greatly enhance the effectiveness of word embeddings.
There are some knowledge-related word embedding works in the literature, but most of them were targeted at the problems of knowledge base completion and enhancement @cite_18 @cite_17 @cite_31 rather than producing high-quality word embeddings, which differs from our work. In contrast, some recent efforts have explored how to take advantage of knowledge to produce better word embeddings. For example, @cite_29 introduced a co-learning framework to produce both the word representation and the morpheme representation such that each of them can be mutually reinforced. @cite_0 proposed a new learning objective that integrates both a neural language model objective and a semantic prior knowledge objective, which can result in better word embeddings for semantic tasks. Moreover, a recent work @cite_36 conducted empirical studies on how to incorporate various types of knowledge in order to enhance word embeddings. According to this work, morphological, syntactic, and semantic knowledge are all valuable for improving the quality of word embeddings. In this paper, as we aim at obtaining high-quality word embeddings for rare and unknown words, we focus on leveraging morphological knowledge, since it can establish critical correlations between rare or unknown words and popular ones.
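The intuition behind leveraging morphological knowledge can be made concrete with a minimal sketch: a rare word is linked to popular ones through shared morphemes. The affix lists and the greedy segmentation below are illustrative assumptions, not the segmentation or similarity that KNET actually learns.

```python
# Minimal sketch: morpheme-overlap similarity between words.
# The affix lists and greedy segmentation are illustrative assumptions.
PREFIXES = ("un", "re", "dis")
SUFFIXES = ("ness", "ing", "ed", "ly", "s")

def morphemes(word):
    """Greedily strip at most one known prefix and one known suffix."""
    parts = set()
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p) + 2:
            parts.add(p)
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 2:
            parts.add(s)
            word = word[:-len(s)]
            break
    parts.add(word)  # remaining stem
    return parts

def similarity(w1, w2):
    """Jaccard overlap of the two words' morpheme sets."""
    m1, m2 = morphemes(w1), morphemes(w2)
    return len(m1 & m2) / len(m1 | m2)
```

Even if "unhappiness" is rare in a corpus, its morpheme overlap with the frequent "happiness" yields a similarity signal that a purely context-based method cannot obtain from the rare word's few occurrences.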
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_29", "@cite_0", "@cite_31", "@cite_17" ], "mid": [ "2156954687", "68293321", "2142377809", "2250930514", "1792926363", "2127426251" ], "abstract": [ "", "The basis of applying deep learning to solve natural language processing tasks is to obtain high-quality distributed representations of words, i.e., word embeddings, from large amounts of text data. However, text itself usually contains incomplete and ambiguous information, which makes necessity to leverage extra knowledge to understand it. Fortunately, text itself already contains well-defined morphological and syntactic knowledge; moreover, the large amount of texts on the Web enable the extraction of plenty of semantic knowledge. Therefore, it makes sense to design novel deep learning algorithms and systems in order to leverage the above knowledge to compute more effective word embeddings. In this paper, we conduct an empirical study on the capacity of leveraging morphological, syntactic, and semantic knowledge to achieve high-quality word embeddings. Our study explores these types of knowledge to define new basis for word representation, provide additional input information, and serve as auxiliary supervision in deep learning, respectively. Experiments on an analogical reasoning task, a word similarity task, and a word completion task have all demonstrated that knowledge-powered deep learning can enhance the effectiveness of word embedding.", "The techniques of using neural networks to learn distributed word representations (i.e., word embeddings) have been used to solve a variety of natural language processing tasks. The recently proposed methods, such as CBOW and Skip-gram, have demonstrated their effectiveness in learning word embeddings based on context information such that the obtained word embeddings can capture both semantic and syntactic relationships between words. 
However, it is quite challenging to produce high-quality word representations for rare or unknown words due to their insufficient context information. In this paper, we propose to leverage morphological knowledge to address this problem. Particularly, we introduce the morphological knowledge as both additional input representation and auxiliary supervision to the neural network framework. As a result, beyond word representations, the proposed neural network model will produce morpheme representations, which can be further employed to infer the representations of rare or unknown words based on their morphological structure. Experiments on an analogical reasoning task and several word similarity tasks have demonstrated the effectiveness of our method in producing high-quality words embeddings compared with the state-of-the-art methods.", "Word embeddings learned on unlabeled data are a popular tool in semantics, but may not capture the desired semantics. We propose a new learning objective that incorporates both a neural language model objective (, 2013) and prior knowledge from semantic resources to learn improved lexical semantic embeddings. We demonstrate that our embeddings improve over those learned solely on raw text in three settings: language modeling, measuring semantic similarity, and predicting human judgements.", "This paper proposes a novel approach for relation extraction from free text which is trained to jointly use information from the text and from existing knowledge. Our model is based on scoring functions that operate by learning low-dimensional embeddings of words, entities and relationships from a knowledge base. 
We empirically show on New York Times articles aligned with Freebase relations that our approach is able to efficiently use the extra information provided by a large subset of Freebase data (4M entities, 23k relationships) to improve over methods that rely on text features alone.", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively." ] }
1407.1687
2949371678
Neural network techniques are widely applied to obtain high-quality distributed representations of words, i.e., word embeddings, to address text mining, information retrieval, and natural language processing tasks. Recently, efficient methods have been proposed to learn word embeddings from context that captures both semantic and syntactic relationships between words. However, it is challenging to handle unseen words or rare words with insufficient context. In this paper, inspired by the study on word recognition process in cognitive psychology, we propose to take advantage of seemingly less obvious but essentially important morphological knowledge to address these challenges. In particular, we introduce a novel neural network architecture called KNET that leverages both contextual information and morphological word similarity built based on morphological knowledge to learn word embeddings. Meanwhile, the learning architecture is also able to refine the pre-defined morphological knowledge and obtain more accurate word similarity. Experiments on an analogical reasoning task and a word similarity task both demonstrate that the proposed KNET framework can greatly enhance the effectiveness of word embeddings.
Some previous works have attempted to include morphology in continuous models, especially in the speech recognition field, including Letter n-gram @cite_19 and feature-rich DNN-LMs @cite_23 . The first work improves the letter-based word representation by replacing the 1-of- @math word input of the restricted Boltzmann machine with a vector indicating all n-grams of order n and smaller that occur in the word. Additional information such as capitalization is added as well. In the model of feature-rich DNN-LMs, the authors expand the inputs of the network to a mixture of 142 selected full words and morphemes together with their features, such as morphological tags. Both of these works intend to capture more morphological information so as to better generalize to rare or unknown words and to lower the out-of-vocabulary rate.
{ "cite_N": [ "@cite_19", "@cite_23" ], "mid": [ "2250741237", "2050469586" ], "abstract": [ "We present a letter-based encoding for words in continuous space language models. We represent the words completely by letter n-grams instead of using the word index. This way, similar words will automatically have a similar representation. With this we hope to better generalize to unknown or rare words and to also capture morphological information. We show their influence in the task of machine translation using continuous space language models based on restricted Boltzmann machines. We evaluate the translation quality as well as the training time on a German-to-English translation task of TED and university lectures as well as on the news translation task translating from English to German. Using our new approach a gain in BLEU score by up to 0.4 points can be achieved.", "Egyptian Arabic (EA) is a colloquial version of Arabic. It is a low-resource morphologically rich language that causes problems in Large Vocabulary Continuous Speech Recognition (LVCSR). Building LMs on morpheme level is considered a better choice to achieve higher lexical coverage and better LM probabilities. Another approach is to utilize information from additional features such as morphological tags. On the other hand, LMs based on Neural Networks (NNs) with a single hidden layer have shown superiority over the conventional n-gram LMs. Recently, Deep Neural Networks (DNNs) with multiple hidden layers have achieved better performance in various tasks. In this paper, we explore the use of feature-rich DNN-LMs, where the inputs to the network are a mixture of words and morphemes along with their features. Significant Word Error Rate (WER) reductions are achieved compared to the traditional word-based LMs." ] }
1407.1687
2949371678
Neural network techniques are widely applied to obtain high-quality distributed representations of words, i.e., word embeddings, to address text mining, information retrieval, and natural language processing tasks. Recently, efficient methods have been proposed to learn word embeddings from context that captures both semantic and syntactic relationships between words. However, it is challenging to handle unseen words or rare words with insufficient context. In this paper, inspired by the study on word recognition process in cognitive psychology, we propose to take advantage of seemingly less obvious but essentially important morphological knowledge to address these challenges. In particular, we introduce a novel neural network architecture called KNET that leverages both contextual information and morphological word similarity built based on morphological knowledge to learn word embeddings. Meanwhile, the learning architecture is also able to refine the pre-defined morphological knowledge and obtain more accurate word similarity. Experiments on an analogical reasoning task and a word similarity task both demonstrate that the proposed KNET framework can greatly enhance the effectiveness of word embeddings.
In the NLP and text mining domain, Luong @cite_25 proposed a morphological Recursive Neural Network (morphoRNN) that combines recursive neural networks and neural language models to learn better word representations, in which they regarded each morpheme as a basic unit and leveraged neural language models to consider contextual information in learning morphologically-aware word representations. We will compare our proposed model with morphoRNN in Section .
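morphoRNN's core idea, building a complex word's vector bottom-up from morpheme vectors with one shared composition function, can be sketched as below. The dimensionality, random initialization, and hand-picked segmentation are illustrative assumptions; in the real model these parameters are trained jointly with a neural language model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension (illustrative)
W = rng.standard_normal((d, 2 * d)) * 0.1  # shared composition matrix
b = np.zeros(d)
# Toy morpheme embeddings for the segmentation un + happi + ness.
morpheme_vec = {m: rng.standard_normal(d) * 0.1 for m in ("un", "happi", "ness")}

def compose(parent, child):
    """One recursive step: vec = tanh(W [parent; child] + b)."""
    return np.tanh(W @ np.concatenate([parent, child]) + b)

# Build "unhappiness" bottom-up: ((un, happi), ness).
stem = compose(morpheme_vec["un"], morpheme_vec["happi"])
word = compose(stem, morpheme_vec["ness"])
```

Because the composition function and morpheme vectors are shared across the vocabulary, a representation can be assembled for any morphologically complex word, including ones never seen in training.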
{ "cite_N": [ "@cite_25" ], "mid": [ "2251012068" ], "abstract": [ "Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologicallyaware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way." ] }
1407.2002
1968407925
We model usage patterns of five different ontology-engineering projects. Users work in micro-workflows and specific user-roles can be identified. Class hierarchy influences users' edit behavior. Users edit ontologies top-down, breadth-first and prefer closely related classes. Users perform property-based workflows. Biomedical taxonomies, thesauri and ontologies in the form of the International Classification of Diseases as a taxonomy or the National Cancer Institute Thesaurus as an OWL-based ontology, play a critical role in acquiring, representing and processing information about human health. With increasing adoption and relevance, biomedical ontologies have also significantly increased in size. For example, the 11th revision of the International Classification of Diseases, which is currently under active development by the World Health Organization, contains nearly 50,000 classes representing a vast variety of different diseases and causes of death. This evolution in terms of size was accompanied by an evolution in the way ontologies are engineered. Because no single individual has the expertise to develop such large-scale ontologies, ontology-engineering projects have evolved from small-scale efforts involving just a few domain experts to large-scale projects that require effective collaboration between dozens or even hundreds of experts, practitioners and other stakeholders. Understanding the way these different stakeholders collaborate will enable us to improve editing environments that support such collaborations. In this paper, we uncover how large ontology-engineering projects, such as the International Classification of Diseases in its 11th revision, unfold by analyzing usage logs of five different biomedical ontology-engineering projects of varying sizes and scopes using Markov chains. 
We discover intriguing interaction patterns (e.g., which properties users frequently change after specific given ones) that suggest that large collaborative ontology-engineering projects are governed by a few general principles that determine and drive development. From our analysis, we identify commonalities and differences between different projects that have implications for project managers, ontology editors, developers and contributors working on collaborative ontology-engineering projects and tools in the biomedical domain.
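The Markov-chain analysis of usage logs amounts to estimating, from each contributor's ordered sequence of editing actions, the probability of the next action given the current one. A minimal sketch follows; the action names are hypothetical, not the actual change types logged by the ontology editors.

```python
from collections import Counter, defaultdict

# Hypothetical edit sequences extracted from a usage log: each entry is
# the ordered list of actions one contributor performed.
sessions = [
    ["create_class", "edit_label", "edit_definition", "edit_label"],
    ["edit_label", "edit_definition", "create_class"],
]

counts = defaultdict(Counter)
for seq in sessions:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1  # first-order transition counts

def transition_prob(a, b):
    """Maximum-likelihood estimate of P(next = b | current = a)."""
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0
```

Inspecting the resulting transition matrix is what reveals patterns such as which properties users frequently change after specific given ones.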
More recent research on collaborative authoring systems, such as Wikipedia, focuses on describing and defining not only the act of collaboration amongst strangers and the uncertain situations that contribute to a digital good @cite_21 , but also antagonism and sabotage of such systems @cite_28 . It has also been discovered only recently that the number of active Wikipedia editors is slowly but steadily declining @cite_19 . Researchers have therefore analyzed what impact reverts have on new editors of Wikipedia. Other work showed that an increase in participation can be achieved by directly delegating specific tasks to contributors. As simple as this approach may appear, the identification of work (and thus of specific tasks) is still a tedious and time-consuming process, which can only partly be automated due to its associated complexity.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_21" ], "mid": [ "2113439836", "1993500013", "2073018527" ], "abstract": [ "Research on trolls is scarce, but their activities challenge online communities; one of the main challenges of the Wikipedia community is to fight against vandalism and trolls. This study identifies Wikipedia trolls’ behaviours and motivations, and compares and contrasts hackers with trolls; it extends our knowledge about this type of vandalism and concludes that Wikipedia trolls are one type of hacker. This study reports that boredom, attention seeking, and revenge motivate trolls; they regard Wikipedia as an entertainment venue, and find pleasure from causing damage to the community and other people. Findings also suggest that trolls’ behaviours are characterized as repetitive, intentional, and harmful actions that are undertaken in isolation and under hidden virtual identities, involving violations of Wikipedia policies, and consisting of destructive participation in the community.", "Prior research on Wikipedia has characterized the growth in content and editors as being fundamentally exponential in nature, extrapolating current trends into the future. We show that recent editing activity suggests that Wikipedia growth has slowed, and perhaps plateaued, indicating that it may have come against its limits to growth. We measure growth, population shifts, and patterns of editor and administrator activities, contrasting these against past results where possible. Both the rate of page growth and editor growth has declined. As growth has declined, there are indicators of increased coordination and overhead costs, exclusion of newcomers, and resistance to new edits. 
We discuss some possible explanations for these new developments in Wikipedia including decreased opportunities for sharing existing knowledge and increased bureaucratic stress on the socio-technical system itself.", "Wikipedia editors are uniquely motivated to collaborate around current and breaking news events. However, the speed, urgency, and intensity with which these collaborations unfold also impose a substantial burden on editors' abilities to effectively coordinate tasks and process information. We analyze the patterns of activity on Wikipedia following the 2011 Tōhoku earthquake and tsunami to understand the dynamics of editor attention and participation, novel practices employed to collaborate on these articles, and the resulting coauthorship structures which emerge between editors and articles. Our findings have implications for supporting future coverage of breaking news articles, theorizing about motivations to participate in online community, and illuminating Wikipedia's potential role in storing cultural memories of catastrophe." ] }
1407.1974
2952872837
Stein kernel has recently shown promising performance on classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: i) Eigenvalue estimation becomes biased when the number of samples is inadequate, which may lead to unreliable kernel evaluation; ii) More importantly, eigenvalues only reflect the property of an individual SPD matrix. They are not necessarily optimal for computing Stein kernel when the goal is to discriminate different sets of SPD matrices. To address the two issues in one shot, we propose a discriminative Stein kernel, in which an extra parameter vector is defined to adjust the eigenvalues of the input SPD matrices. The optimal parameter values are sought by optimizing a proxy of classification performance. To show the generality of the proposed method, three different kernel learning criteria that are commonly used in the literature are employed respectively as a proxy. A comprehensive experimental study is conducted on a variety of image classification tasks to compare our proposed discriminative Stein kernel with the original Stein kernel and other commonly used methods for evaluating the similarity between SPD matrices. The experimental results demonstrate that, the discriminative Stein kernel can attain greater discrimination and better align with classification tasks by altering the eigenvalues. This makes it produce higher classification performance than the original Stein kernel and other commonly used methods.
The set of SPD matrices of size @math can be defined as @math . SPD matrices arise in various pattern analysis and computer vision tasks. Geometrically, SPD matrices form a convex half-cone in the vector space of matrices, and this cone constitutes a Riemannian manifold. A Riemannian manifold is a real smooth manifold that is differentiable and equipped with a smoothly varying inner product on each tangent space. The family of these inner products is referred to as a Riemannian metric. The special manifold structure of SPD matrices is of great importance in analysis and optimization @cite_4 .
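Membership in @math can be tested directly from the definition, symmetry plus strictly positive eigenvalues; a small numerical sketch using a Cholesky attempt:

```python
import numpy as np

def is_spd(M, tol=1e-10):
    """Symmetric positive definite: symmetric and Cholesky-factorizable."""
    if not np.allclose(M, M.T, atol=tol):
        return False
    try:
        np.linalg.cholesky(M)  # fails iff M is not positive definite
        return True
    except np.linalg.LinAlgError:
        return False
```

Cholesky succeeds exactly when all eigenvalues are strictly positive, which makes it a cheap alternative to a full eigendecomposition for this check.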
{ "cite_N": [ "@cite_4" ], "mid": [ "2158225132" ], "abstract": [ "Positive definite matrices abound in a dazzling variety of applications. This ubiquity can be in part attributed to their rich geometric structure: positive definite matrices form a self-dual convex cone whose strict interior is a Riemannian manifold. The manifold view is endowed with a \"natural\" distance function while the conic view is not. Nevertheless, drawing motivation from the conic view, we introduce the S-Divergence as a \"natural\" distance-like function on the open cone of positive definite matrices. We motivate the S-divergence via a sequence of results that connect it to the Riemannian distance. In particular, we show that (a) this divergence is the square of a distance; and (b) that it has several geometric properties similar to those of the Riemannian distance, though without being computationally as demanding. The S-divergence is even more intriguing: although nonconvex, we can still compute matrix means and medians using it to global optimality. We complement our results with some numerical experiments illustrating our theorems and our optimization algorithm for computing matrix medians." ] }
1407.1974
2952872837
Stein kernel has recently shown promising performance on classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: i) Eigenvalue estimation becomes biased when the number of samples is inadequate, which may lead to unreliable kernel evaluation; ii) More importantly, eigenvalues only reflect the property of an individual SPD matrix. They are not necessarily optimal for computing Stein kernel when the goal is to discriminate different sets of SPD matrices. To address the two issues in one shot, we propose a discriminative Stein kernel, in which an extra parameter vector is defined to adjust the eigenvalues of the input SPD matrices. The optimal parameter values are sought by optimizing a proxy of classification performance. To show the generality of the proposed method, three different kernel learning criteria that are commonly used in the literature are employed respectively as a proxy. A comprehensive experimental study is conducted on a variety of image classification tasks to compare our proposed discriminative Stein kernel with the original Stein kernel and other commonly used methods for evaluating the similarity between SPD matrices. The experimental results demonstrate that, the discriminative Stein kernel can attain greater discrimination and better align with classification tasks by altering the eigenvalues. This makes it produce higher classification performance than the original Stein kernel and other commonly used methods.
Let @math and @math be two SPD matrices. How to measure the similarity between @math and @math is a fundamental issue in SPD data processing and analysis. Recent years have seen extensive work on this issue. Respecting the Riemannian manifold, one widely used Riemannian metric is the affine-invariant Riemannian metric (AIRM) @cite_12 , which is defined as $d(\mathbf{X}, \mathbf{Y}) = \lVert \log(\mathbf{X}^{-1/2} \mathbf{Y} \mathbf{X}^{-1/2}) \rVert_F$, where @math represents the matrix logarithm and @math is the Frobenius norm. The computational cost of AIRM can be high due to the use of matrix inversion and square rooting. Some other methods directly map SPD matrices into Euclidean spaces to utilize linear algorithms @cite_35 @cite_0 . However, they fail to take full advantage of the geometric structure of the Riemannian manifold.
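Taking AIRM in its standard form, d(X, Y) = ||log(X^{-1/2} Y X^{-1/2})||_F (an assumption consistent with the matrix logarithm and Frobenius norm mentioned above), the distance reduces to the root-sum-of-squares of the log-eigenvalues of X^{-1/2} Y X^{-1/2}; a minimal numpy sketch:

```python
import numpy as np

def airm_distance(X, Y):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F
            = sqrt(sum_i log^2(lambda_i)),
    where lambda_i are the eigenvalues of X^{-1/2} Y X^{-1/2}."""
    w, V = np.linalg.eigh(X)
    X_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    lam = np.linalg.eigvalsh(X_inv_sqrt @ Y @ X_inv_sqrt)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

The eigendecomposition of X dominates the cost, which is why this metric is noticeably more expensive than Euclidean surrogates on large or many SPD matrices.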
{ "cite_N": [ "@cite_0", "@cite_35", "@cite_12" ], "mid": [ "2242953327", "2125389556", "1983496390" ], "abstract": [ "We introduce Generalized Dictionary Learning (GDL), a simple but practical framework for learning dictionaries over the manifold of positive definite matrices. We illustrate GDL by applying it to Nearest Neighbor (NN) retrieval, a task of fundamental importance in disciplines such as machine learning and computer vision. GDL distinguishes itself from traditional dictionary learning approaches by explicitly taking into account the manifold structure of the data. In particular, GDL allows performing \"sparse coding\" of positive definite matrices, which enables better NN retrieval. Experiments on several covariance matrix datasets show that GDL achieves performance rivaling state-of-the-art techniques.", "A novel approach to action recognition in video based onthe analysis of optical flow is presented. Properties of opticalflow useful for action recognition are captured usingonly the empirical covariance matrix of a bag of featuressuch as flow velocity, gradient, and divergence. The featurecovariance matrix is a low-dimensional representationof video dynamics that belongs to a Riemannian manifold.The Riemannian manifold of covariance matrices is transformedinto the vector space of symmetric matrices underthe matrix logarithm mapping. The log-covariance matrixof a test action segment is approximated by a sparse linearcombination of the log-covariance matrices of training actionsegments using a linear program and the coefficients ofthe sparse linear representation are used to recognize actions.This approach based on the unique blend of a logcovariance-descriptor and a sparse linear representation istested on the Weizmann and KTH datasets. The proposedapproach attains leave-one-out cross validation scores of94.4 correct classification rate for the Weizmann datasetand 98.5 for the KTH dataset. 
Furthermore, the method is computationally efficient and easy to implement.", "Tensors are nowadays a common source of geometric information. In this paper, we propose to endow the tensor space with an affine-invariant Riemannian metric. We demonstrate that it leads to strong theoretical properties: the cone of positive definite symmetric matrices is replaced by a regular and complete manifold without boundaries (null eigenvalues are at the infinity), the geodesic between two tensors and the mean of a set of tensors are uniquely defined, etc. We have previously shown that the Riemannian metric provides a powerful framework for generalizing statistics to manifolds. In this paper, we show that it is also possible to generalize to tensor fields many important geometric data processing algorithms such as interpolation, filtering, diffusion and restoration of missing data. For instance, most interpolation and Gaussian filtering schemes can be tackled efficiently through a weighted mean computation. Linear and anisotropic diffusion schemes can be adapted to our Riemannian framework, through partial differential evolution equations, provided that the metric of the tensor space is taken into account. For that purpose, we provide intrinsic numerical schemes to compute the gradient and Laplace-Beltrami operators. Finally, to enforce the fidelity to the data (either sparsely distributed tensors or complete tensors fields) we propose least-squares criteria based on our invariant Riemannian distance which are particularly simple and efficient to solve." ] }
1407.1808
2950612966
We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16% relative) over our baselines on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.
For semantic segmentation, several researchers have tried to use activations from off-the-shelf object detectors to guide the segmentation process. Yang et al. @cite_17 use object detections from the deformable parts model @cite_3 to segment the image, pasting figure-ground masks and reasoning about their relative depth ordering. Arbeláez et al. @cite_20 use poselet detections @cite_9 as features to score region candidates, in addition to appearance-based cues. Ladicky et al. @cite_5 use object detections as higher order potentials in a CRF-based segmentation system: all pixels in the foreground of a detected object are encouraged to share the category label of the detection. In addition, their system is allowed to switch off these potentials by assigning a true/false label to each detection. This system was extended by Boix et al. @cite_13, who added a global, image-level node in the CRF to reason about the categories present in the image, and by Kim et al. @cite_10, who added relationships between objects. In more recent work, Tighe et al. @cite_8 use exemplar object detectors to segment out the scene as well as individual instances.
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_9", "@cite_3", "@cite_5", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "", "", "1864464506", "2168356304", "1610707153", "", "2115150266", "2083542343" ], "abstract": [ "", "", "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8% and 40.5% respectively on PASCAL VOC 2009.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "Computer vision algorithms for individual tasks such as object recognition, detection and segmentation have shown impressive results in the recent past. The next challenge is to integrate all these algorithms and address the problem of scene understanding. This paper is a step towards this goal. We present a probabilistic framework for reasoning about regions, objects, and their attributes such as object class, location, and spatial extent. Our model is a Conditional Random Field defined on pixels, segments and objects. We define a global energy function for the model, which combines results from sliding window detectors, and low-level pixel-based unary and pairwise relations. One of our primary contributions is to show that this energy function can be solved efficiently. Experimental results show that our model achieves significant improvement over the baseline methods on CamVid and PASCAL VOC datasets.", "", "We address the problem of segmenting and recognizing objects in real world images, focusing on challenging articulated categories such as humans and other animals. For this purpose, we propose a novel design for region-based object detectors that integrates efficiently top-down information from scanning-windows part models and global appearance cues. Our detectors produce class-specific scores for bottom-up regions, and then aggregate the votes of multiple overlapping candidates through pixel classification. We evaluate our approach on the PASCAL segmentation challenge, and report competitive performance with respect to current leading techniques. On VOC2010, our method obtains the best results in 6/20 categories and the highest performance on articulated objects.", "We formulate a layered model for object detection and image segmentation. We describe a generative probabilistic model that composites the output of a bank of object detectors in order to define shape masks and explain the appearance, depth ordering, and labels of all pixels in an image. Notably, our system estimates both class labels and object instance labels. Building on previous benchmark criteria for object detection and image segmentation, we define a novel score that evaluates both class and instance segmentation. We evaluate our system on the PASCAL 2009 and 2010 segmentation challenge data sets and show good test results with state-of-the-art performance in several categories, including segmenting humans." ] }