Dataset schema: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
aid: 1512.02968
mid: 2953254179
Social media has emerged as a popular platform for people to express their viewpoints on political protests like the Arab Spring. Millions of people use social media to communicate and mobilize their viewpoints on protests, making it a valuable tool for organizing social movements. However, the mechanisms by which protest affects the population are not known, making it difficult to estimate the number of protestors. In this paper, inspired by sociological theories of protest participation, we propose a framework to predict from a user's past status messages and interactions whether the user's next post will be a declaration of protest. Drawing concepts from these theories, we model the interplay over time between the user's status messages and the messages interacting with him in order to make this prediction. We evaluate the framework using data from the social media platform Twitter on protests during the recent Nigerian elections and demonstrate that it can effectively predict whether a user's next post will be a declaration of protest.
The use of social media for political campaigns has received considerable attention in the literature @cite_14 @cite_20 @cite_1 . @cite_30 study the problem of extracting campaigns from social media using textual co-similarity, but they do not consider the behavior of individual users. The use of social media for political mobilization has been studied in @cite_24 , where the authors analyze the effect of political messages on political self-expression and interactions; they do not, however, predict an individual user's future declarations of protest. Behavior adoption by social media users has been modeled as an effect of information propagation in @cite_19 @cite_15 @cite_12 . The effect of external influence on information propagation in networks has been studied in @cite_8 @cite_33 . In many scenarios, however, the propagation network is not fully available; we instead use the user's history to predict whether his next post will be a declaration of protest.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_33", "@cite_8", "@cite_1", "@cite_24", "@cite_19", "@cite_15", "@cite_12", "@cite_20" ], "mid": [ "2019077935", "2121319388", "2164227199", "", "2293350154", "", "2040811557", "2951545534", "1551048630", "" ], "abstract": [ "In this manuscript, we study the problem of detecting coordinated free text campaigns in large-scale social media. These campaigns—ranging from coordinated spam messages to promotional and advertising campaigns to political astro-turfing—are growing in significance and reach with the commensurate rise in massive-scale social systems. Specifically, we propose and evaluate a content-driven framework for effectively linking free text posts with common “talking points” and extracting campaigns from large-scale social media. Three of the salient features of the campaign extraction framework are: (i) first, we investigate graph mining techniques for isolating coherent campaigns from large message-based graphs; (ii) second, we conduct a comprehensive comparative study of text-based message correlation in message and user levels; and (iii) finally, we analyze temporal behaviors of various campaign types. Through an experimental study over millions of Twitter messages we identify five major types of campaigns—namely Spam, Promotion, Template, News, and Celebrity campaigns—and we show how these campaigns may be extracted with high precision and recall.", "The Twitter Revolutions of 2009 reinvigorated the question of whether new social media have any real effect on contentious politics. In this article, the authors argue that evaluating the relation between transforming communication technologies and collective action demands recognizing how such technologies infuse specific protest ecologies. This includes looking beyond informational functions to the role of social media as organizing mechanisms and recognizing that traces of these media may reflect larger organizational schemes. Three points become salient in the case of Twitter against this background: (a) Twitter streams represent crosscutting networking mechanisms in a protest ecology, (b) they embed and are embedded in various kinds of gatekeeping processes, and (c) they reflect changing dynamics in the ecology over time. The authors illustrate their argument with reference to two hashtags used in the protests around the 2009 United Nations Climate Summit in Copenhagen.", "Many people share their activities with others through online communities. These shared activities have an impact on other users' activities. For example, users are likely to become interested in items that are adopted (e.g. liked, bought and shared) by their friends. In this paper, we propose a probabilistic model for discovering latent influence from sequences of item adoption events. An inhomogeneous Poisson process is used for modeling a sequence, in which adoption by a user triggers the subsequent adoption of the same item by other users. For modeling adoption of multiple items, we employ multiple inhomogeneous Poisson processes, which share parameters, such as influence for each user and relations between users. The proposed model can be used for finding influential users, discovering relations between users and predicting item popularity in the future. We present an efficient Bayesian inference procedure of the proposed model based on the stochastic EM algorithm. 
The effectiveness of the proposed model is demonstrated by using real data sets in a social bookmark sharing service.", "", "Social media is increasingly being used to access and disseminate information on sociopolitical issues like gun rights and general elections. The popularity and openness of social media makes it conducive for some individuals, known as advocates, who use social media to push their agendas on these issues strategically. Identifying these advocates will caution social media users before reading their information and also enable campaign managers to identify advocates for their digital political campaigns. A significant challenge in identifying advocates is that they employ nuanced strategies to shape user opinion and increase the spread of their messages, making it difficult to distinguish them from random users posting on the campaign. In this paper, we draw from social movement theories and design a quantitative framework to study the nuanced message strategies, propagation strategies, and community structure adopted by advocates for political campaigns in social media. Based on observations of their social media activities manifesting from these strategies, we investigate how to model these strategies for identifying them. We evaluate the framework using two datasets from Twitter, and our experiments demonstrate its effectiveness in identifying advocates for political campaigns with ramifications of this work directed towards assisting users as they navigate through social media spaces.", "", "In networks, contagions such as information, purchasing behaviors, and diseases, spread and diffuse from node to node over the edges of the network. Moreover, in real-world scenarios multiple contagions spread through the network simultaneously. These contagions not only propagate at the same time but they also interact and compete with each other as they spread over the network. While traditional empirical studies and models of diffusion consider individual contagions as independent and thus spreading in isolation, we study how different contagions interact with each other as they spread through the network. We develop a statistical model that allows for competition as well as cooperation of different contagions in information diffusion. Competing contagions decrease each other's probability of spreading, while cooperating contagions help each other in being adopted throughout the network. We evaluate our model on 18,000 contagions simultaneously spreading through the Twitter network. Our model learns how different contagions interact with each other and then uses these interactions to more accurately predict the diffusion of a contagion through the network. Moreover, the model also provides a compelling hypothesis for the principles that govern content interaction in information diffusion. Most importantly, we find very strong effects of interactions between contagions. Interactions cause a relative change in the spreading probability of a contagion by 71% on the average.", "Networks provide a skeleton for the spread of contagions, like, information, ideas, behaviors and diseases. Many times networks over which contagions diffuse are unobserved and need to be inferred. Here we apply survival theory to develop general additive and multiplicative risk models under which the network inference problems can be solved efficiently by exploiting their convexity. Our additive risk model generalizes several existing network inference models.
We show all these models are particular cases of our more general model. Our multiplicative model allows for modeling scenarios in which a node can either increase or decrease the risk of activation of another node, in contrast with previous approaches, which consider only positive risk increments. We evaluate the performance of our network inference algorithms on large synthetic and real cascade datasets, and show that our models are able to predict the length and duration of cascades in real data.", "We examine the growth, survival, and context of 256 novel hashtags during the 2012 U.S. presidential debates. Our analysis reveals the trajectories of hashtag use fall into two distinct classes: \"winners\" that emerge more quickly and are sustained for longer periods of time than other \"also-rans\" hashtags. We propose a \"conversational vibrancy\" framework to capture dynamics of hashtags based on their topicality, interactivity, diversity, and prominence. Statistical analyses of the growth and persistence of hashtags reveal novel relationships between features of this framework and the relative success of hashtags. Specifically, retweets always contribute to faster hashtag adoption, replies extend the life of \"winners\" while having no effect on \"also-rans.\" This is the first study on the lifecycle of hashtag adoption and use in response to purely exogenous shocks. We draw on theories of uses and gratification, organizational ecology, and language evolution to discuss these findings and their implications for understanding social influence and collective action in social media more generally.", "" ] }
Considerable attention has been given to predicting characteristics of a user's future status messages from his history @cite_5 @cite_32 @cite_22 @cite_21 . The authors in @cite_23 predict the characteristics of a user's future messages by modeling the topics of his previous posts; this uses only the content of the individual user. The authors in @cite_10 use collaborative filtering methods, incorporating the content of similar users and posts to compute characteristics of future messages. The authors of @cite_26 predict the characteristics of future messages by integrating the user's past content and temporal information with the effect of interactions between candidate users. We build upon these works to model how messages interacting with the user over time affect the probability of a protest declaration in his next post.
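To make the modeling idea above concrete, here is a minimal, hypothetical sketch of history-based next-post prediction: a user's past posts and received messages are collapsed into one document, and a classifier estimates whether the next post will be a protest declaration. The data, features (TF-IDF), and classifier (logistic regression) are illustrative assumptions, not the framework proposed in the paper.

```python
# Illustrative sketch only: predict whether a user's next post declares
# protest, from a bag-of-words summary of his past posts and interactions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one "document" per user, concatenating past
# status messages and received messages; label = next post declared protest.
histories = [
    "election rally tomorrow join us #NigeriaDecides they rigged the vote",
    "watching the game tonight great weather lovely dinner with friends",
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(histories, labels)

# Probability that a new user's next post is a declaration of protest.
print(model.predict_proba(["we march at dawn against the results"])[0, 1])
```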
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_21", "@cite_32", "@cite_23", "@cite_5", "@cite_10" ], "mid": [ "1974778163", "2094524282", "2060614944", "2246346552", "1553232534", "2104894372", "1683269307" ], "abstract": [ "The adoption of hashtags in major social networks including Twitter, Facebook, and Google+ is a strong evidence of its importance in facilitating information diffusion and social chatting. To understand the factors (e.g., user interest, posting time and tweet content) that may affect hashtag annotation in Twitter and to capture the implicit relations between latent topics in tweets and their corresponding hashtags, we propose two PLSA-style topic models to model the hashtag annotation behavior in Twitter. Content-Pivoted Model (CPM) assumes that tweet content guides the generation of hashtags while Hashtag-Pivoted Model (HPM) assumes that hashtags guide the generation of tweet content. Both models jointly incorporate user, time, hashtag and tweet content in a probabilistic framework. The PLSA-style models also enable us to verify the impact of social factor on hashtag annotation by introducing social network regularization in the two models. We evaluate the proposed models using perplexity and demonstrate their effectiveness in two applications: retrospective hashtag annotation and related hashtag discovery. Our results show that HPM outperforms CPM by perplexity and both user and time are important factors that affect model performance. In addition, incorporating social network regularization does not improve model performance. Our experimental results also demonstrate the effectiveness of our models in both applications compared with baseline methods.", "Topic recommendation can help users deal with the information overload issue in micro-blogging communities. This paper proposes to use the implicit information network formed by the multiple relationships among users, topics and micro-blogs, and the temporal information of micro-blogs to find semantically and temporally relevant topics of each topic, and to profile users' time-drifting topic interests. The Content based, Nearest Neighborhood based and Matrix Factorization models are used to make personalized recommendations. The effectiveness of the proposed approaches is demonstrated in the experiments conducted on a real world dataset that collected from Twitter.com.", "Social media is gaining popularity as a medium of communication before, during, and after crises. In several recent disasters, it has become evident that social media sites like Twitter and Facebook are an important source of information, and in cases they have even assisted in relief efforts. We propose a novel approach to identify a subset of active users during a crisis who can be tracked for fast access to information. Using a Twitter dataset that consists of 12.9 million tweets from 5 countries that are part of the \"Arab Spring\" movement, we show how instant information access can be achieved by user identification along two dimensions: user's location and the user's affinity towards topics of discussion. Through evaluations, we demonstrate that users selected by our approach generate more information and the quality of the information is better than that of users identified using state-of-the-art techniques.", "Social media is being increasingly used to request information and help in situations like natural disasters, where time is a critical commodity. 
However, generic social media platforms are not explicitly designed for timely information seeking, making it difficult for users to obtain prompt responses. Algorithms to ensure prompt responders for questions in social media have to understand the factors affecting their response time. In this paper, we draw from sociological studies on information seeking and organizational behavior to model the future availability and past response behavior of the candidate responders. We integrate these criteria with their interests to identify users who can provide timely and relevant responses to questions posted in social media. We propose a learning algorithm to derive optimal rankings of responders for a given question. We present questions posted on Twitter as a form of information seeking activity in social media. Our experiments demonstrate that the proposed framework is useful in identifying timely and relevant responders for questions in social media.", "Since the introduction of microblogging services, there has been a continuous growth of short-text social networking on the Internet. With the generation of large amounts of microposts, there is a need for effective categorization and search of the data. Twitter, one of the largest microblogging sites, allows users to make use of hashtags to categorize their posts. However, the majority of tweets do not contain tags, which hinders the quality of the search results. In this paper, we propose a novel method for unsupervised and content-based hashtag recommendation for tweets. Our approach relies on Latent Dirichlet Allocation (LDA) to model the underlying topic assignment of language classified tweets. The advantage of our approach is the use of a topic distribution to recommend general hashtags.", "Researchers and social observers have both believed that hashtags, as a new type of organizational objects of information, play a dual role in online microblogging communities (e.g., Twitter). On one hand, a hashtag serves as a bookmark of content, which links tweets with similar topics; on the other hand, a hashtag serves as the symbol of a community membership, which bridges a virtual community of users. Are the real users aware of this dual role of hashtags? Is the dual role affecting their behavior of adopting a hashtag? Is hashtag adoption predictable? We take the initiative to investigate and quantify the effects of the dual role on hashtag adoption. We propose comprehensive measures to quantify the major factors of how a user selects content tags as well as joins communities. Experiments using large scale Twitter datasets prove the effectiveness of the dual role, where both the content measures and the community measures significantly correlate to hashtag adoption on Twitter. With these measures as features, a machine learning model can effectively predict the future adoption of hashtags that a user has never used before.", "Twitter network is currently overwhelmed by massive amount of tweets generated by its users. To effectively organize and search tweets, users have to depend on appropriate hashtags inserted into tweets. We begin our research on hashtags by first analyzing a Twitter dataset generated by more than 150,000 Singapore users over a three-month period. Among several interesting findings about hashtag usage by this user community, we have found a consistent and significant use of new hashtags on a daily basis. This suggests that most hashtags have very short life span. 
We further propose a novel hashtag recommendation method based on collaborative filtering and the method recommends hashtags found in the previous month's data. Our method considers both user preferences and tweet content in selecting hashtags to be recommended. Our experiments show that our method yields better performance than recommendation based only on tweet content, even by considering the hashtags adopted by a small number (1 to 3) of users who share similar user preferences." ] }
Brownian motion on networks has been investigated in @cite_3 @cite_17 . The theory for adapting Brownian motion to the network setting is developed in @cite_3 , with applications in community discovery. Brownian motion has also been used to model stock-price movements, owing to its ability to capture sharp changes @cite_7 . More recently, it has been used to model information propagation in the presence of external influence in social networks @cite_4 . In most scenarios, the complete propagation network between the candidate users is not available. To address this, we treat each user as a separate entity and model his past interactions and status messages to predict his protest participation.
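The stock-price application mentioned above typically refers to geometric Brownian motion; as a point of reference, here is a minimal simulation sketch. All parameter values are illustrative assumptions, not values taken from any cited work.

```python
# Minimal sketch: simulate one geometric Brownian motion path,
# S_t = S_0 * exp((mu - sigma^2 / 2) * t + sigma * W_t).
import numpy as np

def gbm_path(s0=100.0, mu=0.05, sigma=0.2, horizon=1.0, steps=252, seed=0):
    rng = np.random.default_rng(seed)
    dt = horizon / steps
    dW = rng.normal(0.0, np.sqrt(dt), size=steps)  # Brownian increments
    log_path = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW)
    return s0 * np.exp(np.concatenate(([0.0], log_path)))

print(gbm_path()[:5])  # first few values of one simulated path
```

Jump-process variants, as covered in @cite_7 , add discontinuous moves on top of this diffusive baseline.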
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_3", "@cite_17" ], "mid": [ "2040736673", "1499969990", "", "1535365835" ], "abstract": [ "Modeling the movement of information within social media outlets, like Twitter, is key to understanding to how ideas spread but quantifying such movement runs into several difficulties. Two specific areas that elude a clear characterization are (i) the intrinsic random nature of individuals to potentially adopt and subsequently broadcast a Twitter topic, and (ii) the dissemination of information via non-Twitter sources, such as news outlets and word of mouth, and its impact on Twitter propagation. These distinct yet inter-connected areas must be incorporated to generate a comprehensive model of information diffusion. We propose a bispace model to capture propagation in the union of (exclusively) Twitter and non-Twitter environments. To quantify the stochastic nature of Twitter topic propagation, we combine principles of geometric Brownian motion and traditional network graph theory. We apply Poisson process functions to model information diffusion outside of the Twitter mentions network. We discuss techniques to unify the two sub-models to accurately model information dissemination. We demonstrate the novel application of these techniques on real Twitter datasets related to mass protest adoption in social communities.", "WINNER of a Riskbook.com Best of 2004 Book Award!During the last decade, financial models based on jump processes have acquired increasing popularity in risk management and option pricing. Much has been published on the subject, but the technical nature of most papers makes them difficult for nonspecialists to understand, and the mathematical tools required for applications can be intimidating. Potential users often get the impression that jump and Levy processes are beyond their reach.Financial Modelling with Jump Processes shows that this is not so. It provides a self-contained overview of the theoretical, numerical, and empirical aspects involved in using jump processes in financial modelling, and it does so in terms within the grasp of nonspecialists. The introduction of new mathematical tools is motivated by their use in the modelling process, and precise mathematical statements of results are accompanied by intuitive explanations.Topics covered in this book include: jump-diffusion models, Levy processes, stochastic calculus for jump processes, pricing and hedging in incomplete markets, implied volatility smiles, time-inhomogeneous jump processes and stochastic volatility models with jumps. The authors illustrate the mathematical concepts with many numerical and empirical examples and provide the details of numerical implementation of pricing and calibration algorithms.This book demonstrates that the concepts and tools necessary for understanding and implementing models with jumps can be more intuitive that those involved in the Black Scholes and diffusion models. If you have even a basic familiarity with quantitative methods in finance, Financial Modelling with Jump Processes will give you a valuable new set of tools for modelling market fluctuations.", "", "The networks considered here consist of sets of intercon- nected vertices, examples of which include social networks, technological networks, and biological networks. Two important issues are to measure the extent of proximity between vertices and to identify the community structure of a network. 
In this paper, the proximity index between two nearest-neighboring vertices of a network is measured by a biased Brownian particle which moves on the network. This proximity index integrates both the local and the global structural information of a given network, and it is used by an agglomerative hierarchical algorithm to identify the community structure of the network. This method is applied to several artificial or real-world networks and satisfying results are attained. Finding the proximity indices for all nearest-neighboring vertex pairs needs a computational time that scales as O(N^3), with N being the total number of vertices in the network." ] }
aid: 1512.02896
mid: 2195256693
Most users of online services have unique behavioral or usage patterns. These behavioral patterns can be exploited to identify and track users by using only the observed patterns in the behavior. We study the task of identifying users from statistics of their behavioral patterns. In particular, we focus on the setting in which we are given histograms of users’ data collected during two different experiments. We assume that, in the first data set, the users’ identities are anonymized or hidden and that, in the second data set, their identities are known. We study the task of identifying the users by matching the histograms of their data in the first data set with the histograms from the second data set. In recent works, the optimal algorithm for this user identification task was introduced. In this paper, we evaluate the effectiveness of this method on three different types of data sets with up to 50,000 users, and in multiple scenarios. Using data sets such as call data records, web browsing histories, and GPS trajectories, we demonstrate that a large fraction of users can be easily identified given only histograms of their data; hence, these histograms can act as users’ fingerprints. We also verify that simultaneous identification of users achieves better performance compared with one-by-one user identification. Furthermore, we show that using the optimal method for identification indeed gives higher identification accuracy than the heuristics-based approaches in the practical scenarios. The accuracy obtained under this optimal method can thus be used to quantify the maximum level of user identification that is possible in such settings. We show that the key factors affecting the accuracy of the optimal identification algorithm are the duration of the data collection, the number of users in the anonymized data set, and the resolution of the data set. We also analyze the effectiveness of k-anonymization in resisting user identification attacks on these data sets.
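As an illustration of the matching task this abstract describes (not the paper's optimal estimator), each anonymized histogram can be paired with a known-identity histogram by minimizing a total distance over all users simultaneously. The L1 cost and the synthetic data below are assumptions.

```python
# Sketch: histogram-based user identification as an assignment problem.
# Matching all users simultaneously (Hungarian algorithm) rather than one
# by one mirrors the simultaneous-identification setting discussed above.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
known = rng.random((5, 16))                        # 5 users x 16 histogram bins, IDs known
anon = known + rng.normal(0.0, 0.05, known.shape)  # noisy re-collection, IDs hidden

# cost[i, j] = L1 distance between anonymized histogram i and known histogram j.
cost = np.abs(anon[:, None, :] - known[None, :, :]).sum(axis=2)
rows, cols = linear_sum_assignment(cost)
print("recovered identities:", cols)  # ideally [0 1 2 3 4]
```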
A matching problem studied in the database community is that of identifying different data records that refer to the same real-world object @cite_40 . Similarly, in natural-language processing, the problem of linking different mentions of the same underlying entity in text @cite_16 is analogous to the objective of the user-matching problem. Another example, from the information-retrieval literature, is the problem of attributing documents to their authors when different authors share the same name @cite_34 . User matching has also been studied in the social-networks community, where the objective is to identify different profiles that belong to the same underlying user @cite_33 . Such problems fall under a common umbrella term @cite_38 . In these problems, the available information about the users is often not in the form of histograms, and the proposed solutions are often based on heuristics and practical convenience. In contrast, the solution we propose in this paper is specific to the setting in which the only information available about the users is in the form of histograms; in this setting, the solution is optimal in minimizing the probability of misclassification.
{ "cite_N": [ "@cite_38", "@cite_33", "@cite_34", "@cite_40", "@cite_16" ], "mid": [ "2075633077", "1969890816", "2171179180", "2108991785", "2050273484" ], "abstract": [ "In this paper, we consider the problem of linking users across multiple online communities. Specifically, we focus on the alias-disambiguation step of this user linking task, which is meant to differentiate users with the same usernames. We start quantitatively analyzing the importance of the alias-disambiguation step by conducting a survey on 153 volunteers and an experimental analysis on a large dataset of About.me (75,472 users). The analysis shows that the alias-disambiguation solution can address a major part of the user linking problem in terms of the coverage of true pairwise decisions (46.8 ). To the best of our knowledge, this is the first study on human behaviors with regards to the usages of online usernames. We then cast the alias-disambiguation step as a pairwise classification problem and propose a novel unsupervised approach. The key idea of our approach is to automatically label training instances based on two observations: (a) rare usernames are likely owned by a single natural person, e.g. pennystar88 as a positive instance; (b) common usernames are likely owned by different natural persons, e.g. tank as a negative instance. We propose using the n-gram probabilities of usernames to estimate the rareness or commonness of usernames. Moreover, these two observations are verified by using the dataset of Yahoo! Answers. The empirical evaluations on 53 forums verify: (a) the effectiveness of the classifiers with the automatically generated training data and (b) that the rareness and commonness of usernames can help user linking. We also analyze the cases where the classifiers fail.", "In recent years, Online Social Networks (OSNs) have essentially become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus and offers for particular services and functionalities. To take advantage of the full range of services and functionalities that OSNs offer, users often create several accounts on various OSNs using the same or different personal information. Retrieving all available data about an individual from several OSNs and merging it into one profile can be useful for many purposes. In this paper, we present a method for solving the Entity Resolution (ER), problem for matching user profiles across multiple OSNs. Our algorithm is able to match two user profiles from two different OSNs based on machine learning techniques, which uses features extracted from each one of the user profiles. Using supervised learning techniques and extracted features, we constructed different classifiers, which were then trained and used to rank the probability that two user profiles from two different OSNs belong to the same individual. These classifiers utilized 27 features of mainly three types: name based features (i.e., the Soundex value of two names), general user info based features (i.e., the cosine similarity between two user profiles), and social network topological based features (i.e., the number of mutual friends between two users' friends list). This experimental study uses real-life data collected from two popular OSNs, Facebook and Xing. 
The proposed algorithm was evaluated and its classification performance measured by AUC was 0.982 in identifying user profiles across two OSNs.", "Nowadays, searches for Webpages of a person with a given name constitute a notable fraction of queries to web search engines. Such a query would normally return Webpages related to several namesakes, who happened to have the queried name, leaving the burden of disambiguating and collecting pages relevant to a particular person (from among the namesakes) on the user. In this article we develop a Web People Search approach that clusters Webpages based on their association to different people. Our method exploits a variety of semantic information extracted from Web pages, such as named entities and hyperlinks, to disambiguate among namesakes referred to on the Web pages. We demonstrate the effectiveness of our approach by testing the efficacy of the disambiguation algorithms and its impact on person search.", "Often, in the real world, entities have two or more representations in databases. Duplicate records do not share a common key and or they contain errors that make duplicate matching a difficult task. Errors are introduced as the result of transcription errors, incomplete information, lack of standard formats, or any combination of these factors. In this paper, we present a thorough analysis of the literature on duplicate record detection. We cover similarity metrics that are commonly used to detect similar field entries, and we present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database. We also cover multiple techniques for improving the efficiency and scalability of approximate duplicate detection algorithms. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area", "In recent years there has been substantial work on the important problem of coreference resolution, most of which has concentrated on the development of new models and algorithmic techniques. These works often show that complex models improve over a weak pairwise baseline. However, less attention has been given to the importance of selecting strong features to support learning a coreference model. This paper describes a rather simple pairwise classification model for coreference resolution, developed with a well-designed set of features. We show that this produces a state-of-the-art system that outperforms systems built with complex models. We suggest that our system can be used as a baseline for the development of more complex models -- which may have less impact when a more robust set of features is used. The paper also presents an ablation study and discusses the relative contributions of various features." ] }
aid: 1512.02909
mid: 1655894887
Microaggregation is a technique for disclosure limitation aimed at protecting the privacy of data subjects in microdata releases. It has been used as an alternative to generalization and suppression to generate k-anonymous data sets, where the identity of each subject is hidden within a group of k subjects. Unlike generalization, microaggregation perturbs the data, and this additional masking freedom allows improving data utility in several ways, such as increasing data granularity, reducing the impact of outliers, and avoiding discretization of numerical data. k-Anonymity, on the other hand, does not protect against attribute disclosure, which occurs if the variability of the confidential values in a group of k subjects is too small. To address this issue, several refinements of k-anonymity have been proposed, among which t-closeness stands out as providing one of the strictest privacy guarantees. Existing algorithms to generate t-close data sets are based on generalization and suppression (they are extensions of k-anonymization algorithms based on the same principles). This paper proposes and shows how to use microaggregation to generate k-anonymous t-close data sets. The advantages of microaggregation are analyzed, and then several microaggregation algorithms for k-anonymous t-closeness are presented and empirically evaluated.
As with k-anonymity, the most common way to attain t-closeness is to use generalization and suppression. In fact, the algorithms for k-anonymity based on those principles can be adapted to yield t-closeness by adding the t-closeness constraint to the search for a feasible minimal generalization: the Incognito algorithm in @cite_3 and the Mondrian algorithm in @cite_19 are adapted to t-closeness in this way. SABRE @cite_20 is another interesting approach, specifically designed for t-closeness. In SABRE, the data set is first partitioned into a set of buckets, and the equivalence classes are then generated by taking an appropriate number of records from each bucket. Both the buckets and the number of records taken from each bucket for each equivalence class are selected with t-closeness in mind. One of the algorithms proposed in our paper uses a similar principle. However, the buckets in SABRE are generated in an iterative greedy manner, which may yield more buckets than our algorithm (which analytically determines the minimal number of required buckets). A greater number of buckets leads to equivalence classes with more records and, thus, to more information loss.
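Since both SABRE and the algorithms in this paper judge equivalence classes by the Earth Mover's Distance (EMD) between the class and whole-table distributions of the sensitive attribute, a small sketch of that check may help. The data, normalization, and threshold below are illustrative assumptions.

```python
# Sketch: check t-closeness of one equivalence class for a numerical
# sensitive attribute using the 1-D Earth Mover's Distance (EMD).
import numpy as np
from scipy.stats import wasserstein_distance

table_sa = np.array([30, 35, 40, 45, 50, 55, 60, 65])  # sensitive values, whole table
class_sa = np.array([30, 35, 40])                      # one equivalence class
t = 0.15                                               # illustrative threshold

# Rescale to [0, 1] so the EMD is comparable across attributes (a common
# convention; the exact normalization is a design choice).
lo, hi = table_sa.min(), table_sa.max()
emd = wasserstein_distance((table_sa - lo) / (hi - lo),
                           (class_sa - lo) / (hi - lo))
print(f"EMD = {emd:.3f}; t-close at t = {t}: {emd <= t}")
```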
{ "cite_N": [ "@cite_19", "@cite_20", "@cite_3" ], "mid": [ "2164649498", "2119149536", "2136114025" ], "abstract": [ "The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain “identifying” attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented (in Section 2) values for each sensitive attribute. In this paper, we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we propose a new notion of privacy called “closeness.” We first present the base model t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We then propose a more flexible privacy model called (n,t)-closeness that offers higher utility. We describe our desiderata for designing a distance measure between two probability distributions and present two distance measures. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments.", "Today, the publication of microdata poses a privacy threat: anonymous personal records can be re-identified using third data sources. Past research has tried to develop a concept of privacy guarantee that an anonymized data set should satisfy before publication, culminating in the notion of t-closeness. To satisfy t-closeness, the records in a data set need to be grouped into Equivalence Classes (ECs), such that each EC contains records of indistinguishable quasi-identifier values, and its local distribution of sensitive attribute (SA) values conforms to the global table distribution of SA values. However, despite this progress, previous research has not offered an anonymization algorithm tailored for t-closeness. In this paper, we cover this gap with SABRE, a SA Bucketization and REdistribution framework for t-closeness. SABRE first greedily partitions a table into buckets of similar SA values and then redistributes the tuples of each bucket into dynamically determined ECs. This approach is facilitated by a property of the Earth Mover's Distance (EMD) that we employ as a measure of distribution closeness: If the tuples in an EC are picked proportionally to the sizes of the buckets they hail from, then the EMD of that EC is tightly upper-bounded using localized upper bounds derived for each bucket. We prove that if the t-closeness constraint is properly obeyed during partitioning, then it is obeyed by the derived ECs too. We develop two instantiations of SABRE and extend it to a streaming environment. Our extensive experimental evaluation demonstrates that SABRE achieves information quality superior to schemes that merely applied algorithms tailored for other models to t-closeness, and can be much faster as well.", "The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain \"identifying\" attributes) contains at least k records. 
Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper we show that l-diversity has a number of limitations. In particular, it is neither necessary nor sufficient to prevent attribute disclosure. We propose a novel privacy notion called t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We choose to use the earth mover distance measure for our t-closeness requirement. We discuss the rationale for t-closeness and illustrate its advantages through examples and experiments." ] }
In @cite_21 , an approach to attain t-closeness-like privacy is proposed which, unlike the methods based on generalization and suppression, is perturbative. Moreover, @cite_21 guarantees the threshold t only on average and uses a distance other than EMD. Another computational approach to t-closeness is presented in @cite_0 @cite_25 , which aims at connecting t-closeness and differential privacy; @cite_0 @cite_25 also use a distance different from EMD, but their method is non-perturbative (the truthfulness of the data is preserved).
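For reference, the quantitative bridges stated in the abstracts of @cite_0 and @cite_25 can be written as below; the ε symbols are reconstructed from garbled plain text, so the exact constants should be checked against the papers.

```latex
% t-closeness vs. differential privacy, as stated in the cited abstracts:
t\text{-closeness yields } \varepsilon\text{-differential privacy when }
t = e^{\varepsilon/2} \ \text{(@cite_0)}, \qquad
t = e^{\varepsilon} \ \text{(via bucketization, @cite_25)}.
```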
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_25" ], "mid": [ "1967525428", "2147324825", "2045134072" ], "abstract": [ "k-anonymity and e-differential privacy are two mainstream privacy models, the former introduced to anonymize data sets and the latter to limit the knowledge gain that results from including one individual in the data set. Whereas basic k-anonymity only protects against identity disclosure, t-closeness was presented as an extension of k-anonymity that also protects against attribute disclosure. We show here that, if not quite equivalent, t-closeness and e-differential privacy are strongly related to one another when it comes to anonymizing data sets. Specifically, k-anonymity for the quasi-identifiers combined with e-differential privacy for the confidential attributes yields stochastic t-closeness (an extension of t-closeness), with t a function of k and e. Conversely, t-closeness can yield e-differential privacy when t=exp(e 2) and the assumptions made by t-closeness about the prior and posterior views of the data hold.", "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.", "k-Anonymity and e-differential privacy are two main privacy models proposed within the computer science community. Whereas the former was proposed for privacy-preserving data publishing, i.e. data set anonymization, the latter initially arose in the context of interactive databases and was later extended to data publishing. We show here that t-closeness, one of the extensions of k-anonymity, can actually yielde-differential privacy in data publishing when t =exp(e). We detail a construction based on bucketization that realizes the previous implication; hence, as an ancillary result, we provide a new computational procedure to achieve t-closeness and e-differential privacy in data publishing." ] }
Most of the approaches to attain t-closeness have been designed to preserve the truthfulness of the data. In this paper we evaluate the use of microaggregation, a perturbative masking technique. In k-anonymity, the relation between the quasi-identifiers and the confidential data is broken by making records in the anonymized data set indistinguishable, in terms of quasi-identifiers, within a group of k records. Microaggregation, when performed on the projection onto the quasi-identifier attributes, produces a k-anonymous data set @cite_10 . Microaggregation was also used for k-anonymity, without naming it, in @cite_29 : clustering was used with the additional requirement that each cluster must contain k or more records.
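A much-simplified sketch of this idea follows: fixed-size groups along a single attribute, with each record's quasi-identifiers replaced by the group centroid. Real algorithms such as MDAV choose groups to minimize within-group variance; the data here are illustrative.

```python
# Simplified microaggregation sketch: partition records into groups of at
# least k by a crude 1-D ordering and replace each record's quasi-identifiers
# (QIs) with the group centroid, yielding k-anonymous QIs.
import numpy as np

def microaggregate(qi, k=3):
    order = np.argsort(qi[:, 0])   # order records by the first QI
    out = qi.astype(float)
    n = len(qi)
    start = 0
    while start < n:
        end = n if n - start < 2 * k else start + k  # fold remainder into last group
        group = order[start:end]
        out[group] = qi[group].mean(axis=0)          # centroid replaces the QIs
        start = end
    return out

qi = np.array([[21, 50], [22, 52], [23, 51],
               [40, 80], [41, 82], [42, 81], [43, 79]])
print(microaggregate(qi, k=3))  # every centroid is shared by >= 3 records
```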
{ "cite_N": [ "@cite_29", "@cite_10" ], "mid": [ "2099519519", "2033472483" ], "abstract": [ "Individual privacy will be at risk if a published data set is not properly deidentified. k-anonymity is a major technique to de-identify a data set. Among a number of k-anonymization schemes, local recoding methods are promising for minimizing the distortion of a k-anonymity view. This paper addresses two major issues in local recoding k-anonymization in attribute hierarchical taxonomies. First, we define a proper distance metric to achieve local recoding generalization with small distortion. Second, we propose a means to control the inconsistency of attribute domains in a generalized view by local recoding. We show experimentally that our proposed local recoding method based on the proposed distance metric produces higher quality k-anonymity tables in three quality measures than a global recoding anonymization method, Incognito, and a multidimensional recoding anonymization method, Multi. The proposed inconsistency handling method is able to balance distortion and consistency of a generalized view.", "k-Anonymity is a useful concept to solve the tension between data utility and respondent privacy in individual data (microdata) protection. However, the generalization and suppression approach proposed in the literature to achieve k-anonymity is not equally suited for all types of attributes: (i) generalization suppression is one of the few possibilities for nominal categorical attributes; (ii) it is just one possibility for ordinal categorical attributes which does not always preserve ordinality; (iii) and it is completely unsuitable for continuous attributes, as it causes them to lose their numerical meaning. Since attributes leading to disclosure (and thus needing k-anonymization) may be nominal, ordinal and also continuous, it is important to devise k-anonymization procedures which preserve the semantics of each attribute type as much as possible. We propose in this paper to use categorical microaggregation as an alternative to generalization suppression for nominal and ordinal k-anonymization; we also propose continuous microaggregation as the method for continuous k-anonymization." ] }
While microaggregation has been proposed to satisfy another refinement of k-anonymity (p-sensitive k-anonymity, @cite_8 ), no attempt has been made to use it for t-closeness.
{ "cite_N": [ "@cite_8" ], "mid": [ "1966406552" ], "abstract": [ "Micro-data protection is a hot topic in the field of Statistical Disclosure Control (SDC), that has gained special interest after the disclosure of 658000 queries by the AOL search engine in August 2006. Many algorithms, methods and properties have been proposed to deal with micro-data disclosure, p-Sensitive k-anonymity has been recently defined as a sophistication of k-anonymity. This new property requires that there be at least p different values for each confidential attribute within the records sharing a combination of key attributes. Like k-anonymity, the algorithm originally proposed to achieve this property was based on generalisations and suppressions; when data sets are numerical this has several data utility problems, namely turning numerical key attributes into categorical, injecting new categories, injecting missing data, and so on. In this article, we recall the foundational concepts of micro-aggregation, k-anonymity and p-sensitive k-anonymity. We show that k-anonymity and p-sensitive k-anonymity can be achieved in numerical data sets by means of micro-aggregation heuristics properly adapted to deal with this task. In addition, we present and evaluate two heuristics for p-sensitive k-anonymity which, being based on micro-aggregation, overcome most of the drawbacks resulting from the generalisation and suppression method." ] }
aid: 1512.02497
mid: 2190165033
This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset, and object category detection, where we outperform existing methods for "chair" detection on a subset of the Pascal VOC dataset.
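The compositing step described in this abstract can be pictured as simple alpha blending of an RGBA rendering over a natural image. The sketch below is a minimal illustration with hypothetical file names; real pipelines also randomize pose, scale, placement, and illumination.

```python
# Minimal compositing sketch: paste a rendered CAD view (with alpha channel)
# onto a natural background image to form a synthetic training example.
from PIL import Image

background = Image.open("natural_scene.jpg").convert("RGBA")  # hypothetical files
render = Image.open("chair_render.png").convert("RGBA")       # alpha marks the object

render = render.resize((200, 200))
composite = background.copy()
composite.alpha_composite(render, dest=(50, 50))  # blend using the alpha channel
composite.convert("RGB").save("training_example.jpg")
```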
A 3D understanding of 2D natural images has been a problem of interest in computer vision since its very beginning @cite_48 . Our work is in line with traditional geometry-centric approaches to object recognition based on alignment @cite_49 . There have been a number of successful approaches to instance-level recognition, e.g., @cite_37 @cite_52 @cite_34 , typically based on SIFT matching @cite_16 with geometric constraints. More recent approaches have leveraged contour-based representations to align skylines @cite_19 and statues @cite_21 . Furthermore, simplified or parametric geometric models have been used for category recognition and detection @cite_32 @cite_58 @cite_53 @cite_29 @cite_41 @cite_51 . In the rest of this section, we focus our discussion on prior work using CAD models for category recognition and 2D-3D alignment.
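The SIFT-matching-with-geometric-constraints recipe cited here is standard enough to sketch; the file names are placeholders, and the ratio-test threshold and RANSAC tolerance are conventional defaults rather than values from the cited papers.

```python
# Sketch: instance-level matching via SIFT descriptors plus a RANSAC
# homography as the geometric constraint (requires OpenCV >= 4.4).
import cv2
import numpy as np

img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("database.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbor descriptor matches.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("geometrically consistent matches:", int(inliers.sum()))
```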
{ "cite_N": [ "@cite_37", "@cite_41", "@cite_48", "@cite_53", "@cite_21", "@cite_58", "@cite_52", "@cite_16", "@cite_32", "@cite_29", "@cite_19", "@cite_49", "@cite_34", "@cite_51" ], "mid": [ "2100398441", "2118824402", "155495712", "2098245519", "2118132750", "1486646730", "1616969904", "2151103935", "2111087635", "1964201035", "104903125", "1910706092", "2059412355", "2114111978" ], "abstract": [ "Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind rele- vance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases.", "In this paper we seek to detect rectangular cuboids and localize their corners in uncalibrated single-view images depicting everyday scenes. In contrast to recent approaches that rely on detecting vanishing points of the scene and grouping line segments to form cuboids, we build a discriminative parts-based detector that models the appearance of the cuboid corners and internal edges while enforcing consistency to a 3D cuboid model. Our model copes with different 3D viewpoints and aspect ratios and is able to detect cuboids across many different object categories. We introduce a database of images with cuboid annotations that spans a variety of indoor and outdoor scenes and show qualitative and quantitative results on our collected database. Our model out-performs baseline detectors that use 2D constraints alone on the task of localizing cuboid corners.", "", "We present an approach to detecting and analyzing the 3D configuration of objects in real-world images with heavy occlusion and clutter. We focus on the application of finding and analyzing cars. We do so with a two-stage model; the first stage reasons about 2D shape and appearance variation due to within-class variation (station wagons look different than sedans) and changes in viewpoint. Rather than using a view-based model, we describe a compositional representation that models a large number of effective views and shapes using a small number of local view-based templates. We use this model to propose candidate detections and 2D estimates of shape. These estimates are then refined by our second stage, using an explicit 3D model of shape and viewpoint. 
We use a morphable model to capture 3D within-class variation, and use a weak-perspective camera model to capture viewpoint. We learn all model parameters from 2D annotations. We demonstrate state-of-the-art accuracy for detection, viewpoint estimation, and 3D shape reconstruction on challenging images from the PASCAL VOC 2011 dataset.", "We describe a scalable approach to 3D smooth object retrieval which searches for and localizes all the occurrences of a user outlined object in a dataset of images in real time. The approach is illustrated on sculptures.", "Since most current scene understanding approaches operate either on the 2D image or using a surface-based representation, they do not allow reasoning about the physical constraints within the 3D scene. Inspired by the "Blocks World" work in the 1960's, we present a qualitative physical representation of an outdoor scene where objects have volume and mass, and relationships describe 3D structure and mechanical configurations. Our representation allows us to apply powerful global geometric constraints between 3D volumes as well as the laws of statics in a qualitative manner. We also present a novel iterative "interpretation-by-synthesis" approach where, starting from an empty ground plane, we progressively "build up" a physically-plausible 3D interpretation of the image. For surface layout estimation, our method demonstrates an improvement in performance over the state-of-the-art [9]. But more importantly, our approach automatically generates 3D parse graphs which describe qualitative geometric and mechanical properties of objects and relationships between objects within an image.", "We address the problem of determining where a photo was taken by estimating a full 6-DOF-plus-intrinsics camera pose with respect to a large geo-registered 3D point cloud, bringing together research on image localization, landmark recognition, and 3D pose estimation. Our method scales to datasets with hundreds of thousands of images and tens of millions of 3D points through the use of two new techniques: a co-occurrence prior for RANSAC and bidirectional matching of image features with 3D points. We evaluate our method on several large data sets, and show state-of-the-art results on landmark recognition as well as the ability to locate cameras to within meters, requiring only seconds per query.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters.
This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the well-acclaimed deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. Our model reasons about face visibility patterns called aspects. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. Inference then entails sliding and rotating the box in 3D and scoring object hypotheses. While for inference we discretize the search space, the variables are continuous in our model. We demonstrate the effectiveness of our approach in indoor and outdoor scenarios, and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2].", "Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets, such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrowing the representational gap between the ideal input of a scene understanding system and object class detector output, by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at minimal performance loss w.r.t. 2D bounding box localization, but outperforms prior work in 3D viewpoint estimation and ultra-wide baseline matching.", "Given a picture taken somewhere in the world, automatic geo-localization of that image is a task that would be extremely useful e.g. for historical and forensic sciences, documentation purposes, organization of the world's photo material and also intelligence applications. While tremendous progress has been made over the last years in visual location recognition within a single city, localization in natural environments is much more difficult, since vegetation, illumination, seasonal changes make appearance-only approaches impractical. In this work, we target mountainous terrain and use digital elevation models to extract representations for fast visual database lookup. We propose an automated approach for very large scale visual localization that can efficiently exploit visual information (contours) and geometric constraints (consistent orientation) at the same time.
We validate the system on the scale of a whole country (Switzerland, 40 000 km2) using a new dataset of more than 200 landscape query pictures with ground truth.", "Recent advances in object recognition have emphasized the integration of intensity-derived features such as affine patches with associated geometric constraints leading to impressive performance in complex scenes. Over the four previous decades, the central paradigm of recognition was based on formal geometric object descriptions with a focus on the properties of such descriptions under perspective image formation. This paper will review the key advances of the geometric era and investigate the underlying causes of the movement away from formal geometry and prior models towards the use of statistical learning methods based on appearance features.", "This article introduces a novel representation for three-dimensional (3D) objects in terms of local affine-invariant descriptors of their images and the spatial relationships between the corresponding surface patches. Geometric constraints associated with different views of the same patches under affine projection are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true 3D affine and Euclidean models from multiple unregistered images, as well as their recognition in photographs taken from arbitrary viewpoints. The proposed approach does not require a separate segmentation stage, and it is applicable to highly cluttered scenes. Modeling and recognition results are presented.", "Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching." ] }
1512.02497
2190165033
This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset, and object category detection, where we outperform prior methods for "chair" detection on a subset of the Pascal VOC dataset.
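The compositing step that the abstract credits for the learned adaptation can be sketched as simple alpha blending of an RGBA rendering over a natural photograph. This is a hedged illustration of the general idea rather than the paper's pipeline; the file names and output size are hypothetical.

```python
# Sketch of compositing a rendered CAD view onto a natural image: the
# rendering's alpha channel decides which pixels come from the rendered
# foreground and which from the real background. File names and the
# output size are placeholders.
from PIL import Image

def composite(render_path, background_path, out_size=(224, 224)):
    render = Image.open(render_path).convert("RGBA")   # rendered view + alpha
    bg = Image.open(background_path).convert("RGB").resize(out_size)
    render = render.resize(out_size)
    bg.paste(render, (0, 0), mask=render.split()[3])   # alpha-blend foreground
    return bg

# e.g., one synthetic training image per (rendering, background) pair:
# img = composite("chair_render_042.png", "indoor_scene.jpg")
```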
Rendered views from CAD models have been used as input for training an object class detector @cite_31 @cite_0 @cite_38 or for viewpoint prediction @cite_22 . Most similar to ours are approaches that align models directly to images. Examples include alignment of IKEA furniture models to images @cite_12 , exemplar-based object detection @cite_40 by matching discriminative elements @cite_30 @cite_45 , and using hand-crafted features for retrieving CAD models for depth prediction @cite_28 and compositing from multiple models @cite_13 . Also related are approaches for CAD retrieval given RGB-D images (e.g., from Kinect scans) @cite_4 @cite_24 . More recently there has been work to enrich the feature representation for matching and alignment using CNNs, which includes CAD retrieval based on CNN responses (e.g., AlexNet @cite_1 "pool5" features) @cite_50 , learning a transformation from CNN features to light-field descriptors for 3D shapes @cite_25 , and training a Siamese network for style retrieval @cite_15 . Building on efficient CNN-based object class detection, e.g., R-CNN @cite_27 , our approach extends the above CNN-based approaches for efficient CAD-exemplar detection.
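As an illustration of the CNN-response retrieval idea mentioned above, the sketch below ranks rendered views against a query image by cosine similarity of AlexNet convolutional-feature ("pool5") vectors. The random tensors stand in for preprocessed image batches; this is an assumption-laden sketch, not the cited systems' code.

```python
# Sketch of CAD-view retrieval with CNN responses: embed rendered views
# and a query image with AlexNet's convolutional features ("pool5") and
# rank by cosine similarity. Random tensors stand in for real,
# preprocessed image batches.
import torch
import torch.nn.functional as F
import torchvision.models as models

alexnet = models.alexnet(weights="IMAGENET1K_V1").eval()

def pool5(x):                                   # x: (N, 3, 224, 224)
    with torch.no_grad():
        return alexnet.features(x).flatten(1)   # (N, 256*6*6)

renders = torch.randn(100, 3, 224, 224)         # placeholder rendered views
query = torch.randn(1, 3, 224, 224)             # placeholder real image

db = F.normalize(pool5(renders), dim=1)
q = F.normalize(pool5(query), dim=1)
scores = q @ db.t()                             # cosine similarities
best_view = scores.argmax(dim=1)                # index of best CAD view
```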
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_4", "@cite_22", "@cite_15", "@cite_28", "@cite_1", "@cite_0", "@cite_24", "@cite_40", "@cite_45", "@cite_27", "@cite_50", "@cite_31", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2010625607", "2083544878", "1949568868", "1591870335", "2021354639", "2015112703", "2163605009", "1769599646", "116751493", "1989684337", "", "2102605133", "1661149683", "2962759496", "2021261909", "2083163329", "2071634722" ], "abstract": [ "This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.", "The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains.", "The goal of this work is to represent objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene and then using a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel surface normals in images containing renderings of synthetic objects. When tested on real data, our method outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place into the scene the model that fits best. We observe a 48 relative improvement in performance at the task of 3D detection over the current state-of-the-art [34], while being an order of magnitude faster.", "Object viewpoint estimation from 2D images is an essential task in computer vision. 
However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.", "Popular sites like Houzz, Pinterest, and LikeThatDecor have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.", "Images, while easy to acquire, view, publish, and share, they lack critical depth information. This poses a serious bottleneck for many image manipulation, editing, and retrieval tasks. In this paper we consider the problem of adding depth to an image of an object, effectively 'lifting' it back to 3D, by exploiting a collection of aligned 3D models of related objects. Our key insight is that, even when the imaged object is not contained in the shape collection, the network of shapes implicitly characterizes a shape-specific deformation subspace that regularizes the problem and enables robust diffusion of depth information from the shape collection to the input image. We evaluate our fully automatic approach on diverse and challenging input images, validate the results against Kinect depth readings, and demonstrate several imaging applications including depth-enhanced image editing and image relighting.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective.
We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "Convolutional neural networks have recently shown excellent results in general object detection and many other tasks. Albeit very effective, they involve many user-defined design choices. In this paper we want to better understand these choices by inspecting two key aspects “what did the network learn?”, and “what can the network learn?”. We exploit new annotations (Pascal3D+), to enable a new empirical analysis of the R-CNN detector. Despite common belief, our results indicate that existing state-of-the-art convnets are not invariant to various appearance factors. In fact, all considered networks have similar weak points which cannot be mitigated by simply increasing the training data (architectural changes are needed). We show that overall performance can improve when using image renderings as data augmentation. We report the best known results on Pascal3D+ detection and view-point estimation tasks.", "The depth information of RGB-D sensors has greatly simplified some common challenges in computer vision and enabled breakthroughs for several tasks. In this paper, we propose to use depth maps for object detection and design a 3D detector to overcome the major difficulties for recognition, namely the variations of texture, illumination, shape, viewpoint, clutter, occlusion, self-occlusion and sensor noises. We take a collection of 3D CAD models and render each CAD model from hundreds of viewpoints to obtain synthetic depth maps. For each depth rendering, we extract features from the 3D point cloud and train an Exemplar-SVM classifier. During testing and hard-negative mining, we slide a 3D detection window in 3D space. Experiment results show that our 3D detector significantly outperforms the state-of-the-art algorithms for both RGB and RGB-D images, and achieves about ×1.7 improvement on average precision compared to DPM and R-CNN. All source code and data are available online.", "This paper proposes a conceptually simple but surprisingly powerful method which combines the effectiveness of a discriminative object detector with the explicit correspondence offered by a nearest-neighbor approach. The method is based on training a separate linear SVM classifier for every exemplar in the training set. Each of these Exemplar-SVMs is thus defined by a single positive instance and millions of negatives. While each detector is quite specific to its exemplar, we empirically observe that an ensemble of such Exemplar-SVMs offers surprisingly good generalization. Our performance on the PASCAL VOC detection task is on par with the much more complex latent part-based model of Felzenszwalb et al., at only a modest computational cost increase. But the central benefit of our approach is that it creates an explicit association between each detection and a single training exemplar. Because most detections show good alignment to their associated exemplar, it is possible to transfer any available exemplar meta-data (segmentation, geometric structure, 3D model, etc.) directly onto the detections, which can then be used as part of overall scene understanding.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context.
In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) trained on large image datasets with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses with respect to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a linear decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three trained CNNs: AlexNet [18], Places [43], and Oxford VGG [8]. We observe important differences across the different networks and CNN layers with respect to different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.", "Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.", "We present an approach to automatic 3D reconstruction of objects depicted in Web images. The approach reconstructs objects from single views.
The key idea is to jointly analyze a collection of images of different objects along with a smaller collection of existing 3D models. The images are analyzed and reconstructed together. Joint analysis regularizes the formulated optimization problems, stabilizes correspondence estimation, and leads to reasonable reproduction of object appearance without traditional multi-view cues.", "Both 3D models and 2D images contain a wealth of information about everyday objects in our environment. However, it is difficult to semantically link together these two media forms, even when they feature identical or very similar objects. We propose a joint embedding space populated by both 3D shapes and 2D images of objects, where the distances between embedded entities reflect similarity between the underlying objects. This joint embedding space facilitates comparison between entities of either form, and allows for cross-modality retrieval. We construct the embedding space using 3D shape similarity measure, as 3D shapes are more pure and complete than their appearance in images, leading to more robust distance metrics. We then employ a Convolutional Neural Network (CNN) to \"purify\" images by muting distracting factors. The CNN is trained to map an image to a point in the embedding space, so that it is close to a point attributed to a 3D model of a similar object to the one depicted in the image. This purifying capability of the CNN is accomplished with the help of a large amount of training data consisting of images synthesized from 3D shapes. Our joint embedding allows cross-view image retrieval, image-based shape retrieval, as well as shape-based image retrieval. We evaluate our method on these retrieval tasks and show that it consistently out-performs state-of-the-art methods, and demonstrate the usability of a joint embedding in a number of additional applications.", "We address the problem of localizing and estimating the fine-pose of objects in the image with exact 3D models. Our main focus is to unify contributions from the 1970s with recent advances in object detection: use local keypoint detectors to find candidate poses and score global alignment of each candidate pose to the image. Moreover, we also provide a new dataset containing fine-aligned objects with their exactly matched 3D models, and a set of models for widely used objects. We also evaluate our algorithm both on object detection and fine pose estimation, and show that our method outperforms state-of-the art algorithms." ] }
1512.02497
2190165033
This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset, and object category detection, where we outperform prior methods for "chair" detection on a subset of the Pascal VOC dataset.
Bridging two very different image modalities is a classic problem for alignment @cite_23 . Past approaches have addressed this problem using two main strategies. A first line of work has used manually designed feature detectors and adapted them, for example by adding a mask, so that they focus on the information available in both CAD models and real images @cite_30 @cite_45 @cite_26 . Another line of work has focused on increasing the realism of rendered views, e.g., by extracting likely textures and backgrounds from annotated images @cite_31 @cite_0 @cite_22 @cite_38 . Domain adaptation approaches have been formulated for CNNs @cite_54 @cite_3 @cite_5 @cite_7 , most recently for object detection @cite_17 , fine-tuning across tasks @cite_20 , and, in concurrent work, transfer learning from RGB to optical flow and depth @cite_43 . Most similar to our approach is domain adaptation with CAD @cite_38 , which adapted hand-crafted features (HOG @cite_35 ) for object detection. We formulate a generic domain adaptation approach over image features, which can be applied to hand-crafted features, e.g., HOG @cite_35 , or CNN responses.
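One of the CNN domain adaptation techniques cited above (@cite_7) is built around a gradient reversal layer, which is compact enough to sketch directly. Below is a minimal PyTorch version; the trade-off weight lambd is illustrative.

```python
# Sketch of a gradient reversal layer in the spirit of @cite_7: features
# pass through unchanged on the forward pass, while gradients from a
# domain classifier are negated on the backward pass, encouraging
# domain-invariant features. The weight lambd is illustrative.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate (and scale) the gradient flowing back into the features.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# usage: domain_logits = domain_classifier(grad_reverse(features))
```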
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_35", "@cite_26", "@cite_22", "@cite_7", "@cite_54", "@cite_3", "@cite_0", "@cite_43", "@cite_45", "@cite_23", "@cite_5", "@cite_31", "@cite_20", "@cite_17" ], "mid": [ "2010625607", "2083544878", "2161969291", "2033547469", "1591870335", "2963826681", "2616180702", "1821462560", "1769599646", "753847829", "", "1506898119", "1690739335", "2962759496", "2214409633", "2115699968" ], "abstract": [ "This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.", "The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "Pedestrian detection is of paramount interest for many applications. 
Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human intensive and subjective task worth to be minimized. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in real world, but it can also suffer the data set shift problem as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy than when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.", "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. 
The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higher-level representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P(x) is structurally related to some task of interest, say predicting P(y x). This paper focuses on the context of the Unsupervised and Transfer Learning Challenge, on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "Convolutional neural networks have recently shown excellent results in general object detection and many other tasks. Albeit very effective, they involve many user-defined design choices. In this paper we want to better understand these choices by inspecting two key aspects “what did the network learn?”, and “what can the network learn?”. We exploit new annotations (Pascal3D+), to enable a new empirical analysis of the R-CNN detector. Despite common belief, our results indicate that existing state-of-the-art convnets are not invariant to various appearance factors. In fact, all considered networks have similar weak points which cannot be mitigated by simply increasing the training data (architectural changes are needed). We show that overall performance can improve when using image renderings as data augmentation. 
We report the best known results on Pascal3D+ detection and view-point estimation tasks.", "In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers.", "", "This paper presents a method for alignment of images acquired by sensors of different modalities (e.g., EO and IR). The paper has two main contributions: (i) It identifies an appropriate image representation, for multi-sensor alignment, i.e., a representation which emphasizes the common information between the two multi-sensor images, suppresses the non-common information, and is adequate for coarse-to-fine processing. (ii) It presents a new alignment technique which applies global estimation to any choice of a local similarity measure. In particular, it is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers, and applies to a wide variety of globally complex brightness transformations between the two images. Our proposed image representation does not rely on sparse image features (e.g., edge, contour, or point features). It is continuous and does not eliminate the detailed variations within local image regions. Our method naturally extends to coarse-to-fine processing, and applies even in situations when the multi-sensor signals are globally characterized by low statistical correlation.", "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. 
We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at lsda.berkeleyvision.org." ] }
1512.02595
2949640717
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
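The Batch Dispatch idea mentioned in the abstract, grouping requests that arrive concurrently into a single GPU batch, can be sketched with a queue and a short collection window. The batch size and timeout below are invented for illustration and are not the paper's settings.

```python
# Illustrative sketch of batching concurrent requests for GPU serving:
# block for the first request, then collect whatever else arrives within
# a short window (up to max_batch) and run one forward pass for the
# whole batch. max_batch and timeout_s are made-up values.
import queue

requests = queue.Queue()

def dispatcher(run_batch, max_batch=10, timeout_s=0.01):
    while True:
        batch = [requests.get()]              # block for the first request
        try:
            while len(batch) < max_batch:
                batch.append(requests.get(timeout=timeout_s))
        except queue.Empty:
            pass                              # collection window closed
        run_batch(batch)                      # one GPU pass for the batch
```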
This work is inspired by previous work in both deep learning and speech recognition. Feed-forward neural network acoustic models were explored more than 20 years ago @cite_30 @cite_21 @cite_16 . Recurrent neural networks and networks with convolution were also used in speech recognition around the same time @cite_34 @cite_22 . More recently, DNNs have become a fixture in the ASR pipeline, with almost all state-of-the-art speech work containing some form of deep neural network @cite_42 @cite_26 @cite_72 @cite_19 @cite_54 @cite_27 . Convolutional networks have also been found beneficial for acoustic models @cite_8 @cite_70 . Recurrent neural networks, typically LSTMs, are just beginning to be deployed in state-of-the-art recognizers @cite_43 @cite_5 @cite_37 and work well together with convolutional layers for feature extraction @cite_15 . Models with both bidirectional @cite_43 and unidirectional recurrence have been explored as well.
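The architectural pattern this paragraph surveys, a convolutional front end over spectrogram features followed by (bidirectional) recurrence, can be sketched compactly in PyTorch. All layer sizes below are illustrative and are not the configuration of any cited system.

```python
# Minimal sketch of a conv-plus-bidirectional-RNN acoustic model over
# log-mel spectrogram frames, producing per-frame label scores (e.g.,
# for a CTC-style loss). All sizes are illustrative.
import torch
import torch.nn as nn

class ConvBiRNN(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_labels=29):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # (N, 32, T, n_mels)
            nn.ReLU(),
        )
        self.rnn = nn.GRU(32 * n_mels, hidden,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)       # labels + blank

    def forward(self, spec):                  # spec: (N, T, n_mels)
        x = self.conv(spec.unsqueeze(1))      # add channel dim
        x = x.permute(0, 2, 1, 3).flatten(2)  # (N, T, 32 * n_mels)
        x, _ = self.rnn(x)
        return self.out(x)                    # (N, T, n_labels)

logits = ConvBiRNN()(torch.randn(2, 100, 80))  # -> shape (2, 100, 29)
```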
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_22", "@cite_8", "@cite_70", "@cite_54", "@cite_21", "@cite_42", "@cite_34", "@cite_19", "@cite_27", "@cite_72", "@cite_43", "@cite_5", "@cite_15", "@cite_16" ], "mid": [ "1553004968", "", "", "", "2155273149", "", "", "", "1993882792", "1517386993", "2125964738", "", "", "2950689855", "", "1600744878", "" ], "abstract": [ "From the Publisher: Connectionist Speech Recognition: A Hybrid Approach describes the theory and implementation of a method to incorporate neural network approaches into state-of-the-art continuous speech recognition systems based on Hidden Markov Models (HMMs) to improve their performance. In this framework, neural networks (and in particular, multilayer perceptrons or MLPs) have been restricted to well-defined subtasks of the whole system, i.e., HMM emission probability estimation and feature extraction. The book describes a successful five year international collaboration between the authors. The lessons learned form a case study that demonstrates how hybrid systems can be developed to combine neural networks with more traditional statistical approaches. The book illustrates both the advantages and limitations of neural networks in the framework of a statistical system. Using standard databases and comparing with some conventional approaches, it is shown that MLP probability estimation can improve recognition performance. Other approaches are discussed, though there is no such unequivocal experimental result for these methods. Connectionist Speech Recognition: A Hybrid Approach is of use to anyone intending to use neural networks for speech recognition or within the framework provided by an existing successful statistical approach. This includes research and development groups working in the field of speech recognition, both with standard and neural network approaches, as well as other pattern recognition and or neural network researchers. This book is also suitable as a text for advanced courses on neural networks or speech processing.", "", "", "", "Convolutional Neural Networks (CNN) have showed success in achieving translation invariance for many image processing tasks. The success is largely attributed to the use of local filtering and max-pooling in the CNN architecture. In this paper, we propose to apply CNN to speech recognition within the framework of hybrid NN-HMM model. We propose to use local filtering and max-pooling in frequency domain to normalize speaker variance to achieve higher multi-speaker speech recognition performance. In our method, a pair of local filtering layer and max-pooling layer is added at the lowest end of neural network (NN) to normalize spectral variations of speech signals. In our experiments, the proposed CNN architecture is evaluated in a speaker independent speech recognition task using the standard TIMIT data sets. Experimental results show that the proposed CNN method can achieve over 10 relative error reduction in the core TIMIT test sets when comparing with a regular NN using the same number of hidden layers and weights. Our results also show that the best result of the proposed CNN model is better than previously published results on the same TIMIT test sets that use a pre-trained deep NN model.", "", "", "", "Gaussian mixture models are currently the dominant technique for modeling the emission distribution of hidden Markov models for speech recognition. 
We show that better phone recognition on the TIMIT dataset can be achieved by replacing Gaussian mixture models by deep neural networks that contain many layers of features and a very large number of parameters. These networks are first pre-trained as a multi-layer generative model of a window of spectral feature vectors without making use of any discriminative information. Once the generative pre-training has designed the features, we perform discriminative fine-tuning using backpropagation to adjust the features slightly to make them better at predicting a probability distribution over the states of monophone hidden Markov models.", "This chapter describes a use of recurrent neural networks (i.e., feedback is incorporated in the computation) as an acoustic model for continuous speech recognition. The form of the recurrent neural network is described along with an appropriate parameter estimation procedure. For each frame of acoustic data, the recurrent network generates an estimate of the posterior probability of the possible phones given the observed acoustic signal. The posteriors are then converted into scaled likelihoods and used as the observation probabilities within a conventional decoding paradigm (e.g., Viterbi decoding). The advantages of using recurrent networks are that they require a small number of parameters and provide a fast decoding capability (relative to conventional, large-vocabulary, HMM systems).", "The context-independent deep belief network (DBN) hidden Markov model (HMM) hybrid architecture has recently achieved promising results for phone recognition. In this work, we propose a context-dependent DBN-HMM system that dramatically outperforms strong Gaussian mixture model (GMM)-HMM baselines on a challenging, large vocabulary, spontaneous speech recognition dataset from the Bing mobile voice search task. Our system achieves absolute sentence accuracy improvements of 5.8% and 9.2% over GMM-HMMs trained using the minimum phone error rate (MPE) and maximum likelihood (ML) criteria, respectively, which translate to relative error reductions of 16.0% and 23.2%.", "", "", "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "", "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space.
In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4–6% relative improvement in WER over an LSTM, the strongest of the three individual models.", "" ] }
1512.02595
2949640717
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
End-to-end speech recognition is an active area of research, showing compelling results both when used to re-score the outputs of a DNN-HMM system @cite_74 and standalone @cite_12 . Two methods are currently used to map variable-length audio sequences directly to variable-length transcriptions. The RNN encoder-decoder paradigm uses an encoder RNN to map the input to a fixed-length vector and a decoder network to expand that vector into a sequence of output predictions @cite_71 @cite_52 . Adding an attention mechanism to the decoder greatly improves the performance of the system, particularly with long inputs or outputs @cite_38 . In speech, the RNN encoder-decoder with attention performs well in predicting both phonemes @cite_73 and graphemes @cite_49 @cite_64 .
{ "cite_N": [ "@cite_38", "@cite_64", "@cite_52", "@cite_74", "@cite_71", "@cite_49", "@cite_73", "@cite_12" ], "mid": [ "2133564696", "", "", "2102113734", "2950635152", "2296748324", "1586532344", "1922655562" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "", "", "This paper presents a speech recognition system that directly transcribes audio data with text, without requiring an intermediate phonetic representation. The system is based on a combination of the deep bidirectional LSTM recurrent neural network architecture and the Connectionist Temporal Classification objective function. A modification to the objective function is introduced that trains the network to minimise the expectation of an arbitrary transcription loss function. This allows a direct optimisation of the word error rate, even in the absence of a lexicon or language model. The system achieves a word error rate of 27.3 on the Wall Street Journal corpus with no prior linguistic information, 21.9 with only a lexicon of allowed words, and 8.2 with a trigram language model. Combining the network with a baseline system further reduces the error rate to 6.7 .", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "The use of Deep Belief Networks (DBN) to pretrain Neural Networks has recently led to a resurgence in the use of Artificial Neural Network Hidden Markov Model (ANN HMM) hybrid systems for Automatic Speech Recognition (ASR). 
In this paper we report results of a DBN-pretrained context-dependent ANN HMM system trained on two datasets that are much larger than any reported previously with DBN-pretrained ANN HMM systems: 5870 hours of Voice Search and 1400 hours of YouTube data. On the first dataset, the pretrained ANN HMM system outperforms the best Gaussian Mixture Model Hidden Markov Model (GMM HMM) baseline, built with a much larger dataset, by 3.7% absolute WER, while on the second dataset, it outperforms the GMM HMM baseline by 4.7% absolute. Maximum Mutual Information (MMI) fine tuning and model combination using Segmental Conditional Random Fields (SCARF) give additional gains of 0.1% and 0.4% on the first dataset and 0.5% and 0.9% absolute on the second dataset.", "We replace the Hidden Markov Model (HMM) which is traditionally used in continuous speech recognition with a bi-directional recurrent neural network encoder coupled to a recurrent neural network decoder that directly emits a stream of phonemes. The alignment between the input and output sequences is established using an attention mechanism: the decoder emits each symbol based on a context created with a subset of input symbols elected by the attention mechanism. We report initial results demonstrating that this new approach achieves phoneme error rates that are comparable to the state-of-the-art HMM-based decoders, on the TIMIT dataset.", "We present a state-of-the-art speech recognition system developed using end-to-end deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a \"phoneme.\" Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems." ] }
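To make the attention-based encoder-decoder idea above concrete, here is a minimal sketch, in plain numpy, of additive (Bahdanau-style) attention producing the context vector for one decoder step. The names, shapes, and the choice of additive scoring are illustrative assumptions for exposition, not the cited systems' actual code.

import numpy as np

def additive_attention(decoder_state, encoder_states, Wq, Wk, v):
    # decoder_state: (d,) current decoder hidden state
    # encoder_states: (T, d), one hidden vector per input frame
    # Wq, Wk: (a, d) learned projections; v: (a,) learned scoring vector
    scores = np.tanh(encoder_states @ Wk.T + decoder_state @ Wq.T) @ v  # (T,)
    weights = np.exp(scores - scores.max())   # softmax over input positions
    weights /= weights.sum()
    return weights @ encoder_states           # context vector, shape (d,)

At each output step the decoder conditions on this freshly computed context vector rather than on a single fixed-length encoding of the whole input, which is what helps with long utterances.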
1512.02595
2949640717
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
Existing speech systems can also be used to bootstrap new data collection. In one approach, the authors use one speech engine to align and filter a thousand hours of read speech @cite_6 . In another approach, a heavy-weight offline speech recognizer is used to generate transcriptions for tens of thousands of hours of speech @cite_67 . This is then passed through a filter and used to re-train the recognizer, resulting in significant performance gains. We draw inspiration from these past approaches, bootstrapping larger datasets and using data augmentation to increase the effective amount of labeled data for our system.
{ "cite_N": [ "@cite_67", "@cite_6" ], "mid": [ "2294962864", "1494198834" ], "abstract": [ "Deep neural networks (DNNs) have recently become the state of the art technology in speech recognition systems. In this paper we propose a new approach to constructing large high quality unsupervised sets to train DNN models for large vocabulary speech recognition. The core of our technique consists of two steps. We first redecode speech logged by our production recognizer with a very accurate (and hence too slow for real-time usage) set of speech models to improve the quality of ground truth transcripts used for training alignments. Using confidence scores, transcript length and transcript flattening heuristics designed to cull salient utterances from three decades of speech per language, we then carefully select training data sets consisting of up to 15K hours of speech to be used to train acoustic models without any reliance on manual transcription. We show that this approach yields models with approximately 18K context dependent states that achieve 10 relative improvement in large vocabulary dictation and voice-search systems for Brazilian Portuguese, French, Italian and Russian languages.", "This paper introduces a new corpus of read English speech, suitable for training and evaluating speech recognition systems. The LibriSpeech corpus is derived from audiobooks that are part of the LibriVox project, and contains 1000 hours of speech sampled at 16 kHz. We have made the corpus freely available for download, along with separately prepared language-model training data and pre-built language models. We show that acoustic models trained on LibriSpeech give lower error rate on the Wall Street Journal (WSJ) test sets than models trained on WSJ itself. We are also releasing Kaldi scripts that make it easy to build these systems." ] }
1512.02326
2191098973
This paper proposes the problem of point-and-count as a test case to break the what-and-where deadlock. Different from the traditional detection problem, the goal is to discover key salient points as a way to localize and count the number of objects simultaneously. We propose two alternatives, one that counts first and then points, and another that works the other way around. Fundamentally, they pivot around whether we solve "what" or "where" first. We evaluate their performance on a dataset that contains multiple instances of the same class, demonstrating the potentials and their synergies. The experiences derive a few important insights that explain why this is a much harder problem than classification, including strong data bias and the inability to deal with object scales robustly in state-of-art convolutional neural networks.
The proposal of using "pointers" to localize objects is potentially a controversial choice. It is stronger than dotting @cite_9 , but weaker than bounding boxes and segmentation. One could argue that it is more natural, since salient features are what distinguish an object the most; it is hard to imagine needing to draw a bounding box or trace a contour whenever there is a need to point and count. The difficulty lies in how to evaluate this new way of localization; for the time being, we measure how well a pointer is contained within the supplied bounding box to mitigate the issue.
{ "cite_N": [ "@cite_9" ], "mid": [ "2145983039" ], "abstract": [ "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data." ] }
1512.02326
2191098973
This paper proposes the problem of point-and-count as a test case to break the what-and-where deadlock. Different from the traditional detection problem, the goal is to discover key salient points as a way to localize and count the number of objects simultaneously. We propose two alternatives, one that counts first and then points, and another that works the other way around. Fundamentally, they pivot around whether we solve "what" or "where" first. We evaluate their performance on a dataset that contains multiple instances of the same class, demonstrating the potentials and their synergies. The experiences derive a few important insights that explain why this is a much harder problem than classification, including strong data bias and the inability to deal with object scales robustly in state-of-art convolutional neural networks.
A large body of literature exists on the problem of counting alone. For instance, @cite_9 treats it as density estimation, and the more recent work of @cite_12 counts by classifying the hidden features of a convolutional network, which one of our approaches adopts as a building block. Their SOS and MOS datasets provide an excellent collection for our study. @cite_13 offers an interesting viewpoint of object representation as an indirect learning problem cast as a learning-to-count approach. As we will demonstrate in this paper, solving the point-and-count problem jointly is much harder. It is also more rewarding, as the experience reveals a number of problems in today's CNN architectures that would otherwise be difficult to discover.
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_12" ], "mid": [ "2145983039", "1909455558", "1954873805" ], "abstract": [ "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "Learning to count is a learning strategy that has been recently proposed in the literature for dealing with problems where estimating the number of object instances in a scene is the final objective. In this framework, the task of learning to detect and localize individual object instances is seen as a harder task that can be evaded by casting the problem as that of computing a regression value from hand-crafted image features. In this paper we explore the features that are learned when training a counting convolutional neural network in order to understand their underlying representation. To this end we define a counting problem for MNIST data and show that the internal representation of the network is able to classify digits in spite of the fact that no direct supervision was provided for them during training. We also present preliminary results about a deep network that is able to count the number of pedestrians in a scene.", "People can immediately and precisely identify that an image contains 1, 2, 3 or 4 items by a simple glance. The phenomenon, known as Subitizing, inspires us to pursue the task of Salient Object Subitizing (SOS), i.e. predicting the existence and the number of salient objects in a scene using holistic cues. To study this problem, we propose a new image dataset annotated using an online crowdsourcing marketplace. We show that a proposed subitizing technique using an end-to-end Convolutional Neural Network (CNN) model achieves significantly better than chance performance in matching human labels on our dataset. It attains 94 accuracy in detecting the existence of salient objects, and 42–82 accuracy (chance is 20 ) in predicting the number of salient objects (1, 2, 3, and 4+), without resorting to any object localization process. Finally, we demonstrate the usefulness of the proposed subitizing technique in two computer vision applications: salient object detection and object proposal." ] }
1512.02356
2263894731
Given a convex polygon @math with @math vertices, the two-center problem is to find two congruent closed disks of minimum radius such that they completely cover @math . We propose an algorithm for this problem in the streaming setup, where the input stream is the vertices of the polygon in clockwise order. It produces a radius @math satisfying @math using @math space, where @math is the optimum solution. Next, we show that in the non-streaming setup, we can improve the approximation factor by @math , maintaining the time complexity of the algorithm at @math and using @math extra space in addition to the space required for storing the input.
In the streaming model, @cite_11 and Guha @cite_2 designed ( @math )-approximation algorithms for the @math -center problem of a point set in @math using @math space. For the 1-center problem, Agarwal and Sharathkumar @cite_7 suggested a @math -factor approximation algorithm using @math space. The approximation factor was later improved to @math by Chan and Pathak @cite_18 . Recently, Kim and Ahn @cite_3 proposed a ( @math )-approximation algorithm for the two-center problem of a point set in @math . It uses @math space and update time, where insertions and deletions of points in the set are allowed. To the best of our knowledge, there is no approximation result for the two-center problem for a convex polygon under the streaming model.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_3", "@cite_2", "@cite_11" ], "mid": [ "2055900608", "1985548569", "2215609590", "2146420558", "2161345103" ], "abstract": [ "At [email protected]?10, Agarwal and Sharathkumar presented a streaming algorithm for approximating the minimum enclosing ball of a set of points in d-dimensional Euclidean space. Their algorithm requires one pass, uses O(d) space, and was shown to have approximation factor at most (1+3) [email protected] 1.3661. We prove that the same algorithm has approximation factor less than 1.22, which brings us much closer to a (1+2) 2 1.207 lower bound given by Agarwal and Sharathkumar. We also apply this technique to the dynamic version of the minimum enclosing ball problem (in the non-streaming setting). We give an O(dn)-space data structure that can maintain a 1.22-approximate minimum enclosing ball in O(dlogn) expected amortized time per insertion deletion.", "We develop (single-pass) streaming algorithms for maintaining extent measures of a stream S of n points in Rd. We focus on designing streaming algorithms whose working space is polynomial in d (poly(d)) and sublinear in n. For the problems of computing diameter, width and minimum enclosing ball of S, we obtain lower bounds on the worst-case approximation ratio of any streaming algorithm that uses poly(d) space. On the positive side, we introduce the notion of blurred ball cover and use it for answering approximate farthest-point queries and maintaining approximate minimum enclosing ball and diameter of S. We describe a streaming algorithm for maintaining a blurred ball cover whose working space is linear in d and independent of n.", "Abstract In the k -center problem for streaming points in d -dimensional metric space, input points are given in a data stream and the goal is to find the k smallest congruent balls whose union covers all input points by examining them. In the single-pass streaming model, input points are allowed to be examined only once and the amount of space that can be used to store relative information is limited. In this paper, we present a single-pass, ( 1.8 + e ) -factor, O ( d e ) -space data stream algorithm for the Euclidean 2-center problem for any d ≥ 1 . This is the first result with an approximation factor below 2 using O ( d e ) space for any d . Our algorithm naturally extends to the Euclidean k -center problem with k > 2 . We present a single-pass ( 1.8 + e ) -factor data stream algorithm for the Euclidean k -center problem for any d ≥ 1 , which uses O ( 2 k ( k + 3 ) ! d e ) space and O ( 2 k ( k + 2 ) ! d e ) update time.", "In this paper we investigate algorithms and lower bounds for summarization problems over a single pass data stream. In particular we focus on histogram construction and K-center clustering. We provide a simple framework that improves upon all previous algorithms on these problems in either the space bound, the approximation factor or the running time. The framework uses a notion of \"streamstrapping\" where summaries created for the initial prefixes of the data are used to develop better approximation algorithms. We also prove the first non-trivial lower bounds for these problems. We show that the stricter requirement that if an algorithm accurately approximates the error of every bucket or every cluster produced by it, then these upper bounds are almost the best possible. 
This property of accurate estimation is true of all known upper bounds on these problems.", "Given a set P of n points in the plane, we seek two squares such that their center points belong to P, their union contains P, and the area of the larger square is minimal. We present efficient algorithms for three variants of this problem: in the first the squares are axis parallel, in the second they are free to rotate but must remain parallel to each other, and in the third they are free to rotate independently." ] }
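For intuition about how little space streaming center problems require, here is the textbook single-pass 2-approximation for the 1-center (minimum enclosing ball) problem: fix the center at the first point and track the farthest distance seen. This is only a baseline sketch for orientation, not any of the cited algorithms with better approximation factors.

import math

def streaming_one_center(stream):
    # Single pass, O(d) space; assumes a non-empty stream of point tuples.
    it = iter(stream)
    center = next(it)            # first point becomes the fixed center
    radius = 0.0
    for p in it:
        radius = max(radius, math.dist(center, p))
    # Any ball covering both the center point and the farthest point has
    # diameter >= radius, so the optimum radius is at least radius / 2.
    return center, radius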
1512.02325
2193145675
We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.
There are two established classes of methods for object detection in images, one based on sliding windows and the other based on region proposal classification. Before the advent of convolutional neural networks, the state of the art for those two approaches -- Deformable Part Model (DPM) @cite_14 and Selective Search @cite_12 -- had comparable performance. However, after the dramatic improvement brought on by R-CNN @cite_18 , which combines selective search region proposals and convolutional network based post-classification, region proposal object detection methods became prevalent.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_12" ], "mid": [ "2102605133", "2120419212", "2088049833" ], "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
1512.02325
2193145675
We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.
The original R-CNN approach has been improved in a variety of ways. The first set of approaches improves the quality and speed of post-classification, since classifying thousands of image crops is expensive and time-consuming. SPPnet @cite_20 speeds up the original R-CNN approach significantly. It introduces a spatial pyramid pooling layer that is more robust to region size and scale and allows the classification layers to reuse features computed over feature maps generated at several image resolutions. Fast R-CNN @cite_21 extends SPPnet so that it can fine-tune all layers end-to-end by minimizing a loss for both confidences and bounding box regression, which was first introduced in MultiBox @cite_19 for learning objectness.
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_20" ], "mid": [ "2068730032", "", "2179352600" ], "abstract": [ "Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.", "", "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101." ] }
1512.02325
2193145675
We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.
Another set of methods, directly related to our approach, skips the proposal step altogether and predicts bounding boxes and confidences for multiple categories directly. OverFeat @cite_15 , a deep version of the sliding-window method, predicts a bounding box directly from each location of the topmost feature map once the confidences of the underlying object categories are known. YOLO @cite_1 uses the whole topmost feature map to predict both confidences for multiple categories and bounding boxes (which are shared across these categories). Our SSD method falls in this category because we have no proposal step and instead use default boxes. However, our approach is more flexible than the existing methods because we can use default boxes of different aspect ratios at each feature location of multiple feature maps at different scales. If we used only one default box per location of the topmost feature map, our SSD would have an architecture similar to OverFeat @cite_15 ; if we used the whole topmost feature map with a fully connected layer for predictions instead of our convolutional predictors, and did not explicitly consider multiple aspect ratios, we would approximately reproduce YOLO @cite_1 .
{ "cite_N": [ "@cite_15", "@cite_1" ], "mid": [ "2963542991", "2963037989" ], "abstract": [ "Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork." ] }
1512.02341
2949137045
In this paper, we propose a method of ranking recently created Twitter accounts according to their prospective popularity. Early detection of new promising accounts is useful for trend prediction, viral marketing, user recommendation, and so on. New accounts are, however, difficult to evaluate because they have not established their reputations, and we cannot apply existing link-based or other popularity-based account evaluation methods. Our method first finds "early adopters", i.e., users who often find good new information sources earlier than others. Our method then regards new accounts followed by good early adopters as promising, even if they do not have many followers now. In order to find good early adopters, we estimate the frequency of link propagation from each account, i.e., how many times the follow links from the account have been copied by its followers. If its followers have copied many of its follow links in the past, the account must be an early adopter, who finds good information sources earlier than its followers. We develop a method of inferring which links are created by copying which links. One advantage of our method is that it only uses information that can be easily obtained by crawling the neighbors of the target accounts in the current Twitter graph. We evaluated our method in an experiment on Twitter data. We chose then-new accounts from an old snapshot of Twitter, computed their rankings with our method, and compared them with the number of followers the accounts currently have. The result shows that our method produces better rankings than various baseline methods, especially for new accounts that have only a few followers.
In sociology, there has been extensive research on the behavior of people in the real world. The results of this research are also helpful for understanding the behavior of people on social media. Several studies have shown that behaviors reported in past sociological research are also observed on social media @cite_17 @cite_1 @cite_11 @cite_18 . One such line of research proposed the concept of triadic closure to explain how people behave when they connect to each other. Our method uses this concept to infer which links are created by copying which links.
{ "cite_N": [ "@cite_18", "@cite_11", "@cite_1", "@cite_17" ], "mid": [ "2166531833", "2155186673", "1491850760", "1980722028" ], "abstract": [ "It has often been taken as a working assumption that directed links in information networks are frequently formed by “short-cutting” a two-step path between the source and the destination — a kind of implicit “link copying” analogous to the process of triadic closure in social networks. Despite the role of this assumption in theoretical models such as preferential attachment, it has received very little direct empirical investigation. Here we develop a formalization and methodology for studying this type of directed closure process, and we provide evidence for its important role in the formation of links on Twitter. We then analyze a sequence of models designed to capture the structural phenomena related to directed closure that we observe in the Twitter data.", "We study the extent to which the formation of a two-way relationship can be predicted in a dynamic social network. A two-way (called reciprocal) relationship, usually developed from a one-way (parasocial) relationship, represents a more trustful relationship between people. Understanding the formation of two-way relationships can provide us insights into the micro-level dynamics of the social network, such as what is the underlying community structure and how users influence each other. Employing Twitter as a source for our experimental data, we propose a learning framework to formulate the problem of reciprocal relationship prediction into a graphical model. The framework incorporates social theories into a machine learning model. We demonstrate that it is possible to accurately infer 90 of reciprocal relationships in a dynamic network. Our study provides strong evidence of the existence of the structural balance among reciprocal relationships. In addition, we have some interesting findings, e.g., the likelihood of two \"elite\" users creating a reciprocal relationships is nearly 8 times higher than the likelihood of two ordinary users. More importantly, our findings have potential implications such as how social structures can be inferred from individuals' behaviors.", "Trust reciprocity, a special form of link reciprocity, exists in many networks of trust among users. In this paper, we seek to determine the extent to which reciprocity exists in a trust network and develop quantitative models for measuring reciprocity and reciprocity related behaviors. We identify several reciprocity behaviors and their respective measures. These behavior measures can be employed for predicting if a trustee will return trust to her trustor given that the latter initiates a trust link earlier. We develop for this reciprocal trust prediction task a number of ranking method and classification methods, and evaluated them on an Epinions trust network data. Our results show that reciprocity related behaviors provide good features for both ranking and classification based methods under different", "We study the detailed growth of a social networking site with full temporal information by examining the creation process of each friendship relation that can collectively lead to the macroscopic properties of the network. We first study the reciprocal behavior of users, and find that link requests are quickly responded to and that the distribution of reciprocation intervals decays in an exponential form. The degrees of inviters accepters are slightly negatively correlative with reciprocation time. 
In addition, the temporal feature of the online community shows that the distributions of intervals of user behaviors, such as sending or accepting link requests, follow a power law with a universal exponent, and peaks emerge at intervals of an integral number of days. We finally study the preferential selection and linking phenomena of the social networking site and find that, for the former, a linear preference holds for preferential sending and reception, and for the latter, a linear preference also holds for preferential acceptance, creation, and attachment. Based on the linearly preferential linking, we put forward an analyzable network model which can reproduce the degree distribution of the network. The research framework presented in the paper could provide a potential insight into how the micro-motives of users lead to the global structure of online social networks." ] }
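A minimal sketch of how triadic closure can propose which existing link a new follow u -> v may have been copied from: any account w that u already follows and that itself follows v is a candidate mediator. The follows dict mapping each user to the set of accounts they follow is an assumed input representation, not the paper's exact inference procedure.

def copy_candidates(follows, u, v):
    # Accounts w with u -> w and w -> v already present: under triadic
    # closure, u's new link u -> v may have been copied from w's link.
    return {w for w in follows.get(u, set())
            if v in follows.get(w, set())}

# Tiny example: u follows w, w follows v, and u has just followed v.
follows = {"u": {"w", "v"}, "w": {"v"}}
print(copy_candidates(follows, "u", "v"))  # {'w'}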
1512.02341
2949137045
In this paper, we propose a method of ranking recently created Twitter accounts according to their prospective popularity. Early detection of new promising accounts is useful for trend prediction, viral marketing, user recommendation, and so on. New accounts are, however, difficult to evaluate because they have not established their reputations, and we cannot apply existing link-based or other popularity-based account evaluation methods. Our method first finds "early adopters", i.e., users who often find good new information sources earlier than others. Our method then regards new accounts followed by good early adopters as promising, even if they do not have many followers now. In order to find good early adopters, we estimate the frequency of link propagation from each account, i.e., how many times the follow links from the account have been copied by its followers. If its followers have copied many of its follow links in the past, the account must be an early adopter, who finds good information sources earlier than its followers. We develop a method of inferring which links are created by copying which links. One advantage of our method is that it only uses information that can be easily obtained by crawling the neighbors of the target accounts in the current Twitter graph. We evaluated our method in an experiment on Twitter data. We chose then-new accounts from an old snapshot of Twitter, computed their rankings with our method, and compared them with the number of followers the accounts currently have. The result shows that our method produces better rankings than various baseline methods, especially for new accounts that have only a few followers.
Recently, there have also been many studies on link prediction in online social networks. For example, Liben-Nowell and Kleinberg @cite_2 were the first to formulate the link-prediction problem on social networks, and they proposed a prediction method based on the proximity of nodes in the network. @cite_10 proposed a method that estimates the probability of future links by inferring latent paths of link propagation in the network. They estimate how important each node is as a mediator of link propagation by using a probabilistic model. Our method is based on a similar concept of early adopters. Their method, however, requires multiple snapshots of the network structure at different time points. On the other hand, our method for estimating the future popularity of a given account only requires information that can be obtained by crawling the neighbors of the target account in the current snapshot of the network structure. This is a major advantage of our method.
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "2010627475", "2148847267" ], "abstract": [ "It's well known that the transitivity of friendship is a popular sociological principle in social networks. However, it's still unknown that to what extent people's friend-making behaviors follow this principle and to what extent it can benefit the link prediction task. In this paper, we try to adopt this sociological principle to explain the evolution of networks and study the latent friendship propagation. Unlike traditional link prediction approaches, we model link formation as results of individuals' friend-making behaviors combined with personal interests. We propose the Latent Friendship Propagation Network (LFPN), which depicts the evolution progress of one's egocentric network and reveals future growth potentials driven by the transitivity of friendship based on personal interests. We model individuals' social behaviors using the Latent Friendship Propagation Model (LFPM), a probabilistic generative model from which the LFPN can be learned effectively. To evaluate the power of the friendship propagation in link prediction, we design LFPN-RW which models the friend-making behavior as a random walk upon the LFPN naturally and captures the co-influence effect of the friend circles as well as personal interests to provide more accurate prediction. Experimental results on real-world datasets show that LFPN-RW outperforms the state-of-the-art approaches. This convinces that the transitivity of friendship actually plays important roles in the evolution of social networks.", "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc." ] }
1512.02341
2949137045
In this paper, we propose a method of ranking recently created Twitter accounts according to their prospective popularity. Early detection of new promising accounts is useful for trend prediction, viral marketing, user recommendation, and so on. New accounts are, however, difficult to evaluate because they have not established their reputations, and we cannot apply existing link-based or other popularity-based account evaluation methods. Our method first finds "early adopters", i.e., users who often find good new information sources earlier than others. Our method then regards new accounts followed by good early adopters as promising, even if they do not have many followers now. In order to find good early adopters, we estimate the frequency of link propagation from each account, i.e., how many times the follow links from the account have been copied by its followers. If its followers have copied many of its follow links in the past, the account must be an early adopter, who finds good information sources earlier than its followers. We develop a method of inferring which links are created by copying which links. One advantage of our method is that it only uses information that can be easily obtained by crawling the neighbors of the target accounts in the current Twitter graph. We evaluated our method in an experiment on Twitter data. We chose then-new accounts from an old snapshot of Twitter, computed their rankings with our method, and compared them with the number of followers the accounts currently have. The result shows that our method produces better rankings than various baseline methods, especially for new accounts that have only a few followers.
There have also been many studies on the estimation of the influential power of nodes in social networks. For example, @cite_4 compared three indicators, PageRank, the number of followers, and the number of retweets, for estimating the popularity of Twitter accounts, and they showed that there is a discrepancy between the number of followers of an account and the popularity of tweets by the account, which suggests that the number of followers is not the only major factor in the influential power of nodes. @cite_15 also proposed a method for estimating the influential power of Twitter accounts. Their method is based on the number of followers, but they also consider the interests of the followers and compute the probability that each tweet is actually read by the followers. These two studies focus on the influential power of information sources in information dissemination, while the early adopter score used in our method indicates the influential power of nodes in link propagation.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2076219102", "2101196063" ], "abstract": [ "This paper focuses on the problem of identifying influential users of micro-blogging services. Twitter, one of the most notable micro-blogging services, employs a social-networking model called \"following\", in which each user can choose who she wants to \"follow\" to receive tweets from without requiring the latter to give permission first. In a dataset prepared for this study, it is observed that (1) 72.4 of the users in Twitter follow more than 80 of their followers, and (2) 80.5 of the users have 80 of users they are following follow them back. Our study reveals that the presence of \"reciprocity\" can be explained by phenomenon of homophily. Based on this finding, TwitterRank, an extension of PageRank algorithm, is proposed to measure the influence of users in Twitter. TwitterRank measures the influence taking both the topical similarity between users and the link structure into account. Experimental results show that TwitterRank outperforms the one Twitter currently uses and other related algorithms, including the original PageRank and Topic-sensitive PageRank.", "Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it." ] }
1512.02341
2949137045
In this paper, we propose a method of ranking recently created Twitter accounts according to their prospective popularity. Early detection of new promising accounts is useful for trend prediction, viral marketing, user recommendation, and so on. New accounts are, however, difficult to evaluate because they have not established their reputations, and we cannot apply existing link-based or other popularity-based account evaluation methods. Our method first finds "early adopters", i.e., users who often find good new information sources earlier than others. Our method then regards new accounts followed by good early adopters as promising, even if they do not have many followers now. In order to find good early adopters, we estimate the frequency of link propagation from each account, i.e., how many times the follow links from the account have been copied by its followers. If its followers have copied many of its follow links in the past, the account must be an early adopter, who finds good information sources earlier than its followers. We develop a method of inferring which links are created by copying which links. One advantage of our method is that it only uses information that can be easily obtained by crawling the neighbors of the target accounts in the current Twitter graph. We evaluated our method in an experiment on Twitter data. We chose then-new accounts from an old snapshot of Twitter, computed their rankings with our method, and compared them with the number of followers the accounts currently have. The result shows that our method produces better rankings than various baseline methods, especially for new accounts that have only a few followers.
The discovery of early adopters in online communities has been discussed in several studies. @cite_14 analyzed how users adopt new content in a social network in Second Life, and identified early adopters, but also found that early adopters do not always have significant influence on the other users. Saez-Trumper et al. @cite_3 proposed a method of identifying early adopters that also have significant influence on others in information networks, such as Twitter, and called such users trendsetters. These studies focused on the temporal relationship of users' adoption of content, such as hashtags and URLs. @cite_13 also proposed a method of identifying leaders in online communities whose actions, e.g., tagging resources or rating songs, are imitated by many users. On the other hand, we focused on the adoption of new Twitter accounts, i.e., the creation of new follow links, and the imitation of them by followers. We showed that an idea similar to theirs can also be applied to this type of action in order to predict the future popularity of new accounts in Twitter. Another contribution of this paper is to develop a method of inferring who copied which follow links, information that is not immediately available.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_3" ], "mid": [ "2107260313", "2043396212", "2096734559" ], "abstract": [ "Social influence determines to a large extent what we adopt and when we adopt it. This is just as true in the digital domain as it is in real life, and has become of increasing importance due to the deluge of user-created content on the Internet. In this paper, we present an empirical study of user-to-user content transfer occurring in the context of a time-evolving social network in Second Life, a massively multiplayer virtual world. We identify and model social influence based on the change in adoption rate following the actions of one's friends and find that the social network plays a significant role in the adoption of content. Adoption rates quicken as the number of friends adopting increases and this effect varies with the connectivity of a particular user. We further find that sharing among friends occurs more rapidly than sharing among strangers, but that content that diffuses primarily through social influence tends to have a more limited audience. Finally, we examine the role of individuals, finding that some play a more active role in distributing content than others, but that these influencers are distinct from the early adopters.", "We introduce a novel frequent pattern mining approach to discover leaders and tribes in social networks. In particular, we consider social networks where users perform actions. Actions may be as simple as tagging resources (urls) as in del.icio.us, rating songs as in Yahoo! Music, or movies as in Yahoo! Movies, or users buying gadgets such as cameras, handhelds, etc. and blogging a review on the gadgets. The assumption is that actions performed by a user can be seen by their network friends. Users seeing their friends' actions are sometimes tempted to perform those actions. We are interested in the problem of studying the propagation of such \"influence\", and on this basis, identifying which users are leaders when it comes to setting the trend for performing various actions. We consider alternative definitions of leaders based on frequent patterns and develop algorithms for their efficient discovery. Our definitions are based on observing the way influence propagates in a time window, as the window is moved in time. Given a social graph and a table of user actions, our algorithms can discover leaders of various flavors by making one pass over the actions table. We run detailed experiments to evaluate the utility and scalability of our algorithms on real-life data. The results of our experiments confirm on the one hand, the efficiency of the proposed algorithm, and on the other hand, the effectiveness and relevance of the overall framework. To the best of our knowledge, this the first frequent pattern based approach to social network mining.", "Influential people have an important role in the process of information diffusion. However, there are several ways to be influential, for example, to be the most popular or the first that adopts a new idea. In this paper we present a methodology to find trendsetters in information networks according to a specific topic of interest. Trendsetters are people that adopt and spread new ideas influencing other people before these ideas become popular. At the same time, not all early adopters are trendsetters because only few of them have the ability of propagating their ideas by their social contacts through word-of-mouth. 
Differently from other influence measures, a trendsetter is not necessarily popular or famous, but the one whose ideas spread over the graph successfully. Other metrics such as node in-degree or even standard PageRank focus only on the static topology of the network. We propose a ranking strategy that focuses on the ability of some users to push new ideas that will be successful in the future. To that end, we combine temporal attributes of nodes and edges of the network with a PageRank-based algorithm to find the trendsetters for a given topic. To test our algorithm we conduct innovative experiments over a large Twitter dataset. We show that nodes with high in-degree tend to arrive late for new trends, while users in the top of our ranking tend to be early adopters that also influence their social contacts to adopt the new trend." ] }
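The link-copy counting and early-adopter ranking described in this record (1512.02341) can be sketched in a few lines of Python. This is our illustrative reading of the idea, not the authors' actual algorithm; all function and variable names are hypothetical, and the copy-inference rule is a deliberately crude simplification of the paper's method.

```python
# Illustrative sketch only; names and the copy heuristic are hypothetical.
from collections import defaultdict

def early_adopter_scores(follow_events):
    """follow_events: iterable of (time, follower, followee) tuples.
    Account a is credited once each time one of its followers creates a
    follow link that a had already created earlier (a crude copy rule)."""
    follows = defaultdict(set)    # u -> accounts u follows
    followers = defaultdict(set)  # u -> accounts following u
    copied = defaultdict(int)     # a -> how often a's links were copied
    for _, u, v in sorted(follow_events):
        for a in follows[u]:      # u follows a, i.e. u is a follower of a
            if v in follows[a]:   # a adopted v before u did
                copied[a] += 1
        follows[u].add(v)
        followers[v].add(u)
    return copied, followers

def rank_new_accounts(candidates, copied, followers):
    """Score a new account by the early-adopter quality of its followers."""
    score = lambda c: sum(copied.get(f, 0) for f in followers.get(c, ()))
    return sorted(candidates, key=score, reverse=True)
```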
1512.02568
2954345759
Complex queries for massive data analysis jobs have become increasingly commonplace. Many such queries contain common subexpressions, either within a single query or among multiple queries submitted as a batch. Conventional query optimizers do not exploit these subexpressions and produce sub-optimal plans. The problem of multi-query optimization (MQO) is to generate an optimal combined evaluation plan by computing common subexpressions once and reusing them. Exhaustive algorithms for MQO explore an O(n^n) search space. Thus, this problem has primarily been tackled using various heuristic algorithms, without providing any theoretical guarantees on the quality of their solution. In this paper, instead of the conventional cost minimization problem, we treat the problem as maximizing a linear transformation of the cost function. We propose a greedy algorithm for this transformed formulation of the problem, which, under weak, intuitive assumptions, provides an approximation factor guarantee for this formulation. We go on to show that this factor is optimal, unless P = NP. Another noteworthy point about our algorithm is that it can be easily incorporated into existing transformation-based optimizers. We finally propose optimizations which can be used to improve the efficiency of our algorithm.
The MQO problem has received significant attention in the past @cite_18 @cite_17 @cite_20 @cite_2 @cite_1 . Initial work @cite_18 @cite_17 @cite_2 @cite_20 proposed solutions that were not fully integrated with the query optimizer and were primarily exhaustive.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_2", "@cite_20", "@cite_17" ], "mid": [ "2058978608", "1987605824", "1548134621", "2069811933", "2150974577" ], "abstract": [ "Some recently proposed extensions to relational database systems, as well as to deductive database systems, require support for multiple-query processing. For example, in a database system enhanced with inference capabilities, a simple query involving a rule with multiple definitions may expand to more than one actual query that has to be run over the database. It is an interesting problem then to come up with algorithms that process these queries together instead of one query at a time. The main motivation for performing such an interquery optimization lies in the fact that queries may share common data. We examine the problem of multiple-query optimization in this paper. The first major contribution of the paper is a systematic look at the problem, along with the presentation and analysis of algorithms that can be used for multiple-query optimization. The second contribution lies in the presentation of experimental results. Our results show that using multiple-query processing algorithms may reduce execution cost considerably.", "Next generation decision support applications, besides being capable of processing huge amounts of data, require the ability to integrate and reason over data from multiple, heterogeneous data sources. Often, these data sources differ in a variety of aspects such as their data models, the query languages they support, and their network protocols. Also, typically they are spread over a wide geographical area. The cost of processing decision support queries in such a setting is quite high. However, processing these queries often involves redundancies such as repeated access of same data source and multiple execution of similar processing sequences. Minimizing these redundancies would significantly reduce the query processing cost. In this paper, we (1) propose an architecture for processing complex decision support queries involving multiple, heterogeneous data sources; (2) introduce the notion of transient-views — materialized views that exist only in the context of execution of a query — that is useful for minimizing the redundancies involved in the execution of these queries; (3) develop a cost-based algorithm that takes a query plan as input and generates an optimal “covering plan”, by minimizing redundancies in the original plan; (4) validate our approach by means of an implementation of the algorithms and a detailed performance study based on TPC-D benchmark queries on a commercial database system; and finally, (5) compare and contrast our approach with work in related areas, in particular, the areas of answering queries using views and optimization using common sub-expressions. Our experiments demonstrate the practicality and usefulness of transient-views in significantly improving the performance of decision support queries.", "We critically evaluate the current state of research in multiple query opGrnization, synthesize the requirements for a modular opCrnizer, and propose an architecture. Our objective is to facilitate future research by providing modular subproblems and a good general-purpose data structure. In rhe context of this archiuzcture. we provide an improved subsumption algorithm. and discuss migration paths from single-query to multiple-query oplimizers. The architecture has three key ingredients. First. each type of work is performed at an appropriate level of abstraction. 
Segond, a uniform and very compact representation stores all candidate strategies. Finally, search is handled as a discrete optimization problem separable horn the query processing tasks. 1. Problem Definition and Objectives A multiple query optimizer (h4QO) takes several queries as input and seeks to generate a good multi-strategy, an executable operator gaph that simultaneously computes answers to all the queries. The idea is to save by evaluating common subexpressions only once. The commonalities to be exploited include identical selections and joins, predicates that subsume other predicates, and also costly physical operators such as relation scans and SOULS. The multiple query optimization problem is to find a multi-strategy that minimizes the total cost (with overlap exploited). Figure 1 .l shows a multi-strategy generated exploiting commonalities among queries Ql-Q3 at both the logical and physical level. To be really satisfactory, a multi-query optimization algorithm must offer solution quality, ejjiciency, and ease of Permission to copy without fee all a part of this mataial is granted provided that the copies are nut made a diitributed for direct commercial advantage, the VIDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Da Blse Endowment. To copy otherwise. urturepublim identify many kinds of commonalities (e.g., by predicate splitting, sharing relation scans); and search effectively to choose a good combination of l-strategies. Efficiency requires that the optimization avoid a combinatorial explosion of possibilities, and that within those it considers, redundant work on common subexpressions be minimized. ‘Finally, ease of implementation is crucial an algorithm will be practically useful only if it is conceptually simple, easy to attach to an optimizer, and requires relatively little additional soft-", "Abstract Multiple-query processing has received a lot of attention recently. The problem arises in many areas, such as extended relational database systems and deductive database systems. In this paper we describe a heuristic search algorithm for this problem. This algorithm uses an improved heuristic function that enables it to expand only a small fraction of the nodes expanded by an algorithm that has been proposed in the past. In addition, it handles implied relationships without increasing the size of the search space or the number of nodes generated in this space. We include both theoretical analysis and experimental results to demonstrate the utility of the algorithm.", "The problem of identifying common subexpressions and using them in the simultaneous optimization of multiple queries is dealt with. In particular, emphasis is placed on the strategy of selecting access plans for single queries and their integration into a global access plan that takes advantage of common tasks. A dynamic programming algorithm is presented for the selection of individual access plans such that the resulting global access plan is of minimum processing cost. The computational complexity of this algorithm represents a significant improvement over existing algorithms. >" ] }
1512.02568
2954345759
Complex queries for massive data analysis jobs have become increasingly commonplace. Many such queries contain common subexpressions, either within a single query or among multiple queries submitted as a batch. Conventional query optimizers do not exploit these subexpressions and produce sub-optimal plans. The problem of multi-query optimization (MQO) is to generate an optimal combined evaluation plan by computing common subexpressions once and reusing them. Exhaustive algorithms for MQO explore an O(n^n) search space. Thus, this problem has primarily been tackled using various heuristic algorithms, without providing any theoretical guarantees on the quality of their solution. In this paper, instead of the conventional cost minimization problem, we treat the problem as maximizing a linear transformation of the cost function. We propose a greedy algorithm for this transformed formulation of the problem, which, under weak, intuitive assumptions, provides an approximation factor guarantee for this formulation. We go on to show that this factor is optimal, unless P = NP. Another noteworthy point about our algorithm is that it can be easily incorporated into existing transformation-based optimizers. We finally propose optimizations which can be used to improve the efficiency of our algorithm.
Subramanian and Venkataraman @cite_1 consider sharing only among the best plans of each query; this approach can be implemented as an efficient post-optimization phase in existing systems, but can be highly suboptimal.
{ "cite_N": [ "@cite_1" ], "mid": [ "1987605824" ], "abstract": [ "Next generation decision support applications, besides being capable of processing huge amounts of data, require the ability to integrate and reason over data from multiple, heterogeneous data sources. Often, these data sources differ in a variety of aspects such as their data models, the query languages they support, and their network protocols. Also, typically they are spread over a wide geographical area. The cost of processing decision support queries in such a setting is quite high. However, processing these queries often involves redundancies such as repeated access of same data source and multiple execution of similar processing sequences. Minimizing these redundancies would significantly reduce the query processing cost. In this paper, we (1) propose an architecture for processing complex decision support queries involving multiple, heterogeneous data sources; (2) introduce the notion of transient-views — materialized views that exist only in the context of execution of a query — that is useful for minimizing the redundancies involved in the execution of these queries; (3) develop a cost-based algorithm that takes a query plan as input and generates an optimal “covering plan”, by minimizing redundancies in the original plan; (4) validate our approach by means of an implementation of the algorithms and a detailed performance study based on TPC-D benchmark queries on a commercial database system; and finally, (5) compare and contrast our approach with work in related areas, in particular, the areas of answering queries using views and optimization using common sub-expressions. Our experiments demonstrate the practicality and usefulness of transient-views in significantly improving the performance of decision support queries." ] }
1512.02568
2954345759
Complex queries for massive data analysis jobs have become increasingly commonplace. Many such queries contain common subexpressions, either within a single query or among multiple queries submitted as a batch. Conventional query optimizers do not exploit these subexpressions and produce sub-optimal plans. The problem of multi-query optimization (MQO) is to generate an optimal combined evaluation plan by computing common subexpressions once and reusing them. Exhaustive algorithms for MQO explore an O(n^n) search space. Thus, this problem has primarily been tackled using various heuristic algorithms, without providing any theoretical guarantees on the quality of their solution. In this paper, instead of the conventional cost minimization problem, we treat the problem as maximizing a linear transformation of the cost function. We propose a greedy algorithm for this transformed formulation of the problem, which, under weak, intuitive assumptions, provides an approximation factor guarantee for this formulation. We go on to show that this factor is optimal, unless P = NP. Another noteworthy point about our algorithm is that it can be easily incorporated into existing transformation-based optimizers. We finally propose optimizations which can be used to improve the efficiency of our algorithm.
To choose the set of nodes to be materialized, @cite_19 use a greedy algorithm which has already been discussed in detail in Section . @cite_27 explores the possibility of sharing intermediate results by pipelining, avoiding unnecessary materializations. @cite_14 consider the MQO problem in Volcano, taking scheduling and caching into account. They present an exhaustive algorithm that takes @math time, which is clearly infeasible.
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_14" ], "mid": [ "", "2645036108", "2124186830" ], "abstract": [ "", "Database systems frequently have to execute a set of related queries, which share several common subexpressions. Multi-query optimization exploits this, by finding evaluation plans that share common results. Current approaches to multi-query optimization assume that common subexpressions are materialized. Significant performance benefits can be had if common subexpressions are pipelined to their uses, without being materialized. However, plans with pipelining may not always be realizable with limited buffer space, as we show. We present a general model for schedules with pipelining, and present a necessary and sufficient condition for determining validity of a schedule under our model. We show that finding a valid schedule with minimum cost is NP-hard. We present a greedy heuristic for finding good schedules. Finally, we present a performance study that shows the benefit of our algorithms on batches of queries from the TPCD benchmark.", "Database systems frequently have to execute a batch of related queries. Multi-query optimization exploits evalu ation plans that share common results. Current approaches to multi-query optimization assume there is infinite disk space, and very limited memory space. Pipelining was the only option considered for avoiding expensive disk writes. The availability of fairly large and inexpensive main memory motivates the need to make best use of available main memory for caching shared results, and scheduling queries in a manner that facilitates caching. Pipelining needs to be exploited at the same time. We look at the problem of multi-query optimization taking into account query scheduling, caching and pipelining. We first prove that MQO with either just query scheduling or just caching is NP-complete. We then provide the first known algorithms for the most general MQO problem with scheduling, caching and pipelining. After showing the connections of this problem with other traditional scheduling problems and graph theoretic problems we outline heuristics for MQO with scheduling, caching and pipelining." ] }
1512.02568
2954345759
Complex queries for massive data analysis jobs have become increasingly commonplace. Many such queries contain common subexpressions, either within a single query or among multiple queries submitted as a batch. Conventional query optimizers do not exploit these subexpressions and produce sub-optimal plans. The problem of multi-query optimization (MQO) is to generate an optimal combined evaluation plan by computing common subexpressions once and reusing them. Exhaustive algorithms for MQO explore an O(n^n) search space. Thus, this problem has primarily been tackled using various heuristic algorithms, without providing any theoretical guarantees on the quality of their solution. In this paper, instead of the conventional cost minimization problem, we treat the problem as maximizing a linear transformation of the cost function. We propose a greedy algorithm for this transformed formulation of the problem, which, under weak, intuitive assumptions, provides an approximation factor guarantee for this formulation. We go on to show that this factor is optimal, unless P = NP. Another noteworthy point about our algorithm is that it can be easily incorporated into existing transformation-based optimizers. We finally propose optimizations which can be used to improve the efficiency of our algorithm.
@cite_23 consider physical properties in a cost-based fashion. However, their solution is also based on heuristics which materialize common subexpressions at the LQDAG level. The best physical property for each subexpression is chosen, and all consumers are forced to use the same physical property, which can be sub-optimal. Furthermore, even with this heuristic, their approach can be very expensive when there are many potential physical properties for each subexpression.
{ "cite_N": [ "@cite_23" ], "mid": [ "2090773975" ], "abstract": [ "Many companies now routinely run massive data analysis jobs -- expressed in some scripting language -- on large clusters of low-end servers. Many analysis scripts are complex and contain common sub expressions, that is, intermediate results that are subsequently joined and aggregated in multiple different ways. Applying conventional optimization techniques to such scripts will produce plans that execute a common sub expression multiple times, once for each consumer, which is clearly wasteful. Moreover, different consumers may have different physical requirements on the result: one consumer may want it partitioned on a column A and another one partitioned on column B. To find a truly optimal plan, the optimizer must trade off such conflicting requirements in a cost-based manner. In this paper we show how to extend a Cascade-style optimizer to correctly optimize scripts containing common sub expression. The approach has been prototyped in SCOPE, Microsoft's system for massive data analysis. Experimental analysis of both simple and large real-world scripts shows that the extended optimizer produces plans with 21 to 57 lower estimated costs." ] }
1512.02568
2954345759
Complex queries for massive data analysis jobs have become increasingly commonplace. Many such queries contain common subexpressions, either within a single query or among multiple queries submitted as a batch. Conventional query optimizers do not exploit these subexpressions and produce sub-optimal plans. The problem of multi-query optimization (MQO) is to generate an optimal combined evaluation plan by computing common subexpressions once and reusing them. Exhaustive algorithms for MQO explore an O(n^n) search space. Thus, this problem has primarily been tackled using various heuristic algorithms, without providing any theoretical guarantees on the quality of their solution. In this paper, instead of the conventional cost minimization problem, we treat the problem as maximizing a linear transformation of the cost function. We propose a greedy algorithm for this transformed formulation of the problem, which, under weak, intuitive assumptions, provides an approximation factor guarantee for this formulation. We go on to show that this factor is optimal, unless P = NP. Another noteworthy point about our algorithm is that it can be easily incorporated into existing transformation-based optimizers. We finally propose optimizations which can be used to improve the efficiency of our algorithm.
Submodular maximization has received a significant amount of attention in optimization @cite_3 @cite_28 @cite_4 and has wide applicability in machine learning, computer vision and information retrieval @cite_15 @cite_12 @cite_22 @cite_9 . In this problem, we are given a submodular function @math and a universe @math , with the goal of selecting a subset @math such that @math is maximized. Typically, @math must satisfy additional feasibility constraints such as cardinality, knapsack or matroid constraints.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_28", "@cite_9", "@cite_3", "@cite_15", "@cite_12" ], "mid": [ "2179494254", "2169551590", "2757107770", "2133374323", "2621717961", "2061958803", "" ], "abstract": [ "We consider the Unconstrained Submodular Maximization problem in which we are given a nonnegative submodular function @math , and the objective is to find a subset @math maximizing @math . This is one of the most basic submodular optimization problems, having a wide range of applications. Some well-known problems captured by Unconstrained Submodular Maximization include Max-Cut, Max-DiCut, and variants of Max-SAT and maximum facility location. We present a simple randomized linear time algorithm achieving a tight approximation guarantee of 1 2, thus matching the known hardness result of Feige, Mirrokni, and Vondrak [SIAM J. Comput., 40 (2011), pp. 1133--1153]. Our algorithm is based on an adaptation of the greedy approach which exploits certain symmetry properties of the problem.", "In this paper we describe a new technique for general purpose interactive segmentation of N-dimensional images. The user marks certain pixels as \"object\" or \"background\" to provide hard constraints for segmentation. Additional soft constraints incorporate both boundary and region information. Graph cuts are used to find the globally optimal segmentation of the N-dimensional image. The obtained solution gives the best balance of boundary and region properties among all segmentations satisfying the constraints. The topology of our segmentation is unrestricted and both \"object\" and \"background\" segments may consist of several isolated parts. Some experimental results are presented in the context of photo video editing and medical image segmentation. We also demonstrate an interesting Gestalt example. A fast implementation of our segmentation method is possible via a new max-flow algorithm.", "LetN be a finite set andz be a real-valued function defined on the set of subsets ofN that satisfies z(S)+z(T)gez(SxcupT)+z(SxcapT) for allS, T inN. Such a function is called submodular. We consider the problem maxSsubN a(S):|S|leK,z(S) submodular . Several hard combinatorial optimization problems can be posed in this framework. For example, the problem of finding a maximum weight independent set in a matroid, when the elements of the matroid are colored and the elements of the independent set can have no more thanK colors, is in this class. The uncapacitated location problem is a special case of this matroid optimization problem. We analyze greedy and local improvement heuristics and a linear programming relaxation for this problem. Our results are worst case bounds on the quality of the approximations. For example, whenz(S) is nondecreasing andz(0) = 0, we show that a ldquogreedyrdquo heuristic always produces a solution whose value is at least 1 –[(K – 1) K] K times the optimal value. This bound can be achieved for eachK and has a limiting value of (e – 1) e, where e is the base of the natural logarithm.", "In this paper, we extend the class of energy functions for which the optimal alpha-expansion and alphabeta-swap moves can be computed in polynomial time. Specifically, we introduce a novel family of higher order clique potentials, and show that the expansion and swap moves for any energy function composed of these potentials can be found by minimizing a submodular function. We also show that for a subset of these potentials, the optimal move can be found by solving an st-mincut problem. 
We refer to this subset as the Pn Potts model. Our results enable the use of powerful alpha-expansion and alphabeta-swap move making algorithms for minimization of energy functions involving higher order cliques. Such functions have the capability of modeling the rich statistics of natural scenes and can be used for many applications in Computer Vision. We demonstrate their use in one such application, i.e., the texture-based image or video-segmentation problem.", "", "We propose a new family of non-submodular global energy functions that still use submodularity internally to couple edges in a graph cut. We show it is possible to develop an efficient approximation algorithm that, thanks to the internal submodularity, can use standard graph cuts as a subroutine. We demonstrate the advantages of edge coupling in a natural setting, namely image segmentation. In particular, for fine-structured objects and objects with shading variation, our structured edge coupling leads to significant improvements over standard approaches.", "" ] }
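For concreteness, the constrained problem defined in the paragraph above can be illustrated with the textbook greedy rule: add the element with the largest marginal gain until the cardinality budget is exhausted. The code below is the classical algorithm on a toy coverage objective, not code from any cited paper; all names are our own.

```python
def greedy_cardinality(universe, f, k):
    """Textbook greedy for max f(S) s.t. |S| <= k: repeatedly add the element
    with the largest marginal gain. For monotone submodular f this achieves
    the classical (1 - 1/e) factor."""
    S = set()
    for _ in range(k):
        gains = {x: f(S | {x}) - f(S) for x in universe - S}
        if not gains:
            break
        x = max(gains, key=gains.get)
        if gains[x] <= 0:
            break
        S.add(x)
    return S

# Toy submodular objective: set coverage.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
cover = lambda S: len(set().union(*(sets[x] for x in S))) if S else 0
print(greedy_cardinality(set(sets), cover, 2))  # covers 3 of the 4 elements
```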
1512.02568
2954345759
Complex queries for massive data analysis jobs have become increasingly commonplace. Many such queries contain common subexpressions, either within a single query or among multiple queries submitted as a batch. Conventional query optimizers do not exploit these subexpressions and produce sub-optimal plans. The problem of multi-query optimization (MQO) is to generate an optimal combined evaluation plan by computing common subexpressions once and reusing them. Exhaustive algorithms for MQO explore an O(n^n) search space. Thus, this problem has primarily been tackled using various heuristic algorithms, without providing any theoretical guarantees on the quality of their solution. In this paper, instead of the conventional cost minimization problem, we treat the problem as maximizing a linear transformation of the cost function. We propose a greedy algorithm for this transformed formulation of the problem, which, under weak, intuitive assumptions, provides an approximation factor guarantee for this formulation. We go on to show that this factor is optimal, unless P = NP. Another noteworthy point about our algorithm is that it can be easily incorporated into existing transformation-based optimizers. We finally propose optimizations which can be used to improve the efficiency of our algorithm.
This problem is @math -hard even for the simplest variants, which involve only monotone functions and cardinality constraints. @cite_28 show that a simple greedy algorithm gives a @math approximation for monotone submodular maximization under cardinality constraints. They further show that it is @math -hard to obtain a better approximation guarantee. Sviridenko @cite_6 presents a modified greedy algorithm for monotone submodular function maximization under knapsack constraints. This algorithm is the main motivation for our marginal greedy algorithm.
{ "cite_N": [ "@cite_28", "@cite_6" ], "mid": [ "2757107770", "2033885045" ], "abstract": [ "LetN be a finite set andz be a real-valued function defined on the set of subsets ofN that satisfies z(S)+z(T)gez(SxcupT)+z(SxcapT) for allS, T inN. Such a function is called submodular. We consider the problem maxSsubN a(S):|S|leK,z(S) submodular . Several hard combinatorial optimization problems can be posed in this framework. For example, the problem of finding a maximum weight independent set in a matroid, when the elements of the matroid are colored and the elements of the independent set can have no more thanK colors, is in this class. The uncapacitated location problem is a special case of this matroid optimization problem. We analyze greedy and local improvement heuristics and a linear programming relaxation for this problem. Our results are worst case bounds on the quality of the approximations. For example, whenz(S) is nondecreasing andz(0) = 0, we show that a ldquogreedyrdquo heuristic always produces a solution whose value is at least 1 –[(K – 1) K] K times the optimal value. This bound can be achieved for eachK and has a limiting value of (e – 1) e, where e is the base of the natural logarithm.", "In this paper, we obtain an (1-e^-^1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n^5) function value computations." ] }
1512.02568
2954345759
Complex queries for massive data analysis jobs have become increasingly commonplace. Many such queries contain common subexpressions, either within a single query or among multiple queries submitted as a batch. Conventional query optimizers do not exploit these subexpressions and produce sub-optimal plans. The problem of multi-query optimization (MQO) is to generate an optimal combined evaluation plan by computing common subexpressions once and reusing them. Exhaustive algorithms for MQO explore an O(n^n) search space. Thus, this problem has primarily been tackled using various heuristic algorithms, without providing any theoretical guarantees on the quality of their solution. In this paper, instead of the conventional cost minimization problem, we treat the problem as maximizing a linear transformation of the cost function. We propose a greedy algorithm for this transformed formulation of the problem, which, under weak, intuitive assumptions, provides an approximation factor guarantee for this formulation. We go on to show that this factor is optimal, unless P = NP. Another noteworthy point about our algorithm is that it can be easily incorporated into existing transformation-based optimizers. We finally propose optimizations which can be used to improve the efficiency of our algorithm.
@cite_4 gave a @math -approximation algorithm for unconstrained non-monotone submodular maximization, for which there is a matching hardness result. However, all these results assume non-negativity of the function @math . Mittal and Schulz @cite_24 show that a constant factor approximation for non-negative supermodular minimization is @math -hard. Inapproximability of non-monotone submodular maximization (with possibly negative values) is also well known. To the best of our knowledge, ours is the first work which, under the assumption of @math , provides an approximation algorithm with a matching hardness of approximation result for unconstrained non-monotone submodular maximization when the function may take negative values. Since the hardness of approximation factor depends on the optimal value (and may go to 0), this rules out constant factor approximations for the problem even in the restricted setting of @math .
{ "cite_N": [ "@cite_24", "@cite_4" ], "mid": [ "2007598971", "2179494254" ], "abstract": [ "We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of non-linear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme does not require the assumption of quasi-concavity on the objective function. For the special case of quasi-concave function minimization, we give an alternative FPTAS, which always returns a solution which is an extreme point of the polytope. Our technique can also be used to obtain an FPTAS for combinatorial optimization problems with non-linear objective functions, for example when the objective is a product of a fixed number of linear functions. We also show that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor, unless P = NP. We prove this by showing a similar hardness of approximation result for supermodular function minimization, a result that may be of independent interest.", "We consider the Unconstrained Submodular Maximization problem in which we are given a nonnegative submodular function @math , and the objective is to find a subset @math maximizing @math . This is one of the most basic submodular optimization problems, having a wide range of applications. Some well-known problems captured by Unconstrained Submodular Maximization include Max-Cut, Max-DiCut, and variants of Max-SAT and maximum facility location. We present a simple randomized linear time algorithm achieving a tight approximation guarantee of 1 2, thus matching the known hardness result of Feige, Mirrokni, and Vondrak [SIAM J. Comput., 40 (2011), pp. 1133--1153]. Our algorithm is based on an adaptation of the greedy approach which exploits certain symmetry properties of the problem." ] }
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
The first significant efforts to create standard datasets were the Middlebury datasets for stereo disparity estimation @cite_23 and optical flow estimation @cite_12 . While the stereo dataset consists of real scenes, the optical flow dataset is a mixture of real and rendered scenes. Both datasets are very small by today's standards. The small test sets in particular have led to heavy manual overfitting. An advantage of the stereo dataset is the availability of relevant real scenes, especially in the latest high-resolution version from 2014 @cite_2 .
{ "cite_N": [ "@cite_12", "@cite_23", "@cite_2" ], "mid": [ "2147253850", "2104974755", "63091017" ], "abstract": [ "The quantitative evaluation of optical flow algorithms by (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by , we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http: vision.middlebury.edu flow . Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.", "Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.", "We present a structured lighting system for creating high-resolution stereo datasets of static indoor scenes with highly accurate ground-truth disparities. The system includes novel techniques for efficient 2D subpixel correspondence search and self-calibration of cameras and projectors with modeling of lens distortion. Combining disparity estimates from multiple projector positions we are able to achieve a disparity accuracy of 0.2 pixels on most observed surfaces, including in half-occluded regions. We contribute 33 new 6-megapixel datasets obtained with our system and demonstrate that they present new challenges for the next generation of stereo algorithms." ] }
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
MPI Sintel @cite_15 is an entirely synthetic dataset derived from a short open source animated 3D movie. It provides dense ground truth for optical flow. Very recently, a beta version of the disparity ground truth has also become available for training. With @math training frames, the Sintel dataset is the largest dataset currently available. It contains sufficiently realistic scenes, including natural image degradations such as fog and motion blur. The authors put much effort into the correctness of the ground truth for all frames and pixels. This makes the dataset a very reliable test set for comparing methods. However, for training convolutional networks, the dataset is still too small.
{ "cite_N": [ "@cite_15" ], "mid": [ "1513100184" ], "abstract": [ "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available." ] }
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
The KITTI dataset was produced in 2012 @cite_6 and extended in 2015 @cite_0 . It contains stereo videos of road scenes from a calibrated pair of cameras mounted on a car. Ground truth for optical flow and disparity is obtained from a 3D laser scanner combined with the egomotion data of the car. While the dataset contains real data, the acquisition method restricts the ground truth to static parts of the scene. Moreover, the laser only provides sparse data up to a certain distance and height. For the most recent version, 3D models of cars were fitted to the point clouds to obtain a denser labeling and to include also moving objects. However, the ground truth in these areas is still an approximation.
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "1921093919", "2115579991" ], "abstract": [ "This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.", "We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide." ] }
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
@cite_26 trained convolutional networks for optical flow estimation on a synthetic dataset of moving 2D chair images superimposed on natural background images. This dataset is large but limited to single-view optical flow. It does not contain 3D motions and is not yet publicly available.
{ "cite_N": [ "@cite_26" ], "mid": [ "2951309005" ], "abstract": [ "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps." ] }
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
Convolutional networks @cite_19 have proven very successful for a variety of recognition tasks, such as image classification @cite_8 . Recent applications of convolutional networks include also depth estimation from single images @cite_24 , stereo matching @cite_11 , and optical flow estimation @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_24", "@cite_19", "@cite_11" ], "mid": [ "2951309005", "2618530766", "2171740948", "2147800946", "2963502507" ], "abstract": [ "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. 
A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.", "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets." ] }
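The patch-based matching cost network described in the last abstract above can be caricatured as a small siamese feature extractor whose output similarity initializes the stereo matching cost. The sketch below is illustrative only: the layer sizes and the cosine-similarity head are our assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

class PatchSiamese(nn.Module):
    """Toy siamese patch comparator; its similarity output would initialize
    the stereo matching cost. Layer sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.ReLU(),
        )

    def forward(self, left_patch, right_patch):
        fl = self.feat(left_patch).flatten(1)
        fr = self.feat(right_patch).flatten(1)
        # Cosine similarity plays the role of a (negated) matching cost.
        return nn.functional.cosine_similarity(fl, fr, dim=1)

# 9x9 grayscale patches in, one similarity score per pair out.
net = PatchSiamese()
scores = net(torch.randn(8, 1, 9, 9), torch.randn(8, 1, 9, 9))  # shape (8,)
```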
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
The FlowNet of @cite_26 is the most closely related to our work. It uses an encoder-decoder architecture with additional cross-links between the contracting and expanding parts of the network: the encoder computes abstract features from receptive fields of increasing size, and the decoder re-establishes the original resolution via an expanding up-convolutional architecture @cite_13 . We adapt this approach for disparity estimation.
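To make the described architecture concrete, here is a minimal sketch (ours, not the authors' released code) of an encoder-decoder with a cross-link and an up-convolutional decoder; all channel counts, layer depths, and the input size are illustrative placeholders.

```python
# Minimal encoder-decoder with a cross-link (skip connection), in the spirit
# of the architecture described above. This is a sketch, not the paper's model.
import torch
import torch.nn as nn

class TinyEncDec(nn.Module):
    def __init__(self, in_ch=6):  # e.g. a channel-concatenated stereo pair
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)  # up-convolution
        self.up2 = nn.ConvTranspose2d(64, 16, 4, stride=2, padding=1)
        self.pred = nn.Conv2d(16, 1, 3, padding=1)  # 1-channel disparity map

    def forward(self, x):
        f1 = self.enc1(x)                # 1/2 resolution, abstracted features
        f2 = self.enc2(f1)               # 1/4 resolution
        d1 = torch.relu(self.up1(f2))    # expand back to 1/2 resolution
        d1 = torch.cat([d1, f1], dim=1)  # cross-link from the contracting part
        d2 = torch.relu(self.up2(d1))    # back to full resolution
        return self.pred(d2)

out = TinyEncDec()(torch.randn(1, 6, 64, 128))  # -> shape (1, 1, 64, 128)
```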
{ "cite_N": [ "@cite_26", "@cite_13" ], "mid": [ "2951309005", "1893585201" ], "abstract": [ "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.", "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task." ] }
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
The disparity estimation method of Žbontar and LeCun @cite_11 uses a Siamese network for computing matching distances between image patches. To actually estimate the disparity, the authors then perform cross-based cost aggregation @cite_5 and semi-global matching (SGM) @cite_25 . In contrast to our work, Žbontar and LeCun do not train a convolutional network end-to-end on the disparity estimation task, with corresponding consequences for computational efficiency and elegance.
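For illustration, a minimal sketch of the Siamese matching-cost idea, assuming small grayscale patches; the patch size and channel widths are our placeholders, and the cost aggregation and SGM stages are omitted.

```python
# Sketch of a Siamese matching-cost network: one shared convolutional branch
# embeds left/right patches, and the similarity of the embeddings serves as
# the stereo matching cost. Post-processing (aggregation, SGM) is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3),
        )

    def forward(self, patch):              # patch: (N, 1, 9, 9)
        f = self.net(patch).flatten(1)     # same weights for both views
        return F.normalize(f, dim=1)       # unit-length descriptor

branch = SiameseBranch()
left, right = torch.randn(8, 1, 9, 9), torch.randn(8, 1, 9, 9)
score = (branch(left) * branch(right)).sum(dim=1)  # cosine similarity
cost = 1.0 - score                                  # low cost = good match
```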
{ "cite_N": [ "@cite_5", "@cite_25", "@cite_11" ], "mid": [ "2113873920", "2117248802", "2963502507" ], "abstract": [ "We propose an area-based local stereo matching algorithm for accurate disparity estimation across all image regions. A well-known challenge to local stereo methods is to decide an appropriate support window for the pixel under consideration, adapting the window shape or the pixelwise support weight to the underlying scene structures. Our stereo method tackles this problem with two key contributions. First, for each anchor pixel an upright cross local support skeleton is adaptively constructed, with four varying arm lengths decided on color similarity and connectivity constraints. Second, given the local cross-decision results, we dynamically construct a shape-adaptive full support region on the fly, merging horizontal segments of the crosses in the vertical neighborhood. Approximating image structures accurately, the proposed method is among the best performing local stereo methods according to the benchmark Middlebury stereo evaluation. Additionally, it reduces memory consumption significantly thanks to our compact local cross representation. To accelerate matching cost aggregation performed in an arbitrarily shaped 2-D region, we also propose an orthogonal integral image technique, yielding a speedup factor of 5-15 over the straightforward integration.", "This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (Ml)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 seconds on typical test images. An in depth evaluation of the Ml-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems.", "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. 
A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets." ] }
1512.02134
2259424905
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
The fastest method among the top seven on the KITTI scene flow benchmark is that of @cite_21 , with a runtime of 2.4 seconds. It employs a seed-growing algorithm for simultaneous disparity and optical flow estimation.
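A toy sketch of the seed-growing idea (our simplification, disparity only, not the cited implementation): high-confidence seed correspondences are greedily propagated to neighboring pixels as long as a simple photometric cost stays low, which inherently prunes the disparity search space.

```python
# Best-first seed growing for disparity: pop the cheapest candidate match,
# commit it, and push its neighbors with nearby disparity hypotheses.
import heapq
import numpy as np

def grow_disparity(left, right, seeds, max_cost=10.0, window=2):
    """left/right: 2D float grayscale arrays; seeds: list of (y, x, d>=0)."""
    h, w = left.shape
    disp = np.full((h, w), -1, dtype=int)

    def cost(y, x, d):
        # Sum-of-absolute-differences over a small window; inf if out of bounds.
        if x - d - window < 0 or x + window >= w or y - window < 0 or y + window >= h:
            return np.inf
        pl = left[y - window:y + window + 1, x - window:x + window + 1]
        pr = right[y - window:y + window + 1, x - d - window:x - d + window + 1]
        return float(np.mean(np.abs(pl - pr)))

    heap = [(cost(y, x, d), y, x, d) for (y, x, d) in seeds]
    heapq.heapify(heap)
    while heap:
        c, y, x, d = heapq.heappop(heap)
        if c > max_cost or disp[y, x] >= 0:
            continue                        # too costly, or already assigned
        disp[y, x] = d
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighborhood
            ny, nxx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nxx < w and disp[ny, nxx] < 0:
                for nd in (d - 1, d, d + 1):  # only try disparities near the seed
                    if nd >= 0:
                        heapq.heappush(heap, (cost(ny, nxx, nd), ny, nxx, nd))
    return disp
```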
{ "cite_N": [ "@cite_21" ], "mid": [ "2119090229" ], "abstract": [ "A simple seed growing algorithm for estimating scene flow in a stereo setup is presented. Two calibrated and synchronized cameras observe a scene and output a sequence of image pairs. The algorithm simultaneously computes a disparity map between the image pairs and optical flow maps between consecutive images. This, together with calibration data, is an equivalent representation of the 3D scene flow, i.e. a 3D velocity vector is associated with each reconstructed point. The proposed method starts from correspondence seeds and propagates these correspondences to their neighborhood. It is accurate for complex scenes with large motions and produces temporally-coherent stereo disparity and optical flow results. The algorithm is fast due to inherent search space reduction. An explicit comparison with recent methods of spatiotemporal stereo and variational optical and scene flow is provided." ] }
1512.02181
2189646513
Teaching dimension is a learning theoretic quantity that specifies the minimum training set size to teach a target model to a learner. Previous studies on teaching dimension focused on version-space learners which maintain all hypotheses consistent with the training data, and cannot be applied to modern machine learners which select a specific hypothesis via optimization. This paper presents the first known teaching dimension for ridge regression, support vector machines, and logistic regression. We also exhibit optimal training sets that match these teaching dimensions. Our approach generalizes to other linear learners.
Teaching dimension as a learning-theoretic quantity has a long history of research. It was proposed independently in @cite_1 @cite_17 . Subsequent theoretical developments can be found in, e.g., @cite_14 @cite_7 @cite_15 @cite_16 @cite_3 @cite_23 @cite_21 @cite_10 @cite_19 @cite_6 @cite_8 @cite_0 @cite_2 . Most of these works assume little knowledge about the learner other than that it is consistent with the training data. While such version-space learners are elegant objects of theoretical study, they diverge from the practice of modern machine learning. Our present paper is among the first to extend teaching dimension to optimization-based machine learners.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_19", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2146938707", "1583136812", "2027995847", "1570266953", "2020764470", "1996389440", "", "1606414949", "59277799", "2067905901", "2146250995", "", "2071268134", "", "2012508367" ], "abstract": [ "While most supervised machine learning models assume that training examples are sampled at random or adversarially, this article is concerned with models of learning from a cooperative teacher that selects \"helpful\" training examples. The number of training examples a learner needs for identifying a concept in a given class C of possible target concepts (sample complexity of C) is lower in models assuming such teachers, that is, \"helpful\" examples can speed up the learning process. The problem of how a teacher and a learner can cooperate in order to reduce the sample complexity, yet without using \"coding tricks\", has been widely addressed. Nevertheless, the resulting teaching and learning protocols do not seem to make the teacher select intuitively \"helpful\" examples. The two models introduced in this paper are built on what we call subset teaching sets and recursive teaching sets. They extend previous models of teaching by letting both the teacher and the learner exploit knowing that the partner is cooperative. For this purpose, we introduce a new notion of \"coding trick\" \"collusion\". We show how both resulting sample complexity measures (the subset teaching dimension and the recursive teaching dimension) can be arbitrarily lower than the classic teaching dimension and known variants thereof, without using coding tricks. For instance, monomials can be taught with only two examples independent of the number of variables. The subset teaching dimension turns out to be nonmonotonic with respect to subclasses of concept classes. We discuss why this nonmonotonicity might be inherent in many interesting cooperative teaching and learning scenarios.", "The present paper surveys recent developments in algorithmic teaching. First, the traditional teaching dimension model is recalled. Starting from the observation that the teaching dimension model sometimes leads to counterintuitive results, recently developed approaches are presented. Here, main emphasis is put on the following aspects derived from human teaching learning behavior: the order in which examples are presented should matter; teaching should become harder when the memory size of the learners decreases; teaching should become easier if the learners provide feedback; and it should be possible to teach infinite concepts and or finite and infinite concept classes. Recent developments in the algorithmic teaching achieving (some) of these aspects are presented and compared.", "WF ?X]JIO (’ th? l> wt’r of teaChillg by StUdyiIlg two uu-lille learuing models: teach er-clirecteci learninE and self-dlrectecl learning. In both models, the learner tries to identify an unkuowu concept based on examples of the concept presented one at, a time. The learner predirts wheth r each example is positive or negative with immediate feedback, and the ol)ject,ive is to minimize the uurnl)er of predict,iou mistakes. ThP examples are selected by the teacher in teacher-dlrectecl learning and hy tlhe learner itself in self-directed learning. 
R,oughly, teacher-directed learning represents the scenario in which a teacher teaches a class of learners, and self-directed learning represents the scenario in which a smart learnerasks questious and learns by itself. For all previolmly studied concept classes, the rnirrimum numl)er of mistalws in teacller-ciirectf ecl learning is always larger than that, in self-directed learning. This raises an mtermting question [.)t’ whrt, hrr teaching is helpful for all learners mrlu(ling the smart learner’. Assuming the existence of clue-way functioms, we construct com cept clahses for which the miuimum nurnher of mislakes is hnear in teacher-directed learning I,ut sllI>rrlJolyllorlllal m self-directed learning, cler lc llst,rt tillg the power of a helpful teacher in a Iearmng process.", "The present paper introduces a new model for teaching randomized learners. Our new model, though based on the classical teaching dimension model, allows to study the influence of various parameters such as the learner’s memory size, its ability to provide or to not provide feedback, and the influence of the order in which examples are presented. Furthermore, within the new model it is possible to investigate new aspects of teaching like teaching from positive data only or teaching with inconsistent teachers. Furthermore, we provide characterization theorems for teachability from positive data for both ordinary teachers and inconsistent teachers with and without feedback.", "While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension, Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class.", "We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the nonintuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher learner pairs in which both the teacher and learner are polynomial-time algorithms and describe teacher learner pairs for the classes of 1-decision lists and Horn sentences", "", "We study the self-directed (SD) learning model. In this model a learner chooses examples, guesses their classification and receives immediate feedback indicating the correctness of its guesses. We consider several fundamental questions concerning this model: the parameters of a task that determine the cost of learning, the computational complexity of a student, and the relationship between this model and the teacher-directed (TD) learning model. 
We answer the open problem of relating the cost of self-directed learning to the VC-dimension by showing that no such relation exists. Furthermore, we refute the conjecture that for the intersection-closed case, the cost of self-directed learning is bounded by the VC-dimension. We also show that the cost of SD learning may be arbitrarily higher that that of TD learning. Finally, we discuss the number of queries needed for learning in this model and its relationship to the number of mistakes the student incurs. We prove a trade-off formula showing that an algorithm that makes fewer queries throughout its learning process, necessarily suffers a higher number of mistakes.", "Teaching is inextricably linked to learning, and there are many studies on the complexity of teaching as well as learning in computational learning theory. In this paper, we study the complexity of teaching in the situation that the number of examples is restricted, especially less than its teaching dimension. We formulate a model of teaching by a restricted number of examples, where the complexity is measured by the maximum error to a target concept. We call a concept class is optimally incrementally teachable if the teacher can optimally teach it to the learner whenever teaching is terminated. We study the complexity of the three concept classes of monotone monomials, monomials without the empty concept, and monomials in our model. We show that the boundary of optimally incremental teachability is different from that of polynomial teachability in the classical model. We also show that inconsistent examples help to reduce the maximum error in our model.", "Previous teaching models in the learning theory community have been batch models. That is, in these models the teacher has generated a single set of helpful examples to present to the learner. In this paper we present an interactive model in which the learner has the ability to ask queries as in the query learning model of Angluin. We show that this model is at least as powerful as previous teaching models. We also show that anything learnable with queries, even by a randomized learner, is teachable in our model. In all previous teaching models, all classes shown to be teachable are known to be efficiently learnable. An important concept class that is not known to be learnable is DNF formulas. We demonstrate the power of our approach by providing a deterministic teacher and learner for the class of DNF formulas. The learner makes only equivalence queries and all hypotheses are also DNF formulas.", "This paper is concerned with various combinatorial parameters of classes that can be learned from a small set of examples. We show that the recursive teaching dimension, recently introduced by (2008), is strongly connected to known complexity notions in machine learning, e.g., the self-directed learning complexity and the VC-dimension. To the best of our knowledge these are the first results unveiling such relations between teaching and query learning as well as between teaching and the VC-dimension. It will turn out that for many natural classes the RTD is upper-bounded by the VCD, e.g., classes of VC-dimension 1, intersection-closed classes and finite maximum classes. However, we will also show that there are certain (but rare) classes for which the recursive teaching dimension exceeds the VC-dimension. Moreover, for maximum classes, the combinatorial structure induced by the RTD, called teaching plan, is highly similar to the structure of sample compression schemes. 
Indeed one can transform any repetition-free teaching plan for a maximum class C into an unlabeled sample compression scheme for C and vice versa, while the latter is produced by (i) the corner-peeling algorithm of Rubinstein and Rubinstein (2012) and (ii) the tail matching algorithm of Kuzmin and Warmuth (2007).", "", "", "", "This paper considers computational learning from the view-point of teaching. We introduce a notion of teachability with which we establish a relationship between the learnability and teachability. We also discuss the complexity issues of a teacher in relation to learning." ] }
1512.02181
2189646513
Teaching dimension is a learning theoretic quantity that specifies the minimum training set size to teach a target model to a learner. Previous studies on teaching dimension focused on version-space learners which maintain all hypotheses consistent with the training data, and cannot be applied to modern machine learners which select a specific hypothesis via optimization. This paper presents the first known teaching dimension for ridge regression, support vector machines, and logistic regression. We also exhibit optimal training sets that match these teaching dimensions. Our approach generalizes to other linear learners.
Teaching dimension is distinct from VC dimension. For a finite hypothesis space @math , Goldman and Kearns @cite_1 proved the relation @math . These inequalities are somewhat weak, as Goldman and Kearns exhibited cases where either quantity is much larger than the other. The distinction between TD and VC dimension is also present in our setting. For example, by inspecting the inhomogeneous SVM column in Table we note that TD does not depend on the dimensionality @math of the feature space @math . To see why this makes intuitive sense, note that two @math -dimensional points are sufficient to specify any bisecting hyperplane in @math . On the other hand, recall that the VC dimension of inhomogeneous hyperplanes in @math is @math . Further quantification of the relation between TD and VC dimension (and other capacity measures) remains an open research question.
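The two-point intuition can be checked numerically. In this sketch (ours, assuming scikit-learn is available), we place one positive and one negative example symmetrically about a target hyperplane; the max-margin separator of those two points is their perpendicular bisector, i.e., exactly the target hyperplane, independent of the dimension.

```python
# Two d-dimensional points suffice to "teach" a hard-margin SVM any
# bisecting hyperplane w.x + b = 0: mirror a point across the hyperplane.
import numpy as np
from sklearn.svm import SVC

d = 5
rng = np.random.default_rng(0)
w = rng.normal(size=d)
w /= np.linalg.norm(w)        # target hyperplane normal, ||w|| = 1
b = 0.7                       # target offset

p = rng.normal(size=d)
p -= (w @ p + b) * w          # project p onto the target hyperplane
x_pos, x_neg = p + w, p - w   # symmetric pair straddling the hyperplane

clf = SVC(kernel="linear", C=1e6)  # large C approximates a hard margin
clf.fit(np.vstack([x_pos, x_neg]), [1, -1])

w_hat = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
b_hat = clf.intercept_[0] / np.linalg.norm(clf.coef_[0])
print(np.allclose(w_hat, w, atol=1e-4), np.isclose(b_hat, b, atol=1e-4))  # True True
```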
{ "cite_N": [ "@cite_1" ], "mid": [ "2020764470" ], "abstract": [ "While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension, Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class." ] }
1512.02181
2189646513
Teaching dimension is a learning theoretic quantity that specifies the minimum training set size to teach a target model to a learner. Previous studies on teaching dimension focused on version-space learners which maintain all hypotheses consistent with the training data, and cannot be applied to modern machine learners which select a specific hypothesis via optimization. This paper presents the first known teaching dimension for ridge regression, support vector machines, and logistic regression. We also exhibit optimal training sets that match these teaching dimensions. Our approach generalizes to other linear learners.
The teaching setting we considered is also distinct from active learning. In teaching, the teacher knows the target model, and her goal is to encode the target model as a training set, knowing that the decoder is special (namely, a specific machine learning algorithm). This communication perspective highlights the difference from active learning, which must explore the hypothesis space to find the target model. Consequently, the teaching dimension can be dramatically smaller than the active learning query complexity for the same learner and hypothesis space. For example, Zhu @cite_18 demonstrated that to learn a 1D threshold classifier within @math error, the teaching dimension is a constant TD=2 regardless of @math , while active learning requires @math queries, which can be arbitrarily larger than TD.
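The gap can be illustrated with a small sketch (ours; the function names are not from the cited work): the teacher who knows the threshold needs only two examples straddling it, while a binary-search active learner issues roughly log2(1/eps) label queries.

```python
# Teaching vs. active learning for a 1D threshold classifier h_t(x) = 1[x >= t]
# over [0, 1]: TD = 2 regardless of eps, vs. ~log2(1/eps) active queries.
import math

def teach(t, eps):
    """A teacher who knows t needs only two examples straddling it."""
    return [(t - eps / 2, 0), (t + eps / 2, 1)]  # pins t to within eps

def active_learn(oracle, eps):
    """Binary search: the learner must query ~log2(1/eps) labels."""
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if oracle(mid):        # label 1 -> threshold is at or below mid
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2, queries

t, eps = 0.3141, 1e-4
print(len(teach(t, eps)))                    # 2, independent of eps
est, q = active_learn(lambda x: x >= t, eps)
print(q, math.ceil(math.log2(1 / eps)))      # 14 queries for eps = 1e-4
```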
{ "cite_N": [ "@cite_18" ], "mid": [ "2100046253" ], "abstract": [ "What if there is a teacher who knows the learning goal and wants to design good training data for a machine learner? We propose an optimal teaching framework aimed at learners who employ Bayesian models. Our framework is expressed as an optimization problem over teaching examples that balance the future loss of the learner and the effort of the teacher. This optimization problem is in general hard. In the case where the learner employs conjugate exponential family models, we present an approximate algorithm for finding the optimal teaching set. Our algorithm optimizes the aggregate sufficient statistics, then unpacks them into actual teaching examples. We give several examples to illustrate our framework." ] }
1512.02181
2189646513
Teaching dimension is a learning theoretic quantity that specifies the minimum training set size to teach a target model to a learner. Previous studies on teaching dimension focused on version-space learners which maintain all hypotheses consistent with the training data, and cannot be applied to modern machine learners which select a specific hypothesis via optimization. This paper presents the first known teaching dimension for ridge regression, support vector machines, and logistic regression. We also exhibit optimal training sets that match these teaching dimensions. Our approach generalizes to other linear learners.
While the present paper focused on the theory of optimal teaching, there are practical applications, too. One such application is computer-aided personalized education. The human student is modeled by a computational cognitive model, or equivalently a learning algorithm. The educational goal is encoded in the target model. The optimal teaching set is then well-defined, and represents the best personalized lesson for the student @cite_9 @cite_18 @cite_12 . Patil showed that human students learn statistically significantly better from such an optimal teaching set than from an @math training set @cite_5 . Because contemporary cognitive models often employ optimization-based machine learners, our teaching dimension study helps to characterize these optimal lessons.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_18", "@cite_12" ], "mid": [ "2142544755", "2182055801", "2100046253", "2166493072" ], "abstract": [ "Basic decisions, such as judging a person as a friend or foe, involve categorizing novel stimuli. Recent work finds that people's category judgments are guided by a small set of examples that are retrieved from memory at decision time. This limited and stochastic retrieval places limits on human performance for probabilistic classification decisions. In light of this capacity limitation, recent work finds that idealizing training items, such that the saliency of ambiguous cases is reduced, improves human performance on novel test items. One shortcoming of previous work in idealization is that category distributions were idealized in an ad hoc or heuristic fashion. In this contribution, we take a first principles approach to constructing idealized training sets. We apply a machine teaching procedure to a cognitive model that is either limited capacity (as humans are) or unlimited capacity (as most machine learning systems are). As predicted, we find that the machine teacher recommends idealized training sets. We also find that human learners perform best when training recommendations from the machine teacher are based on a limited-capacity model. As predicted, to the extent that the learning model used by the machine teacher conforms to the true nature of human learners, the recommendations of the machine teacher prove effective. Our results provide a normative basis (given capacity constraints) for idealization procedures and offer a novel selection procedure for models of human learning.", "I draw the reader's attention to machine teaching, the problem of finding an optimal training set given a machine learning algorithm and a target model. In addition to generating fascinating mathematical questions for computer scientists to ponder, machine teaching holds the promise of enhancing education and personnel training. The Socratic dialogue style aims to stimulate critical thinking.", "What if there is a teacher who knows the learning goal and wants to design good training data for a machine learner? We propose an optimal teaching framework aimed at learners who employ Bayesian models. Our framework is expressed as an optimization problem over teaching examples that balance the future loss of the learner and the effort of the teacher. This optimization problem is in general hard. In the case where the learner employs conjugate exponential family models, we present an approximate algorithm for finding the optimal teaching set. Our algorithm optimizes the aggregate sufficient statistics, then unpacks them into actual teaching examples. We give several examples to illustrate our framework.", "We study the empirical strategies that humans follow as they teach a target concept with a simple 1D threshold to a robot.1 Previous studies of computational teaching, particularly the teaching dimension model and the curriculum learning principle, offer contradictory predictions on what optimal strategy the teacher should follow in this teaching task. We show through behavioral studies that humans employ three distinct teaching strategies, one of which is consistent with the curriculum learning principle, and propose a novel theoretical framework as a potential explanation for this strategy. 
This framework, which assumes a teaching goal of minimizing the learner's expected generalization error at each iteration, extends the standard teaching dimension model and offers a theoretical justification for curriculum learning." ] }
1512.02181
2189646513
Teaching dimension is a learning theoretic quantity that specifies the minimum training set size to teach a target model to a learner. Previous studies on teaching dimension focused on version-space learners which maintain all hypotheses consistent with the training data, and cannot be applied to modern machine learners which select a specific hypothesis via optimization. This paper presents the first known teaching dimension for ridge regression, support vector machines, and logistic regression. We also exhibit optimal training sets that match these teaching dimensions. Our approach generalizes to other linear learners.
Another application of optimal teaching is in computer security. In particular, optimal teaching is the mathematical formalism for studying so-called data-poisoning attacks @cite_22 @cite_20 @cite_25 @cite_4 . Here the "teacher" is an attacker who has a nefarious target model in mind. The "student" is a learning agent (such as a spam filter) which accepts data and adapts itself. The attacker wants to minimally perturb the input data in order to steer the learning agent toward the attacker's target model. Teaching dimension quantifies the difficulty of data-poisoning attacks, and enables research on defenses.
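As a concrete toy instance of this attack view (our construction, not taken from the cited works), the following sketch crafts a single training point that makes a closed-form ridge-regression "victim" learner output an attacker-chosen weight vector.

```python
# Craft a one-point training set so that ridge regression,
#   min_w ||Xw - y||^2 + lam * ||w||^2,
# returns the attacker's target w*. With x = w*/||w*|| and
# y = (1 + lam) * ||w*||, the closed-form solution is exactly w*.
import numpy as np

lam = 0.1
w_star = np.array([2.0, -1.0, 0.5])   # attacker's target model

u = w_star / np.linalg.norm(w_star)
X = u[None, :]                         # single input x = u (||x|| = 1)
y = np.array([(1.0 + lam) * np.linalg.norm(w_star)])

# Victim learner: closed-form ridge regression.
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(np.allclose(w_hat, w_star))      # True: the "attack" succeeds
```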
{ "cite_N": [ "@cite_4", "@cite_25", "@cite_22", "@cite_20" ], "mid": [ "2557044351", "290150691", "2125908420", "2293844262" ], "abstract": [ "Forecasting models play a key role in money-making ventures in many different markets. Such models are often trained on data from various sources, some of which may be untrustworthy. An actor in a given market may be incentivised to drive predictions in a certain direction to their own benefit. Prior analyses of intelligent adversaries in a machine-learning context have focused on regression and classification. In this paper we address the non-iid setting of time series forecasting. We consider a forecaster, Bob, using a fixed, known model and a recursive forecasting method. An adversary, Alice, aims to pull Bob's forecasts toward her desired target series, and may exercise limited influence on the initial values fed into Bob's model. We consider the class of linear autoregressive models, and a flexible framework of encoding Alice's desires and constraints. We describe a method of calculating Alice's optimal attack that is computationally tractable, and empirically demonstrate its effectiveness compared to random and greedy baselines on synthetic and real-world time series data. We conclude by discussing defensive strategies in the face of Alice-like adversaries.", "Latent Dirichlet allocation (LDA) is an increasingly popular tool for data analysis in many domains. If LDA output aects decision making (especially when money is involved), there is an incentive for attackers to compromise it. We ask the question: how can an attacker minimally poison the corpus so that LDA produces topics that the attacker wants the LDA user to see? Answering this question is important to characterize such attacks, and to develop defenses in the future. We give a novel bilevel optimization formulation to identify the optimal poisoning attack. We present an ecient solution (up to local optima) using descent method and implicit functions. We demonstrate poisoning attacks on LDA with extensive experiments, and discuss possible defenses.", "Machine learning's ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.", "We investigate a problem at the intersection of machine learning and security: training-set attacks on machine learners. In such attacks an attacker contaminates the training data so that a specific learning algorithm would produce a model profitable to the attacker. Understanding training-set attacks is important as more intelligent agents (e.g. spam filters and robots) are equipped with learning capability and can potentially be hacked via data they receive from the environment. This paper identifies the optimal training-set attack on a broad family of machine learners. First we show that optimal training-set attack can be formulated as a bilevel optimization problem. 
Then we show that for machine learners with certain Karush-Kuhn-Tucker conditions we can solve the bilevel problem efficiently using gradient methods on an implicit function. As examples, we demonstrate optimal training-set attacks on Support Vector Machines, logistic regression, and linear regression with extensive experiments. Finally, we discuss potential defenses against such attacks." ] }
1512.01915
2949379063
The aim of this paper is to investigate the interplay between knowledge shared by a group of agents and its coalition ability. We investigate this relation in the standard context of imperfect information concurrent game. We assume that whenever a set of agents form a coalition to achieve a goal, they share their knowledge before acting. Based on this assumption, we propose a new semantics for alternating-time temporal logic with imperfect information and perfect recall. It turns out that this semantics is sufficient to preserve all the desirable properties of coalition ability in traditional coalitional logics. Meanwhile, we investigate how knowledge sharing within a group of agents contributes to its coalitional ability through the interplay of epistemic and coalition modalities. This work provides a partial answer to the question: which kind of group knowledge is required for a group to achieve their goals in the context of imperfect information.
Finally, it is also worth mentioning that @cite_4 adopts a similar meaning of coalition so as to capture the notion of "knowing how to play". Besides the different motivation, that work is based on the STIT framework and considers only one-step uniform strategies, without investigating the interplay of epistemic and coalitional operators.
{ "cite_N": [ "@cite_4" ], "mid": [ "2083346314" ], "abstract": [ "Reasoning about capabilities, strategies and knowledge is important in the analysis of multiagent systems. Alternating-time Temporal Epistemic Logic (ATEL) was designed with this aim. Nevertheless, the original interpretation of the language suffered from some counterintuitive properties. These are due to the fact that the strategies the agent applies in worlds that he cannot distinguish may not be uniform, in the sense that the same action is applied in all indistinguishable worlds. Several refinements of the original ATEL semantics were proposed since then. In this paper we argue that the STIT framework can easily account for uniform strategies. STIT is a logic of agency that has been proposed in the 90ies in the domain of philosophy of action. It is the logic of constructions of the form \"agent a sees to it that φ\". To support our claim, we first present a straightforward solution in STIT logic augmented by a modal operator of knowledge. Then we offer a simplification, by introducing a modal logic of knowledge-based uniform agency, for one-step strategies, alias choices." ] }
1512.01891
2952039572
This paper proposes to learn high-performance deep ConvNets with sparse neural connections, referred to as sparse ConvNets, for face recognition. The sparse ConvNets are learned in an iterative way, each time one additional layer is sparsified and the entire model is re-trained given the initial weights learned in previous iterations. One important finding is that directly training the sparse ConvNet from scratch failed to find good solutions for face recognition, while using a previously learned denser model to properly initialize a sparser model is critical to continue learning effective features for face recognition. This paper also proposes a new neural correlation-based weight selection criterion and empirically verifies its effectiveness in selecting informative connections from previously learned models in each iteration. When taking a moderately sparse structure (26 -76 of weights in the dense model), the proposed sparse ConvNet model significantly improves the face recognition performance of the previous state-of-the-art DeepID2+ models given the same training data, while it keeps the performance of the baseline model with only 12 of the original parameters.
Orthogonal to weight pruning, @cite_6 @cite_10 @cite_0 explored singular value decomposition and low-rank approximation of neural layers for model compression. @cite_30 @cite_15 proposed knowledge distillation, in which a small model (a single model) is trained to mimic the activations of a large model (an ensemble of models). Our weight pruning method may be combined with these techniques. For example, a small model may first be learned with knowledge distillation; the weights of the small model can then be further pruned according to some significance criterion.
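A rough sketch of this combination, assuming PyTorch models: the distillation step follows the softened-softmax recipe of @cite_15, and plain magnitude pruning stands in for the significance criterion (this paper's actual criterion is correlation-based, which we do not reproduce here).

```python
# Stage 1: distill a small student from a large teacher.
# Stage 2: prune the student's weights by magnitude, then re-train.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, optimizer, T=4.0):
    """One distillation step: match the teacher's temperature-softened outputs."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    loss = F.kl_div(F.log_softmax(student(x) / T, dim=1),
                    soft_targets, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def magnitude_prune(model, keep_ratio=0.26):
    """Zero all but the largest-magnitude weights in each linear layer
    (keep_ratio < 1 assumed; 0.26 echoes the sparse range in the abstract)."""
    for m in model.modules():
        if isinstance(m, torch.nn.Linear):
            n = m.weight.numel()
            k = int(n * keep_ratio)
            thresh = m.weight.abs().flatten().kthvalue(n - k).values
            mask = (m.weight.abs() > thresh).float()
            m.weight.data.mul_(mask)  # sparsify; fine-tune the model afterwards
```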
{ "cite_N": [ "@cite_30", "@cite_6", "@cite_0", "@cite_15", "@cite_10" ], "mid": [ "2952881492", "2167215970", "", "1821462560", "2950967261" ], "abstract": [ "Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on the TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. 
Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks." ] }
1512.01764
2193438280
In this dissertation, we analyze the computational properties of game-theoretic centrality measures. The key idea behind game-theoretic approach to network analysis is to treat nodes as players in a cooperative game, where the value of each coalition of nodes is determined by certain graph properties. Next, the centrality of any individual node is determined by a chosen game-theoretic solution concept (notably, the Shapley value) in the same way as the payoff of a player in a cooperative game. On one hand, the advantage of game-theoretic centrality measures is that nodes are ranked not only according to their individual roles but also according to how they contribute to the role played by all possible subsets of nodes. On the other hand, the disadvantage is that the game-theoretic solution concepts are typically computationally challenging. The main contribution of this dissertation is that we show that a wide variety of game-theoretic solution concepts on networks can be computed in polynomial time. Our focus is on centralities based on the Shapley value and its various extensions, such as the Semivalues and Coalitional Semivalues. Furthermore, we prove #P-hardness of computing the Shapley value in connectivity games and propose an algorithm to compute it. Finally, we analyse computational properties of generalized version of cooperative games in which order of player matters. We propose a new representation for such games, called generalized marginal contribution networks, that allows for polynomial computation in the size of the representation of two dedicated extensions of the Shapley value to this class of games.
The hardness result presented in this chapter is consistent with other studies of the complexity of the Shapley value in various settings. For instance, computing the Shapley value was shown to be #P-complete for weighted majority games and for minimum spanning tree games. @cite_31 obtained negative results for the related problem of computing the Shapley-Shubik power index for spanning connectivity games, which are based on undirected, unweighted multigraphs. Also, @cite_45 showed that computing the Banzhaf index for connectivity games, in which agents own vertices and control adjacent edges and aim to become connected to a certain set of primary edges, is #P-complete. A comprehensive review of these issues, including some positive results for certain settings, can be found in the literature.
{ "cite_N": [ "@cite_31", "@cite_45" ], "mid": [ "2949557526", "2163300279" ], "abstract": [ "We consider the problem of maximizing the probability of hitting a strategically chosen hidden virtual network by placing a wiretap on a single link of a communication network. This can be seen as a two-player win-lose (zero-sum) game that we call the wiretap game. The value of this game is the greatest probability that the wiretapper can secure for hitting the virtual network. The value is shown to equal the reciprocal of the strength of the underlying graph. We efficiently compute a unique partition of the edges of the graph, called the prime-partition, and find the set of pure strategies of the hider that are best responses against every maxmin strategy of the wiretapper. Using these special pure strategies of the hider, which we call omni-connected-spanning-subgraphs, we define a partial order on the elements of the prime-partition. From the partial order, we obtain a linear number of simple two-variable inequalities that define the maxmin-polytope, and a characterization of its extreme points. Our definition of the partial order allows us to find all equilibrium strategies of the wiretapper that minimize the number of pure best responses of the hider. Among these strategies, we efficiently compute the unique strategy that maximizes the least punishment that the hider incurs for playing a pure strategy that is not a best response. Finally, we show that this unique strategy is the nucleolus of the recently studied simple cooperative spanning connectivity game.", "Many multiagent domains where cooperation among agents is crucial to achieving a common goal can be modeled as coalitional games. However, in many of these domains, agents are unequal in their power to affect the outcome of the game. Prior research on weighted voting games has explored power indices, which reflect how much \"real power\" a voter has. Although primarily used for voting games, these indices can be applied to any simple coalitional game. Computing these indices is known to be computationally hard in various domains, so one must sometimes resort to approximate methods for calculating them. We suggest and analyze randomized methods to approximate power indices such as the Banzhaf power index and the Shapley-Shubik power index. Our approximation algorithms do not depend on a specific representation of the game, so they can be used in any simple coalitional game. Our methods are based on testing the game's value for several sample coalitions. We also show that no approximation algorithm can do much better for general coalitional games, by providing lower bounds for both deterministic and randomized algorithms for calculating power indices." ] }
1512.01764
2193438280
In this dissertation, we analyze the computational properties of game-theoretic centrality measures. The key idea behind game-theoretic approach to network analysis is to treat nodes as players in a cooperative game, where the value of each coalition of nodes is determined by certain graph properties. Next, the centrality of any individual node is determined by a chosen game-theoretic solution concept (notably, the Shapley value) in the same way as the payoff of a player in a cooperative game. On one hand, the advantage of game-theoretic centrality measures is that nodes are ranked not only according to their individual roles but also according to how they contribute to the role played by all possible subsets of nodes. On the other hand, the disadvantage is that the game-theoretic solution concepts are typically computationally challenging. The main contribution of this dissertation is that we show that a wide variety of game-theoretic solution concepts on networks can be computed in polynomial time. Our focus is on centralities based on the Shapley value and its various extensions, such as the Semivalues and Coalitional Semivalues. Furthermore, we prove #P-hardness of computing the Shapley value in connectivity games and propose an algorithm to compute it. Finally, we analyse computational properties of generalized version of cooperative games in which order of player matters. We propose a new representation for such games, called generalized marginal contribution networks, that allows for polynomial computation in the size of the representation of two dedicated extensions of the Shapley value to this class of games.
Finally, it should be mentioned that other types of connectivity games have been considered in the literature. These include the vertex connectivity games proposed by @cite_45 and the spanning connectivity games proposed by @cite_31 . The common denominator of these games is that they are concerned with maintaining connectivity between a certain set of nodes, and they further study the ability of each node to affect the outcome of the game. In particular, concepts like the Banzhaf power index and the Shapley-Shubik power index are utilized to measure the influence of nodes.
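To make such influence computations concrete, here is a small Monte Carlo sketch in the spirit of the sampling methods of @cite_45 (the toy s-t connectivity game and all names are ours): each node's Shapley value is estimated by averaging its marginal contributions over random permutations.

```python
# Monte Carlo estimate of Shapley values in a toy connectivity game:
# a coalition has value 1 iff it connects the terminals s and t.
import random
import networkx as nx

def shapley_mc(G, value, n_samples=2000, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    phi = {v: 0.0 for v in nodes}
    for _ in range(n_samples):
        rng.shuffle(nodes)
        coalition, prev = set(), 0.0
        for v in nodes:               # marginal contribution of v
            coalition.add(v)
            cur = value(G, coalition)
            phi[v] += cur - prev
            prev = cur
    return {v: phi[v] / n_samples for v in phi}

def connects_s_t(G, coalition, s="s", t="t"):
    """Simple game: 1 if the subgraph induced by the coalition (plus s, t)
    connects s to t, else 0."""
    H = G.subgraph(coalition | {s, t})
    return 1.0 if nx.has_path(H, s, t) else 0.0

G = nx.path_graph(["s", "a", "b", "t"])   # chain: s - a - b - t
print(shapley_mc(G, connects_s_t))        # a and b share the credit (~0.5 each)
```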
{ "cite_N": [ "@cite_31", "@cite_45" ], "mid": [ "2949557526", "2163300279" ], "abstract": [ "We consider the problem of maximizing the probability of hitting a strategically chosen hidden virtual network by placing a wiretap on a single link of a communication network. This can be seen as a two-player win-lose (zero-sum) game that we call the wiretap game. The value of this game is the greatest probability that the wiretapper can secure for hitting the virtual network. The value is shown to equal the reciprocal of the strength of the underlying graph. We efficiently compute a unique partition of the edges of the graph, called the prime-partition, and find the set of pure strategies of the hider that are best responses against every maxmin strategy of the wiretapper. Using these special pure strategies of the hider, which we call omni-connected-spanning-subgraphs, we define a partial order on the elements of the prime-partition. From the partial order, we obtain a linear number of simple two-variable inequalities that define the maxmin-polytope, and a characterization of its extreme points. Our definition of the partial order allows us to find all equilibrium strategies of the wiretapper that minimize the number of pure best responses of the hider. Among these strategies, we efficiently compute the unique strategy that maximizes the least punishment that the hider incurs for playing a pure strategy that is not a best response. Finally, we show that this unique strategy is the nucleolus of the recently studied simple cooperative spanning connectivity game.", "Many multiagent domains where cooperation among agents is crucial to achieving a common goal can be modeled as coalitional games. However, in many of these domains, agents are unequal in their power to affect the outcome of the game. Prior research on weighted voting games has explored power indices, which reflect how much \"real power\" a voter has. Although primarily used for voting games, these indices can be applied to any simple coalitional game. Computing these indices is known to be computationally hard in various domains, so one must sometimes resort to approximate methods for calculating them. We suggest and analyze randomized methods to approximate power indices such as the Banzhaf power index and the Shapley-Shubik power index. Our approximation algorithms do not depend on a specific representation of the game, so they can be used in any simple coalitional game. Our methods are based on testing the game's value for several sample coalitions. We also show that no approximation algorithm can do much better for general coalitional games, by providing lower bounds for both deterministic and randomized algorithms for calculating power indices." ] }
1512.01764
2193438280
In this dissertation, we analyze the computational properties of game-theoretic centrality measures. The key idea behind game-theoretic approach to network analysis is to treat nodes as players in a cooperative game, where the value of each coalition of nodes is determined by certain graph properties. Next, the centrality of any individual node is determined by a chosen game-theoretic solution concept (notably, the Shapley value) in the same way as the payoff of a player in a cooperative game. On one hand, the advantage of game-theoretic centrality measures is that nodes are ranked not only according to their individual roles but also according to how they contribute to the role played by all possible subsets of nodes. On the other hand, the disadvantage is that the game-theoretic solution concepts are typically computationally challenging. The main contribution of this dissertation is that we show that a wide variety of game-theoretic solution concepts on networks can be computed in polynomial time. Our focus is on centralities based on the Shapley value and its various extensions, such as the Semivalues and Coalitional Semivalues. Furthermore, we prove #P-hardness of computing the Shapley value in connectivity games and propose an algorithm to compute it. Finally, we analyse computational properties of generalized version of cooperative games in which order of player matters. We propose a new representation for such games, called generalized marginal contribution networks, that allows for polynomial computation in the size of the representation of two dedicated extensions of the Shapley value to this class of games.
In an already classic work, @cite_51 applied standard centrality measures to determine the key players in the 9/11 terrorist network. @cite_18 developed an algorithm based on two well-known centrality measures from social network analysis to automatically detect the hidden hierarchy in terrorist networks. Recently, @cite_24 explored the use of social network analysis methods to analyze terrorist networks, mostly focusing on assigning roles to actors in the network. Most of the above-mentioned work focuses on identifying key players in terrorist networks. A complementary approach is proposed by @cite_49 , which combines aspects of traditional social network analysis with a novel multi-agent framework describing how terrorist groups survive despite aggressive counterterrorist operations.
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_51", "@cite_49" ], "mid": [ "", "1892906076", "1903894141", "341170067" ], "abstract": [ "", "This paper provides a novel algorithm to automatically detect the hidden hierarchy in terrorist networks. The algorithm is based on centrality measures used in social network analysis literature. The advantage of such automatic methods is to detect key players in terrorist networks. We illustrate the algorithm over some case studies of terrorist events that have occurred in the past. The results show great promise in detecting high value individuals.", "This paper looks at the difficulty in mapping covert networks. Analyzing networks after an event is fairly easy for prosecution purposes. Mapping covert networks to prevent criminal activity is much more difficult. We examine the network surrounding the tragic events of September 11th, 2001. Through public data we are able to map a portion of the network centered around the 19 dead hijackers. This map gives us some insight into the terrorist organization, yet it is incomplete. Suggestions for further work and research are offered.", "Abstract : The purpose of this paper is to explore the question of how modern terrorist groups manage to survive in the face of aggressive counterterrorist operations by security forces. Al Qa'ida survives to this day, despite the destruction of their Afghanistan sanctuary, the loss of countless key personnel, and continuous pressure by the United States and their allies. Why has al Qa'ida survived? Since much of the literature on terrorism focuses on how to eliminate them, this research paper focuses on why they still endure. In other words, instead of asking, \"How do we kill them,\" this research asks, \"Why don't they die?\" This research employs a dynamic network analysis approach to explore the primary research question of terrorist survival. This analysis combines aspects of traditional social network analysis with a new multi-agent model that describes how terrorist groups raise agents through the organization to positions of prominence. The key to this process is the radicalization of members based on time, connectivity, and belief intensity. The testing dataset comes from the 1998 Tanzania Embassy bombing, expressed in the form of a meta-network. After four testing program iterations, the author concludes that terrorist organizational survival is based on the internal dynamics of leader selection and growth within the group as new members advance. These findings imply a number of recommendations for counterterrorist operations and intelligence activities in order to disrupt the growth and development of new leaders. Additionally, these results imply that current Joint and Army doctrine on network analysis insufficiently addresses the dynamic processes that network diagrams are intended to depict. American military counterinsurgency and counterterrorist operations can be greatly enhanced by moving from a network analysis approach based on structure to one based on dynamics." ] }
1512.02009
2190817982
Modeling document structure is of great importance for discourse analysis and related applications. The goal of this research is to capture the document intent structure by modeling documents as a mixture of topic words and rhetorical words. While the topics remain relatively unchanged throughout a document, the rhetorical functions of sentences usually change following certain orders in discourse. We propose GMM-LDA, a topic-modeling-based Bayesian unsupervised model, to analyze the document intent structure in cooperation with order information. Our model is flexible in that it can incorporate annotations and perform supervised learning. Additionally, entropic regularization can be introduced to model the significant divergence between topics and intents. We perform experiments in both unsupervised and supervised settings; the results show the superiority of our model over several state-of-the-art baselines.
From the algorithmic perspective, our work is grounded in topic models, such as Latent Dirichlet Allocation (LDA) @cite_14 , which have been widely developed for many NLP tasks. Instead of representing documents as bags of words, many extended models take specific structural constraints into consideration @cite_7 @cite_19 . Among these, our work is most closely related to models with order structure. For order modeling, a Markov chain can only capture dependence locally @cite_10 @cite_20 @cite_23 , whereas the generalized Mallows model (GMM) @cite_11 takes a global view @cite_5 @cite_16 @cite_21 . A more complete model can be obtained by dividing the words into different types. In an early attempt, zoneLDAb @cite_4 reserves one type of words for describing background, independent of the category of the sentence. Boilerplate-LDA @cite_10 also considers two types: document-specific topic words and rhetorical words. Three types of words are learned by a rule-based method in @cite_17 . However, none of these models considers global order structure. Therefore, jointly modeling topics and intents with a global order structure is of great value.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_21", "@cite_17", "@cite_19", "@cite_23", "@cite_5", "@cite_16", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "1880262756", "221511872", "2124585778", "2107853397", "2250886571", "1498269992", "", "2133227439", "887185921", "2252180731", "2949952668", "2495505314" ], "abstract": [ "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "Document zone identification aims to automatically classify sequences of text-spans (e.g. sentences) within a document into predefined zone categories. Current approaches to document zone identification mostly rely on supervised machine learning methods, which require a large amount of annotated data, which is often difficult and expensive to obtain. In order to overcome this bottleneck, we propose graphical models based on the popular Latent Dirichlet Allocation (LDA) model. The first model, which we call zoneLDA aims to cluster the sentences into zone classes using only unlabelled data. We also study an extension of zoneLDA called zoneLDAb, which makes distinction between common words and non-common words within the different zone types. We present results on two different domains: the scientific domain and the technical domain. For the latter one we propose a new document zone classification schema, which has been annotated over a collection of 689 documents, achieving a Kappa score of 85 . Overall our experiments show promising results for both of the domains, outperforming the baseline model. Furthermore, on the technical domain the performance of the models are comparable to the supervised approach using the same feature sets. We thus believe that graphical models are a promising avenue of research for automatic document zoning.", "We present a method for unsupervised topic modelling which adapts methods used in document classification (, 2003; Griffiths and Steyvers, 2004) to unsegmented multi-party discourse transcripts. We show how Bayesian inference in this generative model can be used to simultaneously address the problems of topic segmentation and topic identification: automatically segmenting multi-party meetings into topically coherent segments with performance which compares well with previous unsupervised segmentation-only methods (, 2003) while simultaneously extracting topics which rate highly when assessed for coherence by human judges. We also show that this method appears robust in the face of off-topic dialogue and speech recognition errors.", "The label ranking problem consists of learning a model that maps instances to total orders over a finite set of predefined labels. This paper introduces new methods for label ranking that complement and improve upon existing approaches. 
More specifically, we propose extensions of two methods that have been used extensively for classification and regression so far, namely instance-based learning and decision tree induction. The unifying element of the two methods is a procedure for locally estimating predictive probability models for label rankings.", "The goal of this research is to build a model to predict stock price movement using sentiments on social media. A new feature which captures topics and their sentiments simultaneously is introduced in the prediction model. In addition, a new topic model TSLDA is proposed to obtain this feature. Our method outperformed a model using only historical prices by about 6.07 in accuracy. Furthermore, when comparing to other sentiment analysis methods, the accuracy of our method was also better than LDA and JST based methods by 6.43 and 6.07 . The results show that incorporation of the sentiment information from social media can help to improve the stock prediction.", "Algorithms such as Latent Dirichlet Allocation (LDA) have achieved significant progress in modeling word document relationships. These algorithms assume each word in the document was generated by a hidden topic and explicitly model the word distribution of each topic as well as the prior distribution over topics in the document. Given these parameters, the topics of all words in the same document are assumed to be independent. In this paper, we propose modeling the topics of words in the document as a Markov chain. Specifically, we assume that all words in the same sentence have the same topic, and successive sentences are more likely to have the same topics. Since the topics are hidden, this leads to using the well-known tools of Hidden Markov Models for learning and inference. We show that incorporating this dependency allows us to learn better topics and to disambiguate words that can belong to different topics. Quantitatively, we show that we obtain better perplexity in modeling documents with only a modest increase in learning and inference complexity.", "", "We present a novel Bayesian topic model for learning discourse-level document structure. Our model leverages insights from discourse theory to constrain latent topic assignments in a way that reflects the underlying organization of document topics. We propose a global model in which both topic selection and ordering are biased to be similar across a collection of related documents. We show that this space of orderings can be effectively represented using a distribution over permutations called the Generalized Mallows Model. We apply our method to three complementary discourse-level tasks: cross-document alignment, document segmentation, and information ordering. Our experiments show that incorporating our permutation-based model in these applications yields substantial improvements in performance over previously proposed methods.", "Documents from the same domain usually discuss similar topics in a similar order. However, the number of topics and the exact topics discussed in each individual document can vary. In this paper we present a simple topic model that uses generalised Mallows models and incomplete topic orderings to incorporate this ordering regularity into the probabilistic generative process of the new model. We show how to reparameterise the new model so that a point-wise sampling algorithm from the Bayesian word segmentation literature can be used for inference. 
This algorithm jointly samples not only the topic orders and the topic assignments but also topic segmentations of documents. Experimental results show that our model performs significantly better than the other ordering-based topic models on nearly all the corpora that we used, and competitively with other state-of-the-art topic segmentation models on corpora that have a strong ordering regularity.", "In this paper we investigate whether unsupervised models can be used to induce conventional aspects of rhetorical language in scientific writing. We rely on the intuition that the rhetorical language used in a document is general in nature and independent of the document’s topic. We describe a Bayesian latent-variable model that implements this intuition. In two empirical evaluations based on the task of argumentative zoning (AZ), we demonstrate that our generality hypothesis is crucial for distinguishing between rhetorical and topical language and that features provided by our unsupervised model trained on a large corpus can improve the performance of a supervised AZ classifier.", "We consider the problem of modeling the content structure of texts within a specific domain, in terms of the topics the texts address and the order in which these topics appear. We first present an effective knowledge-lean method for learning content models from un-annotated documents, utilizing a novel adaptation of algorithms for Hidden Markov Models. We then apply our method to two complementary tasks: information ordering and extractive summarization. Our experiments show that incorporating content models in these applications yields substantial improvement over previously-proposed methods.", "On examine le modele de type Mallows pour une distance generale et on etudie ensuite les distances pour lesquelles le modele peut etre decompose en facteurs representant des etapes independantes dans le processus de classement" ] }
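As background for the LDA-based machinery referenced above, a minimal collapsed Gibbs sampler for plain LDA is sketched below. This is deliberately vanilla LDA from @cite_14 , not GMM-LDA with its Mallows-model ordering of intents; the hyperparameters, the toy corpus, and all names are illustrative assumptions.

```python
import numpy as np

def lda_gibbs(docs, vocab_size, num_topics, alpha=0.1, beta=0.01,
              iters=200, seed=0):
    """Collapsed Gibbs sampling for vanilla LDA.
    docs: list of documents, each a list of integer word ids."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), num_topics))   # per-document topic counts
    nkw = np.zeros((num_topics, vocab_size))  # per-topic word counts
    nk = np.zeros(num_topics)                 # per-topic totals
    z = []
    for d, doc in enumerate(docs):            # random initialization
        zd = rng.integers(num_topics, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                   # remove current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional over topics for this token
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + beta * vocab_size)
                k = rng.choice(num_topics, p=p / p.sum())
                z[d][i] = k                   # add the new assignment back
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 4, 1, 0]]
doc_topic, topic_word = lda_gibbs(docs, vocab_size=5, num_topics=2, iters=100)
print(doc_topic)
```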
1512.01845
2339067149
Understanding a user's motivations provides valuable information beyond the ability to recommend items. Quite often this can be accomplished by perusing both ratings and review texts, since it is the latter where the reasoning for specific preferences is explicitly expressed. Unfortunately matrix factorization approaches to recommendation result in large, complex models that are difficult to interpret and give recommendations that are hard to clearly explain to users. In contrast, in this paper, we attack this problem through succinct additive co-clustering. We devise a novel Bayesian technique for summing co-clusterings of Poisson distributions. With this novel technique we propose a new Bayesian model for joint collaborative filtering of ratings and text reviews through a sum of simple co-clusterings. The simple structure of our model yields easily interpretable recommendations. Even with a simple, succinct structure, our model outperforms competitors in terms of predicting ratings with reviews.
Collaborative filtering is a rich field of research. In particular, factorization models that learn a bilinear model of latent factors have proven to be effective for recommendation problems @cite_27 @cite_12 . A variety of papers have adapted frequentist intuition on factorization to Bayesian models @cite_22 @cite_0 @cite_20 @cite_24 . Of particular note are Probabilistic Matrix Factorization (PMF) and Bayesian Probabilistic Matrix Factorization (BPMF) @cite_22 , which we will compare to later. Recently, ACCAMS @cite_8 took a drastically different approach from classic bilinear models. It uses an additive model of co-clusterings to approximate matrices succinctly. While the resulting model is simple and small, its prediction quality is as good as more complex factorization models.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_0", "@cite_24", "@cite_27", "@cite_12", "@cite_20" ], "mid": [ "2085040216", "2950904389", "2137245235", "", "2054141820", "", "2132708887" ], "abstract": [ "Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.", "Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Following Occam's razor principle suggests that the simple structure induced by our model better captures the latent preferences and decision making processes present in the real world than classic co-clustering or matrix factorization. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity.", "Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. 
When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7 better than the score of Netflix's own system.", "", "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.", "", "We present a probabilistic model for generating personalised recommendations of items to users of a web service. The Matchbox system makes use of content information in the form of user and item meta data in combination with collaborative filtering information from previous user behavior in order to predict the value of an item for a user. Users and items are represented by feature vectors which are mapped into a low-dimensional trait space' in which similarity is measured in terms of inner products. The model can be trained from different types of feedback in order to learn user-item preferences. Here we present three alternatives: direct observation of an absolute rating each user gives to some items, observation of a binary preference (like don't like) and observation of a set of ordinal ratings on a user-specific scale. Efficient inference is achieved by approximate message passing involving a combination of Expectation Propagation (EP) and Variational Message Passing. We also include a dynamics model which allows an item's popularity, a user's taste or a user's personal rating scale to drift over time. By using Assumed-Density Filtering (ADF) for training, the model requires only a single pass through the training data. This is an on-line learning algorithm capable of incrementally taking account of new data so the system can immediately reflect the latest user preferences. We evaluate the performance of the algorithm on the MovieLens and Netflix data sets consisting of approximately 1,000,000 and 100,000,000 ratings respectively. This demonstrates that training the model using the on-line ADF approach yields state-of-the-art performance with the option of improving performance further if computational resources are available by performing multiple EP passes over the training data." ] }
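To pin down what a "bilinear model of latent factors" means in the paragraph above, the sketch below fits ratings as the inner product of a user factor and an item factor, trained by SGD on squared error with L2 regularization (the MAP flavor of PMF). It is a sketch under assumed hyperparameters, not PMF, BPMF, or ACCAMS themselves.

```python
import numpy as np

def sgd_matrix_factorization(ratings, num_users, num_items, rank=10,
                             lr=0.01, reg=0.05, epochs=30, seed=0):
    """Bilinear latent-factor model r_ui ~ <p_u, q_i>, fit by SGD on
    squared error with L2 regularization (MAP flavor of PMF)."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((num_users, rank))
    Q = 0.1 * rng.standard_normal((num_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            pu = P[u].copy()                 # snapshot before updating
            err = r - pu @ Q[i]
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# toy usage: (user, item, rating) triples
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
P, Q = sgd_matrix_factorization(ratings, num_users=3, num_items=2, rank=4)
print(P @ Q.T)  # reconstructed rating matrix
```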
1512.01845
2339067149
Understanding a user's motivations provides valuable information beyond the ability to recommend items. Quite often this can be accomplished by perusing both ratings and review texts, since it is the latter where the reasoning for specific preferences is explicitly expressed. Unfortunately matrix factorization approaches to recommendation result in large, complex models that are difficult to interpret and give recommendations that are hard to clearly explain to users. In contrast, in this paper, we attack this problem through succinct additive co-clustering. We devise a novel Bayesian technique for summing co-clusterings of Poisson distributions. With this novel technique we propose a new Bayesian model for joint collaborative filtering of ratings and text reviews through a sum of simple co-clusterings. The simple structure of our model yields easily interpretable recommendations. Even with a simple, succinct structure, our model outperforms competitors in terms of predicting ratings with reviews.
A separate line of research in recommender systems has focused on using side information to improve prediction quality. This is particularly important when parts of the data are extremely sparse, i.e., the cold-start problem. Content-based filtering is a popular approach to alleviating this problem. Regression-based latent factor models (RLFM) @cite_10 address the cold-start problem by utilizing user and item features. Cold-start users and items are able to share statistical strength with other users and items through similarity in feature space. fLDA @cite_11 uses text associated with items and user features to regularize item and user factors, but does not make use of review text.
{ "cite_N": [ "@cite_10", "@cite_11" ], "mid": [ "2054553473", "2135505871" ], "abstract": [ "We propose a novel latent factor model to accurately predict response for large scale dyadic data in the presence of features. Our approach is based on a model that predicts response as a multiplicative function of row and column latent factors that are estimated through separate regressions on known row and column features. In fact, our model provides a single unified framework to address both cold and warm start scenarios that are commonplace in practical applications like recommender systems, online advertising, web search, etc. We provide scalable and accurate model fitting methods based on Iterated Conditional Mode and Monte Carlo EM algorithms. We show our model induces a stochastic process on the dyadic space with kernel (covariance) given by a polynomial function of features. Methods that generalize our procedure to estimate factors in an online fashion for dynamic applications are also considered. Our method is illustrated on benchmark datasets and a novel content recommendation application that arises in the context of Yahoo! Front Page. We report significant improvements over several commonly used methods on all datasets.", "We propose fLDA, a novel matrix factorization method to predict ratings in recommender system applications where a \"bag-of-words\" representation for item meta-data is natural. Such scenarios are commonplace in web applications like content recommendation, ad targeting and web search where items are articles, ads and web pages respectively. Because of data sparseness, regularization is key to good predictive accuracy. Our method works by regularizing both user and item factors simultaneously through user features and the bag of words associated with each item. Specifically, each word in an item is associated with a discrete latent factor often referred to as the topic of the word; item topics are obtained by averaging topics across all words in an item. Then, user rating on an item is modeled as user's affinity to the item's topics where user affinity to topics (user factors) and topic assignments to words in items (item factors) are learned jointly in a supervised fashion. To avoid overfitting, user and item factors are regularized through Gaussian linear regression and Latent Dirichlet Allocation (LDA) priors respectively. We show our model is accurate, interpretable and handles both cold-start and warm-start scenarios seamlessly through a single model. The efficacy of our method is illustrated on benchmark datasets and a new dataset from Yahoo! Buzz where fLDA provides superior predictive accuracy in cold-start scenarios and is comparable to state-of-the-art methods in warm-start scenarios. As a by-product, fLDA also identifies interesting topics that explains user-item interactions. Our method also generalizes a recently proposed technique called supervised LDA (sLDA) to collaborative filtering applications. While sLDA estimates item topic vectors in a supervised fashion for a single regression, fLDA incorporates multiple regressions (one for each user) in estimating the item factors." ] }
1512.01845
2339067149
Understanding a user's motivations provides valuable information beyond the ability to recommend items. Quite often this can be accomplished by perusing both ratings and review texts, since it is the latter where the reasoning for specific preferences is explicitly expressed. Unfortunately matrix factorization approaches to recommendation result in large, complex models that are difficult to interpret and give recommendations that are hard to clearly explain to users. In contrast, in this paper, we attack this problem through succinct additive co-clustering. We devise a novel Bayesian technique for summing co-clusterings of Poisson distributions. With this novel technique we propose a new Bayesian model for joint collaborative filtering of ratings and text reviews through a sum of simple co-clusterings. The simple structure of our model yields easily interpretable recommendations. Even with a simple, succinct structure, our model outperforms competitors in terms of predicting ratings with reviews.
More recently, there has been a growing line of research on using Poisson distributions in matrix factorization models @cite_29 @cite_25 @cite_13 . This line of work shows the exciting potential of Poisson distributions for understanding matrix data. However, all of these models offer limited interpretability since they, too, rely on bilinear models. Additionally, all of them rely on variational inference for learning. Our work provides the building blocks for using Poisson distributions in a wide array of additive clustering applications and is the first to learn a model of this sort through Gibbs sampling rather than variational inference.
{ "cite_N": [ "@cite_29", "@cite_13", "@cite_25" ], "mid": [ "143080133", "1984127251", "2950857600" ], "abstract": [ "We develop a Bayesian nonparametric Poisson factorization model for recommendation systems. Poisson factorization implicitly models each user’s limited budget of attention (or money) that allows consumption of only a small subset of the available items. In our Bayesian nonparametric variant, the number of latent components is theoretically unbounded and eectively estimated when computing a posterior with observed user behavior data. To approximate the posterior, we develop an ecient variational inference algorithm. It adapts the dimensionality of the latent components to the data, only requires iteration over the user item pairs that have been rated, and has computational complexity on the same order as for a parametric model with xed dimensionality. We studied our model and algorithm with large realworld data sets of user-movie preferences. Our model eases the computational burden of searching for the number of latent components and gives better predictive performance than its parametric counterpart.", "Preference-based recommendation systems have transformed how we consume media. By analyzing usage data, these methods uncover our latent preferences for items (such as articles or movies) and form recommendations based on the behavior of others with similar tastes. But traditional preference-based recommendations do not account for the social aspect of consumption, where a trusted friend might point us to an interesting item that does not match our typical preferences. In this work, we aim to bridge the gap between preference- and social-based recommendations. We develop social Poisson factorization (SPF), a probabilistic model that incorporates social network information into a traditional factorization method; SPF introduces the social aspect to algorithmic recommendation. We develop a scalable algorithm for analyzing data with SPF, and demonstrate that it outperforms competing methods on six real-world datasets; data sources include a social reader and Etsy.", "We present a Bayesian tensor factorization model for inferring latent group structures from dynamic pairwise interaction patterns. For decades, political scientists have collected and analyzed records of the form \"country @math took action @math toward country @math at time @math \"---known as dyadic events---in order to form and test theories of international relations. We represent these event data as a tensor of counts and develop Bayesian Poisson tensor factorization to infer a low-dimensional, interpretable representation of their salient patterns. We demonstrate that our model's predictive performance is better than that of standard non-negative tensor factorization methods. We also provide a comparison of our variational updates to their maximum likelihood counterparts. In doing so, we identify a better way to form point estimates of the latent factors than that typically used in Bayesian Poisson matrix factorization. Finally, we showcase our model as an exploratory analysis tool for political scientists. We show that the inferred latent factor matrices capture interpretable multilateral relations that both conform to and inform our knowledge of international affairs." ] }
1512.01845
2339067149
Understanding a user's motivations provides valuable information beyond the ability to recommend items. Quite often this can be accomplished by perusing both ratings and review texts, since it is the latter where the reasoning for specific preferences is explicitly expressed. Unfortunately matrix factorization approaches to recommendation result in large, complex models that are difficult to interpret and give recommendations that are hard to clearly explain to users. In contrast, in this paper, we attack this problem through succinct additive co-clustering. We devise a novel Bayesian technique for summing co-clusterings of Poisson distributions. With this novel technique we propose a new Bayesian model for joint collaborative filtering of ratings and text reviews through a sum of simple co-clusterings. The simple structure of our model yields easily interpretable recommendations. Even with a simple, succinct structure, our model outperforms competitors in terms of predicting ratings with reviews.
Modeling online reviews has long been a focus of the data mining, machine learning and natural language processing communities @cite_4 . Significant research has focused on understanding and finding patterns in online reviews @cite_16 . More closely related to our work, a variety of papers model aspects and sentiments of reviews @cite_23 @cite_18 @cite_2 @cite_28 . For example, @cite_28 considers hierarchical structures in aspects and sentiments. However, in these works ratings are not considered jointly.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_28", "@cite_23", "@cite_2", "@cite_16" ], "mid": [ "2168240343", "2160660844", "582033733", "2108420397", "", "2127411301" ], "abstract": [ "We propose a joint model for unsupervised induction of sentiment, aspect and discourse information and show that by incorporating a notion of latent discourse relations in the model, we improve the prediction accuracy for aspect and sentiment polarity on the sub-sentential level. We deviate from the traditional view of discourse, as we induce types of discourse relations and associated discourse cues relevant to the considered opinion analysis task; consequently, the induced discourse relations play the role of opinion and aspect shifters. The quantitative analysis that we conducted indicated that the integration of a discourse model increased the prediction accuracy results with respect to the discourse-agnostic approach and the qualitative analysis suggests that the induced representations encode a meaningful discourse structure.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "To help users quickly understand the major opinions from massive online reviews, it is important to automatically reveal the latent structure of the aspects, sentiment polarities, and the association between them. However, there is little work available to do this effectively. In this paper, we propose a hierarchical aspect sentiment model (HASM) to discover a hierarchical structure of aspect-based sentiments from unlabeled online reviews. In HASM, the whole structure is a tree. Each node itself is a two-level tree, whose root represents an aspect and the children represent the sentiment polarities associated with it. Each aspect or sentiment polarity is modeled as a distribution of words. 
To automatically extract both the structure and parameters of the tree, we use a Bayesian nonparametric model, recursive Chinese Restaurant Process (rCRP), as the prior and jointly infer the aspect-sentiment tree from the review texts. Experiments on two real datasets show that our model is comparable to two other hierarchical topic models in terms of quantitative measures of topic trees. It is also shown that our model achieves better sentence-level classification accuracy than previously proposed aspect-sentiment joint models.", "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.", "", "Vibrant online communities are in constant flux. As members join and depart, the interactional norms evolve, stimulating further changes to the membership and its social dynamics. Linguistic change --- in the sense of innovation that becomes accepted as the norm --- is essential to this dynamic process: it both facilitates individual expression and fosters the emergence of a collective identity. We propose a framework for tracking linguistic change as it happens and for understanding how specific users react to these evolving norms. By applying this framework to two large online communities we show that users follow a determined two-stage lifecycle with respect to their susceptibility to linguistic change: a linguistically innovative learning phase in which users adopt the language of the community followed by a conservative phase in which users stop changing and the evolving community norms pass them by. Building on this observation, we show how this framework can be used to detect, early in a user's career, how long she will stay active in the community. Thus, this work has practical significance for those who design and maintain online communities. It also yields new theoretical insights into the evolution of linguistic norms and the complex interplay between community-level and individual-level linguistic change." ] }
1512.01885
2266490807
Humans routinely confront the following key question which could be viewed as a probabilistic variant of the controllability problem: While faced with an uncertain environment governed by causal structures, how should they practice their autonomy by intervening on driver variables, in order to increase (or decrease) the probability of attaining their desired (or undesired) state for some target variable? In this paper, for the first time, the problem of probabilistic controllability in Causal Bayesian Networks (CBNs) is studied. More specifically, the aim of this paper is two-fold: (i) to introduce and formalize the problem of probabilistic structural controllability in CBNs, and (ii) to identify a sufficient set of driver variables for the purpose of probabilistic structural controllability of a generic CBN. We also elaborate on the nature of minimality the identified set of driver variables satisfies. In this context, the term "structural" signifies the condition wherein solely the structure of the CBN is known.
The authors in @cite_5 , drawing on the idea of structural controllability (SCT) proposed by Lin @cite_6 , aim to identify the minimal set of variables sufficient for structural controllability of a generic large-scale LTI system. In a subsequent work, the authors in @cite_4 relax the objective from structural controllability of the system as a whole to that of a particular set of desired variables called target variables. This line of thought is motivated by the understanding that in large-scale systems it may be neither attainable nor required to control the full system but, rather, merely a subset of its variables (analogous to target variables in our problem) deemed pivotal for the realization of the task at hand. In this light, @cite_4 is concerned with the very same question underlying our work, yet pursues it in radically different settings. In @cite_5 @cite_4 , both the variables and their inter-connections are deterministic in nature whereas, in our case, both are probabilistic; a point of departure which leads to a substantially different line of work, both semantically and syntactically.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_6" ], "mid": [ "2101420429", "2170723553", "" ], "abstract": [ "The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system’s entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network’s degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real systems the driver nodes tend to avoid the high-degree nodes. Control theory can be used to steer engineered and natural systems towards a desired state, but a framework to control complex self-organized systems is lacking. Can such networks be controlled? Albert-Laszlo Barabasi and colleagues tackle this question and arrive at precise mathematical answers that amount to 'yes, up to a point'. They develop analytical tools to study the controllability of an arbitrary complex directed network using both model and real systems, ranging from regulatory, neural and metabolic pathways in living organisms to food webs, cell-phone movements and social interactions. They identify the minimum set of driver nodes whose time-dependent control can guide the system's entire dynamics ( http: go.nature.com wd9Ek2 ). Surprisingly, these are not usually located at the network hubs.", "Network controllability has numerous applications in natural and technological systems. Here, develop a theoretical approach and a greedy algorithm to study target control—the ability to efficiently control a preselected subset of nodes—in complex networks.", "" ] }
1512.02062
2291802658
The ability to perform traffic differentiation is a promising feature of the current Medium Access Control (MAC) in Wireless Local Area Networks (WLANs). The Enhanced Distributed Channel Access (EDCA) protocol for WLANs proposes up to four Access Categories (AC) that can be mapped to different traffic priorities. High priority ACs are allowed to transmit more often than low priority ACs, providing a way of prioritising delay-sensitive traffic like voice calls or video streaming. Further, EDCA also considers the intricacies related to the management of multiple queues, virtual collisions and traffic differentiation. Nevertheless, EDCA falls short in efficiency when performing in dense WLAN scenarios. Its collision-prone contention mechanism degrades the overall throughput to the point of starving low priority ACs, and produces priority inversions at a high number of contenders. Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA/ECA) is a compatible MAC protocol for WLANs which is also capable of providing traffic differentiation. Contrary to EDCA, CSMA/ECA uses a contention mechanism with a deterministic backoff technique which is capable of constructing collision-free schedules for many nodes with multiple active ACs, extending the network capacity without starving low priority ACs, as experienced in EDCA. This work analyses traffic differentiation with CSMA/ECA by describing the mechanisms used to construct collision-free schedules with multiple queues. Additionally, it evaluates the performance under different traffic conditions and a growing number of contenders. (arXiv's abstract field is not large enough for the paper's abstract, please download the paper for the complete abstract.)
Great efforts have been directed towards parameter adjustments in EDCA, mostly to ensure QoS for high priority ACs while maintaining low delay and losses @cite_17 @cite_16 @cite_18 . For example, by dynamically adjusting the AIFS for each AC it is possible to maintain traffic differentiation while avoiding the starvation of low priority ACs. This is especially relevant in WLANs where all ACs are required to achieve effective throughput, as in @cite_16 . Further, by randomising the AIFS values it is possible to increase the channel utilisation in EDCA @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_17" ], "mid": [ "", "2054000789", "1967383368" ], "abstract": [ "", "In this paper, we propose an adaptive tuning algorithm for wireless medical applications to in order to meet medical-grade quality of service (QoS). The conventional IEEE 802.11e protocol is designed to satisfy a certain level of QoS, which is insufficient for medical-grade QoS. In the proposed algorithm, we adaptively tune the AIFS value of the IEEE 802.11e protocol for enhancing the overall network performance while providing the required medical-grade QoS. In our scenario, as representative applications, we consider medical alarms, ECG transmission, and TCP file transfer. We categorize these three applications according to their urgency. Then, we compare the performance of the proposed scheme with the conventional IEEE 802.11e protocol.", "Supporting real-time and interactive traffic in addition to traditional data traffic with a best-effort nature represents a constantly rising need in any kind of telecommunications environment. The IEEE 802.11 based WLAN (Wireless Local Area Network) environment does not represent an exception. This is why at different protocol layers, and primarily at the MAC layer, many efforts are being put by both the research community and the standardization bodies to design effective mechanisms for user QoS (Quality of Service) differentiation. Although early results are coming into sight, such as, for example, the IEEE 802.11e standard release, still a thorough research activity is required. Aim of the present paper is to contribute to the cited research issue by proposing an improvement to the \"static\" traffic prioritisation mechanism foreseen by the IEEE 802.11e MAC (Medium Access Control) protocol. This latter shows a twofold drawback. First, there is no certainty that QoS requirements relevant to a given application are always fulfilled by the \"statically\" associated priority. Second, resource requests of the applications are not adapted to the (usually highly) variable traffic conditions of a distributed WLAN environment. The algorithm we propose is specifically tailored to \"dynamically\" assign 802.11e MAC priorities, depending on both application QoS requirements and observed network congestion conditions. It is carefully designed, implemented into a system simulation tool, and its highly effective behaviour assessed under variable traffic and system conditions." ] }
1512.02062
2291802658
The ability to perform traffic differentiation is a promising feature of the current Medium Access Control (MAC) in Wireless Local Area Networks (WLANs). The Enhanced Distributed Channel Access (EDCA) protocol for WLANs proposes up to four Access Categories (AC) that can be mapped to different traffic priorities. High priority ACs are allowed to transmit more often than low priority ACs, providing a way of prioritising delay-sensitive traffic like voice calls or video streaming. Further, EDCA also considers the intricacies related to the management of multiple queues, virtual collisions and traffic differentiation. Nevertheless, EDCA falls short in efficiency when performing in dense WLAN scenarios. Its collision-prone contention mechanism degrades the overall throughput to the point of starving low priority ACs, and produces priority inversions at a high number of contenders. Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA/ECA) is a compatible MAC protocol for WLANs which is also capable of providing traffic differentiation. Contrary to EDCA, CSMA/ECA uses a contention mechanism with a deterministic backoff technique which is capable of constructing collision-free schedules for many nodes with multiple active ACs, extending the network capacity without starving low priority ACs, as experienced in EDCA. This work analyses traffic differentiation with CSMA/ECA by describing the mechanisms used to construct collision-free schedules with multiple queues. Additionally, it evaluates the performance under different traffic conditions and a growing number of contenders. (arXiv's abstract field is not large enough for the paper's abstract, please download the paper for the complete abstract.)
Time slot reservation techniques are known to provide higher throughput and QoS in TDMA schemes like LTE @cite_5 . By applying similar concepts (like organising transmissions according to a predefined schedule) to a completely decentralised CSMA, it is possible to reach collision-free operation. Using a Semi-Random Backoff (SRB) @cite_8 after successful transmissions, it is possible to achieve collision-free operation for a high number of contenders. Other proposals, like ZC-MAC @cite_23 and L-MAC @cite_20 , define virtual cycles known to all users, in which stations allocate transmission slots. The selection of the same slot in future cycles is conditioned on the failed transmissions observed during the past cycle. These are examples of decentralised MAC protocols for WLANs that use the concept of slot reservation to provide collision-free operation.
{ "cite_N": [ "@cite_5", "@cite_20", "@cite_23", "@cite_8" ], "mid": [ "1506432011", "", "1484545291", "2143747785" ], "abstract": [ "The use of the unlicensed spectrum by LTE networks (LTE-U or LAA-LTE) is being considered by mobile operators in order to satisfy increasing traffic demands and to make better use of the licensed spectrum. However, coexistence issues arise when LTE-U coverage overlaps with other technologies currently operating in unlicensed bands, in particular WiFi. Since LTE uses a TDMA OFDMA scheduled approach, coexisting WiFi networks may face starvation if the channel is fully occupied by LTE-U transmissions. In this paper we derive a novel proportional fair allocation scheme that ensures fair coexistence between LTE-U and WiFi. Importantly, we find that the proportional fair allocation is qualitatively different from previously consideredWiFi-only settings and that since the resulting allocation requires only quite limited knowledge of network parameters it is potentially easy to implement in practice, without the need for message-passing between heterogeneous networks.", "", "This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load.", "This paper proposes a semi-random backoff (SRB) method that enables resource reservation in contention-based wireless LANs. The proposed SRB is fundamentally different from traditional random backoff methods because it provides an easy migration path from random backoffs to deterministic slot assignments. The central idea of the SRB is for the wireless station to set its backoff counter to a deterministic value upon a successful packet transmission. This deterministic value will allow the station to reuse the time-slot in consecutive backoff cycles. When multiple stations with successful packet transmissions reuse their respective time-slots, the collision probability is reduced, and the channel achieves the equivalence of resource reservation. In case of a failed packet transmission, a station will revert to the standard random backoff method and probe for a new available time-slot. 
The proposed SRB method can be readily applied to both 802.11 DCF and 802.11e EDCA networks with minimum modification to the existing DCF EDCA implementations. Theoretical analysis and simulation results validate the superior performance of the SRB for small-scale and heavily loaded wireless LANs. When combined with an adaptive mechanism and a persistent backoff process, SRB can also be effective for large-scale and lightly loaded wireless networks." ] }
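A toy slotted-channel simulation can make the deterministic-backoff idea above visible: once every station that succeeds keeps reusing its slot, repeated successes lock the contenders into a collision-free schedule. This is only a sketch with simplifying assumptions (a fixed contention window, a post-success counter of cw/2 - 1, no exponential backoff on collisions), not SRB or CSMA/ECA themselves.

```python
import random

def simulate(num_stations=8, cw=32, slots=50000, deterministic=True, seed=0):
    """Toy slotted model contrasting EDCA-like random backoff with a
    CSMA/ECA-like deterministic backoff after each successful slot."""
    rng = random.Random(seed)
    det = cw // 2 - 1  # assumed deterministic post-success value
    counters = [rng.randrange(cw) for _ in range(num_stations)]
    successes = collisions = 0
    for _ in range(slots):
        tx = [i for i, c in enumerate(counters) if c == 0]
        for i, c in enumerate(counters):  # everyone else counts down
            if c > 0:
                counters[i] = c - 1
        if len(tx) == 1:                  # exactly one transmitter: success
            successes += 1
            counters[tx[0]] = det if deterministic else rng.randrange(cw)
        elif tx:                          # several transmitters: collision
            collisions += 1
            for i in tx:
                counters[i] = rng.randrange(cw)
    return successes, collisions

for mode in (False, True):
    s, c = simulate(deterministic=mode)
    print("deterministic" if mode else "random", "->", s, "successes,", c, "collisions")
```

With the deterministic variant, colliding stations re-randomize until their slots occupy distinct residues of the cycle, after which collisions stop entirely; the random variant keeps a roughly constant collision rate.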
1512.01775
2193517157
While the problem of approximate nearest neighbor search has been well-studied for Euclidean space and @math , few non-trivial algorithms are known for @math when ( @math ). In this paper, we revisit this fundamental problem and present approximate nearest-neighbor search algorithms which give the first non-trivial approximation factor guarantees in this setting.
For Euclidean space, Chan @cite_15 gave a deterministic construction that yields an @math -ANN in time @math , using polynomial space (see also @cite_7 ). For @math , Neylon @cite_16 gave an @math -ANN structure that runs in @math time and uses @math space.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_7" ], "mid": [ "2024409132", "2018741301", "1998512964" ], "abstract": [ "", "We present a simple and practical algorithm for the c--approximate near neighbor problem (c--NN): given n points P ⊂ Rd and radius R, build a data structure which, given q ∈ Rd, can with probability 1 -- Δ return a point p e P with dist(p, q) ≤ cR if there is any p* e P with dist(p*, q) ≤ R. For c = d + 1, our algorithm deterministically (Δ = 0) preprocesses in time O(nd log d), space O(dn), and answers queries in expected time O(d2); this is the first known algorithm to deterministically guarantee an O(d)---NN solution in constant time with respect to n for all lp metrics. A probabilistic version empirically achieves useful c values (c The key to the algorithm is a locality-sensitive hash: a mapping h: Rd → U with the property that h(x) = h(y) is much more likely for nearby x, y. We introduce a somewhat regular simplex which tessellates Rd, and efficiently hash each point in any simplex of this tessellation to all d + 1 corners; any points in neighboring cells will be hashed to a shared corner and noticed as nearby points. This method is completely independent of dimension reduction, so that additional space and time savings are available by first reducing all input vectors.", "Abstract Given n points in d dimensions, we show how to construct a data structure of space O( d 2 d n ) that approximately answers closest-point queries in time O( d 2 d log n ). The returned point is at most O (d> 1 2 ) times further from the query point than the true closest point. We also show how to construct a data structure of space O( dn log m ), that—with high probability—can answer a sequence of m closest-point queries in time O( d log n log m ) per query, with approximation ratio O (d 3 2 ) . Our data structures are based on quadtrees." ] }
1512.01775
2193517157
While the problem of approximate nearest neighbor search has been well-studied for Euclidean space and @math , few non-trivial algorithms are known for @math when ( @math ). In this paper, we revisit this fundamental problem and present approximate nearest-neighbor search algorithms which give the first non-trivial approximation factor guarantees in this setting.
Stoev et al. @cite_3 and Andoni @cite_35 both used random variables with max-stability or min-stability to estimate the @math -th moment of a vector, or of the difference between two vectors. Faragó @cite_8 presented an elegant oblivious embedding from @math to @math with arbitrarily low distortion, and the existence of an embedding with these properties had been alluded to in Indyk's survey @cite_21 . We observe that one may utilize Faragó's embedding to map @math into @math and then compute the nearest neighbor in the embedded space using Indyk's @math structure. This in fact yields an @math -ANN, but in time and space exponential in @math , so approximate Voronoi diagrams are better for this problem.
{ "cite_N": [ "@cite_35", "@cite_21", "@cite_3", "@cite_8" ], "mid": [ "", "1530239281", "2046923124", "2143834173" ], "abstract": [ "", "The author surveys algorithmic results obtained using low-distortion embeddings of metric spaces into (mostly) normed spaces. He shows that low-distortion embeddings provide a powerful and versatile toolkit for solving algorithmic problems. Their fundamental nature makes them applicable in a variety of diverse settings, while their relation to rich mathematical fields (e.g., functional analysis) ensures availability of tools for their construction.", "Consider a set of signals fs : 1, ..., N → [0, ..., M] appearing as a stream of tuples (i, fs (i)) in arbitrary order of i and s. We would like to devise one pass approximate algorithms for estimating various functionals on the dominant signal fmax, defined as fmax = (i, maxs fs (i)), ∀i . For example, the \"worst case influence\" which is the F1-norm of the dominant signal (Cormode and Muthukrishnan, 2003), general Fp-norms, and special types of distances between dominant signals. The only known previous work in this setting are the algorithms of Cormode and Muthukrishnan and Pavan and Tirtha-pura (2005) which can only estimate the F1-norm over fmax-No previous work addressed more general norms or distance estimation. In this work, we use a novel sketch, based on the properties of max-stable distributions, for these more general problems. The max-stable sketch is a significant improvement over previous alternatives in terms of simplicity of implementation, space requirements, and insertion cost, while providing similar approximation guarantees. To assert our statements, we also conduct an experimental evaluation using real datasets.", "We investigate the possibility of embedding an n-point metric space into a constant dimensional vector space with the maximum norm, such that the embedding is almost isometric, that is, the distortion of distances is kept arbitrarily close to 1. When the source metric is generated by any fixed norm on a finite dimensional vector space, we prove that this embedding is always possible, such that the dimension of the target space remains constant, independent of n. While this possibility has been known in the folklore, we present the first fully detailed proof, which, in addition, is significantly simpler and more transparent, then what was available before. Furthermore, our embedding can be computed in deterministic linear time in n, given oracle access to the norm." ] }
1512.01775
2193517157
While the problem of approximate nearest neighbor search has been well-studied for Euclidean space and @math , few non-trivial algorithms are known for @math when ( @math ). In this paper, we revisit this fundamental problem and present approximate nearest-neighbor search algorithms which give the first non-trivial approximation factor guarantees in this setting.
For ANN in general metric spaces, Krauthgamer and Lee @cite_32 showed that the doubling dimension can be used to control the search runtime: For a metric point set @math , they constructed a polynomial-size structure which finds an @math -ANN in time @math , where @math is the aspect ratio of @math , the ratio between the maximum and minimum inter-point distances in @math . The space requirements of this data structure were later improved by Beygelzimer et al. @cite_20 . Har-Peled and Mendel @cite_6 and Cole and Gottlieb @cite_29 showed how to replace the dependence on @math with dependence on @math . Other related research on nearest neighbor searches has focused on various assumptions concerning the metric space. Clarkson @cite_36 made assumptions concerning the probability distribution from which the database and query points are drawn, and developed two randomized data structures for exact nearest neighbor search. Karger and Ruhl @cite_34 introduced the notion of growth-constrained metrics (elsewhere called the KR-dimension), which is a weaker notion than that of the doubling dimension. A survey of proximity searches in metric spaces appeared in @cite_25 .
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_32", "@cite_6", "@cite_34", "@cite_25", "@cite_20" ], "mid": [ "2031212151", "2144397897", "2034188144", "2045134120", "2169036209", "", "2133296809" ], "abstract": [ "Given a set S of n sites (points), and a distance measure d , the nearestneighborsearching problem is to build a data structure so that given a query point q , the site nearest to q can be found quickly. This paper gives data structures for this problem when the sites and queries are in a metric space. One data structure, D(S) , uses a divide-and-conquer recursion. The other data structure, M(S,Q) , is somewhat like a skiplist. Both are simple and implementable. The data structures are analyzed when the metric space obeys a certain sphere-packing bound, and when the sites and query points are random and have distributions with an exchangeability property. This property implies, for example, that query point q is a random element of ( S q ) . Under these conditions, the preprocessing and space bounds for the algorithms are close to linear in n . They depend also on the sphere-packing bound, and on the logarithm of the distanceratio ( (S) ) of S , the ratio of the distance between the farthest pair of points in S to the distance between the closest pair. The data structure M(S,Q) requires as input data an additional set Q , taken to be representative of the query points. The resource bounds of M(S,Q) have a dependence on the distance ratio of S ( )Q . While M(S,Q) can return wrong answers, its failure probability can be bounded, and is decreasing in a parameter K . Here K≤ |Q| n is chosen when building M(S,Q) . The expected query time for M(S,Q) is O(Klog n)log ( (S Q) ) , and the resource bounds increase linearly in K . The data structure D(S) has expected O( log n) O(1) query time, for fixed distance ratio. The preprocessing algorithm for M(S,Q) can be used to solve the all nearest neighbor problem for S in O(n(log n) 2 (log ϒ(S)) 2 ) expected time.", "We present a new data structure that facilitates approximate nearest neighbor searches on a dynamic set of points in a metric space that has a bounded doubling dimension. Our data structure has linear size and supports insertions and deletions in O(log n) time, and finds a (1+e)-approximate nearest neighbor in time O(log n) + (1 e)O(1). The search and update times hide multiplicative factors that depend on the doubling dimension; the space does not. These performance times are independent of the aspect ratio (or spread) of the points.", "We present a simple deterministic data structure for maintaining a set S of points in a general metric space, while supporting proximity search (nearest neighbor and range queries) and updates to S (insertions and deletions). Our data structure consists of a sequence of progressively finer e-nets of S, with pointers that allow us to navigate easily from one scale to the next.We analyze the worst-case complexity of this data structure in terms of the \"abstract dimensionality\" of the metric S. Our data structure is extremely efficient for metrics of bounded dimension and is essentially optimal in a certain model of distance computation. Finally, as a special case, our approach improves over one recently devised by Karger and Ruhl [KR02].", "We present a near linear time algorithm for constructing hierarchical nets in finite metric spaces with constant doubling dimension. 
This data-structure is then applied to obtain improved algorithms for the following problems: approximate nearest neighbor search, well-separated pair decomposition, spanner construction, compact representation scheme, doubling measure, and computation of the (approximate) Lipschitz constant of a function. In all cases, the running (preprocessing) time is near linear and the space being used is linear.", "Most research on nearest neighbor algorithms in the literature has been focused on the Euclidean case. In many practical search problems however, the underlying metric is non-Euclidean. Nearest neighbor algorithms for general metric spaces are quite weak, which motivates a search for other classes of metric spaces that can be tractably searched. In this paper, we develop an efficient dynamic data structure for nearest neighbor queries in growth-constrained metrics. These metrics satisfy the property that for any point q and number r the ratio between numbers of points in balls of radius 2r and r is bounded by a constant. Spaces of this kind may occur in networking applications, such as the Internet or Peer-to-peer networks, and vector quantization applications, where feature vectors fall into low-dimensional manifolds within high-dimensional vector spaces.", "", "We present a tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points). The data structure requires O(n) space regardless of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant c, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in O(c^6 n log n) time. Furthermore, nearest neighbor queries require time only logarithmic in n, in particular O(c^12 log n) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets." ] }
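The primitive underlying navigating nets, the Har-Peled–Mendel hierarchy, and cover trees is the r-net: a subset that covers every point within distance r while keeping net points pairwise more than r apart. A greedy construction is sketched below; this is an illustrative quadratic-time version, unlike the near-linear constructions in the cited works.

```python
import numpy as np

def greedy_r_net(points, r):
    """Return indices N such that (i) every point is within distance r of
    some net point and (ii) net points are pairwise more than r apart.
    Stacking such nets at radii r, r/2, r/4, ... yields the hierarchies
    used by navigating nets and cover trees."""
    net = []
    for i, x in enumerate(points):
        if all(np.linalg.norm(x - points[j]) > r for j in net):
            net.append(i)
    return net

pts = np.random.default_rng(2).random((500, 2))
print(len(greedy_r_net(pts, 0.1)))
```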
1512.02059
2949409172
We consider the problem of precise range estimation between pairs of moving vehicles using periodic broadcast messages. The transmitter and receivers are not time synchronized and one needs to explicitly account for the clock offset and clock drift in addition to the vehicle motion to obtain accurate range estimates. We develop a range estimation algorithm based on a local polynomial smoothing of the vehicle motion and experimentally verify that the performance is close to that obtained using unicast round-trip time ranging. This broadcast approach is of interest particularly in the context of dedicated short-range communications (DSRC) wherein periodic broadcast safety messages are exchanged between vehicles. We propose to exploit these broadcast messages to perform ranging. Our scheme requires additional timestamp information to be transmitted as part of the DSRC messages, and we develop a novel timestamp compression algorithm to minimize the resulting overhead. We validate our proposed algorithm on experimental data and show that it is able to achieve sub-meter ranging accuracies in vehicular scenarios.
Range estimation has been a topic of research interest for several decades, ranging from classical RADAR systems @cite_21 and GPS @cite_12 to more recent technologies such as WiFi @cite_11 , Bluetooth @cite_7 , and ultra-wideband @cite_1 , to name a few. There is still active research in the community to develop inexpensive and precise ranging technologies that work in diverse environments. These are enabling applications in the vehicular space @cite_17 @cite_16 @cite_4 @cite_19 , sensor networks @cite_13 @cite_15 @cite_20 , and robotics @cite_3 @cite_24 .
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_21", "@cite_1", "@cite_17", "@cite_20", "@cite_3", "@cite_24", "@cite_19", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "1989137771", "2135021336", "", "2133085864", "2154997940", "2137536023", "2936405433", "2162234765", "2044006304", "", "1255049642", "2131152141", "1979154650", "2101764391" ], "abstract": [ "Global Navigation Satellite Systems (GNSS) can be used for navigation purposes in vehicular environments. However, the limited accuracy of GNSS makes it unsuitable for applications such as vehicle collision avoidance. Improving the positioning accuracy in vehicular networks, Cooperative Positioning (CP) algorithms have emerged. CP algorithms are based on data communication among vehicles and estimation of the distance between the nodes of the network. Among the variety of radio ranging techniques, Received Signal Strength (RSS) is very popular due to its simplicity and lower cost compared to other methods like Time of Arrival (TOA), and Time Difference of Arrival (TDOA). The main drawback of RSS- based ranging is its inaccuracy, which mostly originates from the uncertainty of the path loss exponent. Without knowing the environment path loss exponent, which is a time-varying parameter in the mobile networks, RSS is effectively useless for distance estimation. There are many approaches and techniques proposed in the literature for dynamic estimation of the path loss exponent within a certain environment. Most of these methods are not functional for mobile applications or their efficiency decreases dramatically with increasing mobility of the nodes. In this paper, we propose a method for dynamic estimation of the path loss exponent and distance based on the Doppler Effect and RSS. Since this method is fundamentally based on the Doppler Effect, it can be implemented within networks with mobile nodes. The higher the mobility of the nodes, the better performance of the proposed technique. This contribution is important because vehicles will be equipped with Dedicated Short Range Communication (DSRC) in the near future.", "We provide an elaborate discussion on Bluetooth signal parameters with respect to localization, whereby we collectively designate all types of Bluetooth specification parameters that are related to signal strength - such as RSSI, Link Quality, Received and Transmit Power Level - as Bluetooth signal, parameters. According to our analysis and experimental results, \"RSSI\" and \"Transmit Power Level\" turn out to be poor candidates for localization, while \"Link Quality\" has its limitations. On the other hand, \"Received Power Level\" correlates nicely with distance, which makes it the most desirable Bluetooth signal parameter to be used in location systems. We contend that it is vital to choose the appropriate signal parameter in Bluetooth location systems, and we expect our work to provide useful pointers in any future design of such systems. Existing systems can also benefit by adopting the appropriate Bluetooth signal parameter in their systems, and thereby, improve their location accuracy.", "", "A time-of-arrival (ToA)-based ranging scheme using an ultra-wideband (UWB) radio link is proposed. This ranging scheme implements a search algorithm for the detection of a direct path signal in the presence of dense multipath, utilizing generalized maximum-likelihood (GML) estimation. 
Models for critical parameters in the algorithm are based on statistical analysis of propagation data and the algorithm is tested on another independent set of propagation measurements. The proposed UWB ranging system uses a correlator and a parallel sampler with a high-speed measurement capability in each transceiver to accomplish two-way ranging between them in the absence of a common clock.", "We propose a distributed algorithm that uses inter-vehicle distance estimates, made using a radio-based ranging technology, to localize a vehicle among its neighbours. Given that the inter-vehicle distance estimates contain noise, our algorithm reduces the residuals of the Euclidean distance between the vehicles and their measured distances, allowing it to accurately estimate the position of a vehicle within a cluster. In this paper, we show that our proposed algorithm outperforms previously proposed algorithms and present its performance in a simulated vehicular environment.", "In this paper, we describe a global distributed solution that enables the simultaneous performance of time synchronization and positioning in ultra-wideband (UWB) ad hoc networks. On the one hand, the proposed synchronization scheme basically relies on cooperative two-way-ranging time-of-arrival transactions and a diffusion algorithm that ensures the convergence of clock parameters to average reference values in each node. Although the described solution is generic at first sight, its sensitivity to time-of-arrival accuracy imposes the choice of an impulse-radio ultra-wideband physical layer in the very context. On the other hand, a distributed algorithm coupled with this synchronization scheme mitigates the impact of non-line-of-sight ranging errors on positioning accuracy without any additional protocol hook. More particularly, the realistic UWB ranging error models we use take into account UWB channel effects, as well as detection noises and relative clock drifts. Then, it is demonstrated that a cooperative and distributed maximization of the log-likelihood of range estimates can reduce the uncertainty on estimated positions in comparison with classical distributed weighted least squares approaches. Finally, the proposed distributed maximum log-likelihood algorithm proves to preserve a reasonable level of complexity in each node by approximating asynchronously the positive gradient direction of the log-likelihood function. For both distributed synchronization and positioning algorithms, simulation results are provided to illustrate the relevance of such a solution.", "There has been increased research interest in systems composed of multiple autonomous mobile robots exhibiting cooperative behavior. Groups of mobile robots are constructed, with an aim to studying such issues as group architecture, resource conflict, origin of cooperation, learning, and geometric problems. As yet, few applications of cooperative robotics have been reported, and supporting theory is still in its formative stages. In this paper, we give a critical survey of existing works and discuss open problems in this field, emphasizing the various theoretical issues that arise in the study of cooperative robotics. We describe the intellectual heritages that have guided early research, as well as possible additions to the set of existing motivations.", "A mobile robot we have developed is equipped with sensors to measure range to landmarks and can simultaneously localize itself as well as locate the landmarks. 
This modality is useful in those cases where environmental conditions preclude measurement of bearing (typically done optically) to landmarks. Here we extend the paradigm to consider the case where the landmarks (nodes of a sensor network) are able to measure range to each other. We show how the two capabilities are complementary in being able to achieve a map of the landmarks and to provide localization for the moving robot. We present recent results with experiments on a robot operating in a randomly arranged network of nodes that can communicate via radio and range to each other using sonar. We find that incorporation of inter-node measurements helps reduce drift in positioning as well as leads to faster convergence of the map of the nodes. We find that addition of a mobile node makes the SLAM feasible in a sparsely connected network of nodes", "A new kind of ad hoc network is hitting the streets: Vehicular Ad Hoc Networks (VANets). In these networks, vehicles communicate with each other and possibly with a roadside infrastructure to provide a long list of applications varying from transit safety to driver assistance and Internet access. In these networks, knowledge of the real-time position of nodes is an assumption made by most protocols, algorithms, and applications. This is a very reasonable assumption, since GPS receivers can be installed easily in vehicles, a number of which already comes with this technology. But as VANets advance into critical areas and become more dependent on localization systems, GPS is starting to show some undesired problems such as not always being available or not being robust enough for some applications. For this reason, a number of other localization techniques such as Dead Reckoning, Cellular Localization, and Image Video Localization have been used in VANets to overcome GPS limitations. A common procedure in all these cases is to use Data Fusion techniques to compute the accurate position of vehicles, creating a new paradigm for localization in which several known localization techniques are combined into a single solution that is more robust and precise than the individual approaches. In this paper, we further discuss this subject by studying and analyzing the localization requirements of the main VANet applications. We then survey each of the localization techniques that can be used to localize vehicles and, finally, examine how these localization techniques can be combined using Data Fusion techniques to provide the robust localization system required by most critical safety applications in VANets.", "", "Vehicular network with communication among vehicles and between roadside units and vehicles is a recently emerged field of challenge for researchers. Dedicated Short Range Communication (DSRC) is the nominated communication channel specifically designed for this network. Using DSRC, besides sharing information among vehicles, the distances between the nodes of the network can also be estimated for their positioning solutions. In the literature, many algorithms and strategies are presented for radio ranging with less emphasis on constraints and difficulties of distance estimation in vehicular networks. In this paper, different methods of radio ranging like Received Signal Strength (RSS), Time of Arrival (TOA), and Time Difference of Arrival (TDOA) for distance estimation are introduced with emphasis on some important aspects of these methods which should be taken into account by researchers and engineers. 
Considering some important concerns related to ranging and positioning with radio signals, some special aspects of each method are presented and discussed. Finally, with regard to the limitations imposed by vehicular networks and DSRC, the preferred ranging approach is proposed.", "Many sensor network applications require location awareness, but it is often too expensive to include a GPS receiver in a sensor network node. Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location and protocols whereby other nodes estimate their location from the messages they receive. Several such localization techniques have been proposed, but none of them consider mobile nodes and seeds. Although mobility would appear to make localization more difficult, in this paper we introduce the sequential Monte Carlo Localization method and argue that it can exploit mobility to improve the accuracy and precision of localization. Our approach does not require additional hardware on the nodes and works even when the movement of seeds and nodes is uncontrollable. We analyze the properties of our technique and report experimental results from simulations. Our scheme outperforms the best known static localization schemes under a wide range of conditions.", "The Global Positioning System (GPS) is a satellite-based navigation and time transfer system developed by the U.S. Department of Defense. It serves marine, airborne, and terrestrial users, both military and civilian. Specifically, GPS includes the Standard Positioning Service (SPS) which provides civilian users with 100 meter accuracy, and it serves military users with the Precise Positioning Service (PPS) which provides 20-m accuracy. Both of these services are available worldwide with no requirement for a local reference station. In contrast, differential operation of GPS provides 2- to 10-m accuracy to users within 1000 km of a fixed GPS reference receiver. Finally, carrier phase comparisons can be used to provide centimeter accuracy to users within 10 km and potentially within 100 km of a reference receiver. This advanced tutorial will describe the GPS signals, the various measurements made by the GPS receivers, and estimate the achievable accuracies. It will not dwell on those aspects of GPS which are well known to those skilled in the radio communications art, such as spread-spectrum or code division multiple access. Rather, it will focus on topics which are more unique to radio navigation or GPS. These include code-carrier divergence, codeless tracking, carrier aiding, and narrow correlator spacing.", "Recently, a few efficient timing synchronization protocols for wireless sensor networks (WSNs) have been proposed with the goal of maximizing the accuracy and minimizing the power utilization. This paper proposes novel clock skew estimators assuming different delay environments to achieve energy-efficient network-wide synchronization for WSNs. The proposed clock skew correction mechanism significantly increases the re-synchronization period, which is a critical factor in reducing the overall power consumption. The proposed synchronization scheme can be applied to the conventional protocols without additional overheads. Moreover, this paper derives the Cramer-Rao lower bounds and the maximum likelihood estimators under different delay models and assumptions. These analytical metrics serve as good benchmarks for the thus far reported experimental results" ] }
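As a concrete reference point for the ranging schemes surveyed above, classical two-way (round-trip) ranging needs only four timestamps and cancels any constant clock offset between two unsynchronized devices; this is the unicast baseline against which the broadcast scheme of this paper is compared. A minimal worked example (my own illustration; clock drift over the exchange is ignored):

```python
C = 299_792_458.0  # speed of light in m/s

def two_way_range(t1, t2, t3, t4):
    """Round-trip ranging from four timestamps:
    t1 = request sent (initiator clock),  t2 = request received (responder),
    t3 = reply sent (responder clock),    t4 = reply received (initiator).
    tof = ((t4 - t1) - (t3 - t2)) / 2 cancels any constant offset between
    the two clocks, since the offset enters t2 and t3 identically."""
    tof = ((t4 - t1) - (t3 - t2)) / 2.0
    return C * tof

# Nodes 30 m apart; responder clock ahead by 5 ms; 0.1 ms turnaround time.
tof, off, turn = 30.0 / C, 5e-3, 1e-4
print(two_way_range(0.0, tof + off, tof + off + turn, 2 * tof + turn))  # -> 30.0
```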
1512.02059
2949409172
We consider the problem of precise range estimation between pairs of moving vehicles using periodic broadcast messages. The transmitter and receivers are not time synchronized and one needs to explicitly account for the clock offset and clock drift in addition to the vehicle motion to obtain accurate range estimates. We develop a range estimation algorithm based on a local polynomial smoothing of the vehicle motion and experimentally verify that the performance is close to that obtained using unicast round-trip time ranging. This broadcast approach is of interest particularly in the context of dedicated short-range communications (DSRC) wherein periodic broadcast safety messages are exchanged between vehicles. We propose to exploit these broadcast messages to perform ranging. Our scheme requires additional timestamp information to be transmitted as part of the DSRC messages, and we develop a novel timestamp compression algorithm to minimize the resulting overhead. We validate our proposed algorithm on experimental data and show that it is able to achieve sub-meter ranging accuracies in vehicular scenarios.
RTT-based systems are popular and well studied in various contexts. The IEEE 802.11mc task group defines a Fine Timing Measurement (FTM) protocol @cite_11 for RTT ranging in the WiFi signal spectrum. As described before, this eliminates the need for clock synchronization, but is inherently unicast in nature. There are quite a few works in the literature that consider the problem of range and location estimation in the presence of unknown clock offsets @cite_18 @cite_10 @cite_6 . Most of these are in the context of location estimation, wherein the location of a node or a set of nodes needs to be estimated from ranging measurements. These works explore the joint estimation of locations and clock offsets. The problem of joint estimation of clock parameters and distances between pairs of mobile nodes is considered in @cite_5 , which makes use of the local smoothness of the node trajectories to solve for the unknown variables.
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2160625863", "2087834451", "2056445701", "", "2101764391" ], "abstract": [ "Time-based localization approaches attract a lot of interest due to their high accuracy and potentially low cost for wireless sensor networks (WSNs). However, time-based localization is tightly coupled with clock synchronization. Thus, the reliability of timestamps in time-based localization becomes an important yet challenging task to deal with. In this paper, we propose robust time-based localization strategies to locate a target node with the help of anchors (nodes with known positions) in asynchronous networks. Two kinds of asynchronous networks are considered: one only with clock offsets, labeled quasi-synchronous networks, whereas the other with not only clock offsets but also clock skews, labeled fully asynchronous networks. A novel ranging protocol is developed for both networks, namely asymmetric trip ranging (ATR), to reduce the communication load and explore the broadcast property of WSNs. Regardless of the reliability of the timestamp report from the target node, closed-form least-squares (LS) estimators are derived to accurately estimate the target node position. As a result, we counter the uncertainties caused by the target node by ignoring the timestamps from this node. Furthermore, in order to simplify the estimator in fully asynchronous networks, localization and synchronization are decoupled. A simple yet efficient method is proposed to first Calibrate the Clock Skews of the anchors, and then Estimate the Node Position (CCS-ENP). Finally, Cramer-Rao bounds (CRBs) and simulation results corroborate the efficiency of our localization schemes.", "An asynchronous position measurement system is proposed for indoor localization in this paper. The demonstrated system consists of a UWB transmitter and several energy detection receivers whose positions are known. The position measurement process starts with the locator emitting a UWB pulse. Upon arrival, the pulse is amplified and retransmitted by the target to be located. Signals from both the locator and the target are captured by the receivers. No synchronization mechanism is implemented. Instead, the proposed system measures the differential TOA between the direct coupling signal of the locator and the target signal. Together with the knowledge of the locator transmitter and receiver positions, the absolute range that the pulse travels can be calculated. The sum of transmitter-target range and target-receiver range defines an ellipse and the target resides on the intersections of several such ellipses. It is shown via the Cramer-Rao lower bound that the absolute-range-based elliptical localization is potentially more accurate than the relative-range-based hyperbolic localization. Our proposed system is able to achieve positioning error bound comparable to synchronous absolute-range-based localization systems while eliminating the cost of synchronization.", "Synchronization and localization are critical challenges for the coherent functioning of a wireless network, which are conventionally solved independently. Recently, various estimators have been proposed for pairwise synchronization between immobile nodes, based on time stamp exchanges via two-way communication. 
In this paper, we consider a network of mobile nodes for which a novel joint time-range model is presented, treating both unsynchronized clocks and the pairwise distances as polynomial functions of true time. For a pair of nodes, a least squares solution is proposed for estimating the pairwise range parameters between the nodes, in addition to estimating the clock offsets and clock skews. Extending these pairwise solutions to network-wide ranging and clock synchronization, we present a central data fusion based global least squares algorithm. A unique solution is nonexistent without a constraint on the cost function, e.g., a clock reference node. Ergo, a constrained framework is proposed and a new Constrained Cramer–Rao Bound (CCRB) is derived for the joint time-range model. In addition, to alleviate the need for a single clock reference, various clock constraints are presented and their benefits are investigated using the proposed solutions. Simulations are conducted and the algorithms are shown to approach the theoretical limits.", "", "Recently, a few efficient timing synchronization protocols for wireless sensor networks (WSNs) have been proposed with the goal of maximizing the accuracy and minimizing the power utilization. This paper proposes novel clock skew estimators assuming different delay environments to achieve energy-efficient network-wide synchronization for WSNs. The proposed clock skew correction mechanism significantly increases the re-synchronization period, which is a critical factor in reducing the overall power consumption. The proposed synchronization scheme can be applied to the conventional protocols without additional overheads. Moreover, this paper derives the Cramer-Rao lower bounds and the maximum likelihood estimators under different delay models and assumptions. These analytical metrics serve as good benchmarks for the thus far reported experimental results" ] }
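A minimal version of the joint clock/range problem discussed above can be written down for a static pair of nodes: each one-way broadcast gives rx = (1 + skew) * tx + offset + d/c, so a least-squares line fit recovers the skew but leaves the offset and the range d/c entangled in the intercept, which is exactly why round trips or motion models are needed to separate them. A hedged numerical sketch (all numbers illustrative):

```python
import numpy as np

def fit_one_way(tx, rx):
    """Fit rx = (1 + skew) * tx + b by least squares. The slope yields the
    clock skew; b lumps together the clock offset and propagation delay d/c,
    which one-way timestamps alone cannot separate."""
    slope, b = np.polyfit(tx, rx, 1)
    return slope - 1.0, b

tx = np.linspace(0.0, 10.0, 200)                    # transmit timestamps, s
rx = (1 + 3e-6) * tx + 2e-3 + 100.0 / 3e8           # 3 ppm skew, 2 ms offset, 100 m
rx += np.random.default_rng(3).normal(0.0, 5e-9, tx.size)  # 5 ns timestamp noise
print(fit_one_way(tx, rx))  # approx (3e-6, 2e-3 + 3.3e-7)
```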
1512.01693
2195446438
A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at human and superhuman levels. Its creators at Google DeepMind called the approach Deep Q-Network (DQN). We present an extension of DQN with "soft" and "hard" attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show a level of performance superior to that of DQN. Moreover, the built-in attention mechanisms allow direct online monitoring of the training process by highlighting the regions of the game screen the agent is focusing on when making decisions.
Despite impressive results achieved by Google DeepMind's intelligent agent, there are a number of elements to be improved in the existing algorithm. In particular, Hausknecht and Stone @cite_14 pointed out that in practice, DQN decides on the next optimal action based on the visual information corresponding to the last four game states encountered by the agent. Therefore, the algorithm cannot master those games that require a player to remember events more distant than four screens in the past. It is for this reason that Hausknecht and Stone proposed the Deep Recurrent Q-Network (DRQN), a combination of LSTM and DQN in which (i) the fully connected layer in the latter is replaced with an LSTM one, and (ii) only the last visual frame at each timestep is used as DQN's input. The authors report that despite seeing only one visual frame, DRQN is still capable of integrating relevant information across the frames. Nonetheless, no systematic improvement in Atari game scores over the results of @cite_7 was observed.
{ "cite_N": [ "@cite_14", "@cite_7" ], "mid": [ "2121092017", "2145339207" ], "abstract": [ "Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action." ] }
1512.01693
2195446438
A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at human and superhuman levels. Its creators at Google DeepMind called the approach Deep Q-Network (DQN). We present an extension of DQN with "soft" and "hard" attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show a level of performance superior to that of DQN. Moreover, the built-in attention mechanisms allow direct online monitoring of the training process by highlighting the regions of the game screen the agent is focusing on when making decisions.
Another drawback of DQN is its long training time, which critically limits researchers' ability to carry out experiments with different network architectures and algorithm parameter settings. According to @cite_7 , it takes 12-14 days on a GPU to train the network. @cite_13 proposed a new massively parallel version of the algorithm geared to address this problem. They report that its performance surpassed non-distributed DQN in 41 of the 49 games. However, extensive parallelization is not the only, and probably not the most efficient, remedy to the problem.
{ "cite_N": [ "@cite_13", "@cite_7" ], "mid": [ "1658008008", "2145339207" ], "abstract": [ "We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement the Deep Q-Network algorithm (DQN). Our distributed algorithm was applied to 49 games from Atari 2600 games from the Arcade Learning Environment, using identical hyperparameters. Our performance surpassed non-distributed DQN in 41 of the 49 games and also reduced the wall-time required to achieve these results by an order of magnitude on most games.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action." ] }
1512.01693
2195446438
A deep learning approach to reinforcement learning led to a general learner able to train on visual input to play a variety of arcade games at human and superhuman levels. Its creators at Google DeepMind called the approach Deep Q-Network (DQN). We present an extension of DQN with "soft" and "hard" attention mechanisms. Tests of the proposed Deep Attention Recurrent Q-Network (DARQN) algorithm on multiple Atari 2600 games show a level of performance superior to that of DQN. Moreover, the built-in attention mechanisms allow direct online monitoring of the training process by highlighting the regions of the game screen the agent is focusing on when making decisions.
Recent achievements of visual attention models in caption generation @cite_16 , object tracking @cite_8 @cite_2 , and machine translation @cite_3 have led the authors of this paper to conduct a series of experiments to assess the possible benefits of incorporating attention mechanisms into the structure of the DRQN algorithm. The main advantage of utilizing these mechanisms is that DRQN acquires the ability to select and then focus on relatively small informative regions of an input image, thus helping to reduce the total number of parameters in the deep neural network and the computational operations needed for training and testing it. In contrast to DRQN, in this case the LSTM layer stores the data used not only for making a decision on the next action, but also for choosing the next region of attention. In addition to computational speedups, attention-based models can also add some degree of interpretability to the Deep Q-Learning process by providing researchers with an opportunity to visualize "where" and "what" the agent's attention is focusing on.
{ "cite_N": [ "@cite_16", "@cite_2", "@cite_3", "@cite_8" ], "mid": [ "2950178297", "", "2133564696", "2154071538" ], "abstract": [ "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways, identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale, and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies that operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a gaussian process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain." ] }
1512.01362
2266274440
In the last couple of decades, there have been major advancements in the domain of missing data imputation. The techniques in the domain include, amongst others: Expectation Maximization, Neural Networks with Evolutionary Algorithms or optimization techniques, and K-Nearest Neighbor approaches to solve the problem. The presence of missing data entries in databases renders the tasks of decision-making and data analysis nontrivial. As a result, this area has attracted a lot of research interest, with the aim being to yield accurate, time-efficient, and sensitive missing data imputation techniques, especially when time-sensitive applications such as power plants and winding processes are concerned. In this article, considering arbitrary and monotone missing data patterns, we hypothesize that the use of deep neural networks built using autoencoders and denoising autoencoders, in conjunction with genetic algorithms, swarm intelligence, and maximum likelihood estimator methods, as novel data imputation techniques will lead to better imputed values than existing techniques. Also considered are the missing at random, missing completely at random, and missing not at random missing data mechanisms. We also intend to use fuzzy logic in tandem with deep neural networks to perform the missing data imputation tasks, as well as different building blocks for the deep neural networks, like Stacked Restricted Boltzmann Machines and Deep Belief Networks, to test our hypothesis. The motivation behind this article is the need for missing data imputation techniques that lead to better imputed values than existing methods, with higher accuracies and lower errors.
In this section, we present some of the work that has been done by researchers to address the problem of missing data. In @cite_12 , it is suggested that information within incomplete cases, that is, instances with missing values, be used when estimating missing values. A nonparametric iterative imputation algorithm (NIIA) is proposed that leads to a root mean squared error value of at least 0.5 on the imputation of continuous values and a classification accuracy of at most 87.3%.
{ "cite_N": [ "@cite_12" ], "mid": [ "2050767557" ], "abstract": [ "This paper proposes to utilize information within incomplete instances (instances with missing values) when estimating missing values. Accordingly, a simple and efficient nonparametric iterative imputation algorithm, called the NIIA method, is designed for iteratively imputing missing target values. The NIIA method imputes each missing value several times until the algorithm converges. In the first iteration, all the complete instances are used to estimate missing values. The information within incomplete instances is utilized since the second imputation iteration. We conduct some experiments for evaluating the efficiency, and demonstrate: (1) the utilization of information within incomplete instances is of benefit to easily capture the distribution of a dataset; and (2) the NIIA method outperforms the existing methods in accuracy, and this advantage is clearly highlighted when datasets have a high missing ratio." ] }
1512.01715
2193011350
This paper presents a restricted visual Turing test (VTT) for story-line based deep understanding in long-term and multi-camera captured videos. Given a set of videos of a scene (such as a multi-room office, a garden, and a parking lot) and a sequence of story-line based queries, the task is to provide answers either simply in binary form "true false" (to a polar query) or in an accurate natural language description (to a non-polar query). Queries, polar or non-polar, consist of view-based queries, which can be answered from a particular camera view, and scene-centered queries, which involve joint inference across different cameras. The story lines are collected to cover spatial, temporal and causal understanding of input videos. The data and queries distinguish our VTT from recently proposed visual question answering in images and video captioning. A vision system is proposed to perform joint video and query parsing which integrates different vision modules, a knowledge base and a query engine. The system provides unified interfaces for different modules so that individual modules can be reconfigured to test a new method. We provide a benchmark dataset and a toolkit for ontology guided story-line query generation which consists of about 93.5 hours of video captured in four different locations and 3,426 queries split into 127 story lines. We also provide a baseline implementation and result analyses.
Inspired by the generic Turing test principle in AI @cite_7 , Geman et al. proposed a visual Turing test @cite_31 for object detection tasks in images which organizes queries into story lines, within which queries are connected and their complexity increases gradually -- similar to conversations between human beings. In a similar spirit, Malinowski and Fritz @cite_25 @cite_43 proposed a multi-word method to address factual queries about scene images. In the dataset and evaluation framework proposed in this paper, we adopt an evaluation structure similar to that of @cite_31 , but focus on a more complex scenario featuring videos and overlapping cameras to facilitate a broader scope of vision tasks.
{ "cite_N": [ "@cite_43", "@cite_31", "@cite_25", "@cite_7" ], "mid": [ "300525892", "1983927101", "2951619830", "2001771035" ], "abstract": [ "As language and visual understanding by machines progresses rapidly, we are observing an increasing interest in holistic architectures that tightly interlink both modalities in a joint learning and inference process. This trend has allowed the community to progress towards more challenging and open tasks and refueled the hope at achieving the old AI dream of building machines that could pass a turing test in open domains. In order to steadily make progress towards this goal, we realize that quantifying performance becomes increasingly difficult. Therefore we ask how we can precisely define such challenges and how we can evaluate different algorithms on this open tasks? In this paper, we summarize and discuss such challenges as well as try to give answers where appropriate options are available in the literature. We exemplify some of the solutions on a recently presented dataset of question-answering task based on real-world indoor images that establishes a visual turing challenge. Finally, we argue despite the success of unique ground-truth annotation, we likely have to step away from carefully curated dataset and rather rely on 'social consensus' as the main driving force to create suitable benchmarks. Providing coverage in this inherently ambiguous output space is an emerging challenge that we face in order to make quantifiable progress in this area.", "Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.", "We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framework. Our approach can handle human questions of high complexity about realistic scenes and replies with range of answer like counts, object classes, instances and lists of them. The system is directly trained from question-answer pairs. 
We establish a first benchmark for this task that can be seen as a modern attempt at a visual Turing test.", "" ] }
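To make the polar/non-polar distinction concrete, here is a toy query engine over a hand-written fact base; every name in it is hypothetical, and the real system parses queries and video far more elaborately:

```python
# Toy fact base: (subject, predicate, object) triples a vision system might
# have inferred from parsed video. All identifiers are illustrative.
facts = {
    ("person_1", "enters", "room_2"),
    ("person_1", "carries", "backpack"),
    ("car_3", "parks_in", "lot_A"),
}

def answer(query):
    """Polar queries get "true"/"false"; non-polar ones get a description."""
    kind, *body = query
    if kind == "polar":                    # e.g. ("polar", subj, pred, obj)
        return "true" if tuple(body) in facts else "false"
    subj, pred = body                      # e.g. ("what", subj, pred)
    objs = [o for s, p, o in facts if s == subj and p == pred]
    return ", ".join(objs) if objs else "unknown"

print(answer(("polar", "person_1", "enters", "room_2")))  # true
print(answer(("what", "person_1", "carries")))            # backpack
```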
1512.01436
2271413003
Modeling power transmission networks is an important area of research with applications such as vulnerability analysis, study of cascading failures, and location of measurement devices. Graph-theoretic approaches have been widely used to solve these problems, but are subject to several limitations. One of the limitations is the ability to model a heterogeneous system in a consistent manner using the standard graph-theoretic formulation. In this paper, we propose a network-of-networks approach for modeling power transmission networks in order to explicitly incorporate heterogeneity in the model. This model distinguishes between different components of the network that operate at different voltage ratings, and also captures the intra and inter-network connectivity patterns. By building the graph in this fashion we present a novel, and fundamentally different, perspective of power transmission networks. Consequently, this novel approach will have a significant impact on the graph-theoretic modeling of power grids that we believe will lead to a better understanding of transmission networks.
Many researchers have explored the use of random graphs to model power grids @cite_11 @cite_13 @cite_2 @cite_6 @cite_7 @cite_10 , but in each case the resulting random graph does not fully model the structure of interest. In most cases, authors attempt to match graph characteristics from power grid networks, such as degree distribution, average path length, and clustering coefficient, in their random models.
{ "cite_N": [ "@cite_7", "@cite_10", "@cite_6", "@cite_2", "@cite_13", "@cite_11" ], "mid": [ "", "1970727760", "2113146118", "1972716516", "2025761485", "1571113954" ], "abstract": [ "", "An electrical power grid is a critical infrastructure. Its reliable, robust, and efficient operation inevitably depends on underlying telecommunication networks. In order to design an efficient communication scheme and examine the efficiency of any networked control architecture, we need to characterize statistically its information source, namely the power grid itself. In this paper we studied both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. We made several interesting discoveries: the power grids are sparsely connected and the average nodal degree is very stable regardless of network size; the nodal degrees distribution has exponential tails, which can be approximated with a shifted Geometric distribution; the algebraic connectivity scales as a power function of network size with the power index lying between that of one-dimensional and two-dimensional lattice; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a Double Pareto LogNormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random power grids featuring the same topology and electrical characteristics we found from the real data.", "The statistical tools of Complex Network Analysis are of useful to understand salient properties of complex systems, may these be natural or pertaining human engineered infrastructures. One of these that is receiving growing attention for its societ al relevance is that of electricity distribution. In this paper, we present a survey of the most relevant scientific studies investigating the properties of different Power Grids infrastructures using Complex Network Analysis techniques and methodologies. We categorize and explore the most relevant literature works considering general topological properties, physical properties, and differences between the various graph-related indicators and reliability aspects. We also trace the evolution in such field of the approach of study during the years to see the improvement achieved in the analysis.", "Numerous recent papers have found important relationships between network structure and risks within networks. These results indicate that network structure can dramatically affect the relative effectiveness of risk identification and mitigation methods. With this in mind this paper provides a comparative analysis of the topological and electrical structure of the IEEE 300 bus and the Eastern United States power grids. Specifically we compare the topology of these grids with that of random [1], preferential-attachment [2] and small-world [3] networks of equivalent sizes and find that power grids differ substantially from these abstract models in degree distribution, clustering, diameter and assortativity, and thus conclude that these abstract models do not provide substantial utility for modeling power grids. To better represent the topological properties of power grids we introduce a new graph generating algorithm, the minimum distance graph, that produces networks with properties that more nearly match those of known power grids. While these topological comparisons are useful, they do not account for the physical laws that govern flows in electricity networks. 
To elucidate the electrical structure of power grids, we propose a new method for representing electrical structure as a weighted graph. This analogous representation is based on electrical distance rather than topological connections. A comparison of these two representations of the test power grids reveals dramatic differences between the electrical and topological structure of electrical power systems.", "In this article, we model electric power delivery networks as graphs, and conduct studies of two power transmission grids, i.e., the Nordic and the western states (U.S.) transmission grid. We calculate values of topological (structural) characteristics of the networks and compare their error and attack tolerance (structural vulnerability), i.e., their performance when vertices are removed, with two frequently used theoretical reference networks (the Erdos-Renyi random graph and the Barabasi-Albert scale-free network). Further, we perform a structural vulnerability analysis of a fictitious electric power network with simple structure. In this analysis, different strategies to decrease the vulnerability of the system are evaluated. Finally, we present a discussion on the practical applicability of graph modeling.", "The topological (graph) structure of complex networks often provides valuable information about the performance and vulnerability of the network. However, there are multiple ways to represent a given network as a graph. Electric power transmission and distribution networks have a topological structure that is straightforward to represent and analyze as a graph. However, simple graph models neglect the comprehensive connections between components that result from Ohm's and Kirchhoff's laws. This paper describes the structure of the three North American electric power interconnections, from the perspective of both topological and electrical connectivity. We compare the simple topology of these networks with that of random, preferential-attachment, and small-world networks of equivalent sizes and find that power grids differ substantially from these abstract models in degree distribution, clustering, diameter and assortativity, and thus conclude that these topological forms may be misleading as models of power systems. To study the electrical connectivity of power systems, we propose a new method for representing electrical structure using electrical distances rather than geographic connections. Comparisons of these two representations of the North American power networks reveal notable differences between the electrical and topological structures of electric power networks." ] }
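To make the comparison above concrete, the following minimal Python sketch (using networkx) computes the three statistics these random models typically try to match: degree distribution summary, average path length, and clustering coefficient. The generators, network size, and the 2.7 mean-degree target are illustrative stand-ins, not values taken from the cited papers.

```python
# Minimal sketch: compute the statistics that random power-grid models
# typically try to match, for three classic generators. Sizes and the
# mean-degree target are illustrative assumptions.
import networkx as nx

def summarize(g, name):
    degrees = [d for _, d in g.degree()]
    print(f"{name}: mean degree = {sum(degrees) / len(degrees):.2f}, "
          f"avg path length = {nx.average_shortest_path_length(g):.2f}, "
          f"clustering = {nx.average_clustering(g):.3f}")

n, mean_deg = 300, 2.7          # power grids are sparse; ~2.7 is illustrative
p = mean_deg / (n - 1)          # ER edge probability matching the target mean degree

er = nx.gnp_random_graph(n, p, seed=1)
ws = nx.connected_watts_strogatz_graph(n, k=4, p=0.1, seed=1)  # small-world baseline
ba = nx.barabasi_albert_graph(n, m=2, seed=1)                  # preferential attachment

if nx.is_connected(er):          # ER graphs this sparse are often disconnected
    summarize(er, "Erdos-Renyi")
summarize(ws, "Watts-Strogatz")
summarize(ba, "Barabasi-Albert")
```

Matching these scalar statistics is exactly what the cited works report as insufficient: two graphs can agree on all three numbers and still differ in the structure that matters for power flows.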
1512.01436
2271413003
Modeling power transmission networks is an important area of research with applications such as vulnerability analysis, study of cascading failures, and location of measurement devices. Graph-theoretic approaches have been widely used to solve these problems, but are subject to several limitations. One of the limitations is the ability to model a heterogeneous system in a consistent manner using the standard graph-theoretic formulation. In this paper, we propose a network-of-networks approach for modeling power transmission networks in order to explicitly incorporate heterogeneity in the model. This model distinguishes between different components of the network that operate at different voltage ratings, and also captures the intra and inter-network connectivity patterns. By building the graph in this fashion we present a novel, and fundamentally different, perspective of power transmission networks. Consequently, this novel approach will have a significant impact on the graph-theoretic modeling of power grids that we believe will lead to a better understanding of transmission networks.
The authors of @cite_2 also show that the topological properties of the Eastern United States power grid and the IEEE 300 system differ significantly from random networks created using preferential-attachment, small-world, and ER models. They provide a new model, the minimum distance graph, which more closely matches measured topological characteristics like degree distribution, clustering coefficient, diameter, and assortativity of the real networks. We build on these ideas in our work. Extensions of the minimum-distance model are discussed in .
{ "cite_N": [ "@cite_2" ], "mid": [ "1972716516" ], "abstract": [ "Numerous recent papers have found important relationships between network structure and risks within networks. These results indicate that network structure can dramatically affect the relative effectiveness of risk identification and mitigation methods. With this in mind this paper provides a comparative analysis of the topological and electrical structure of the IEEE 300 bus and the Eastern United States power grids. Specifically we compare the topology of these grids with that of random [1], preferential-attachment [2] and small-world [3] networks of equivalent sizes and find that power grids differ substantially from these abstract models in degree distribution, clustering, diameter and assortativity, and thus conclude that these abstract models do not provide substantial utility for modeling power grids. To better represent the topological properties of power grids we introduce a new graph generating algorithm, the minimum distance graph, that produces networks with properties that more nearly match those of known power grids. While these topological comparisons are useful, they do not account for the physical laws that govern flows in electricity networks. To elucidate the electrical structure of power grids, we propose a new method for representing electrical structure as a weighted graph. This analogous representation is based on electrical distance rather than topological connections. A comparison of these two representations of the test power grids reveals dramatic differences between the electrical and topological structure of electrical power systems." ] }
1512.01436
2271413003
Modeling power transmission networks is an important area of research with applications such as vulnerability analysis, study of cascading failures, and location of measurement devices. Graph-theoretic approaches have been widely used to solve these problems, but are subject to several limitations. One of the limitations is the ability to model a heterogeneous system in a consistent manner using the standard graph-theoretic formulation. In this paper, we propose a network-of-networks approach for modeling power transmission networks in order to explicitly incorporate heterogeneity in the model. This model distinguishes between different components of the network that operate at different voltage ratings, and also captures the intra and inter-network connectivity patterns. By building the graph in this fashion we present a novel, and fundamentally different, perspective of power transmission networks. Consequently, this novel approach will have a significant impact on the graph-theoretic modeling of power grids that we believe will lead to a better understanding of transmission networks.
In a more recent paper, some of the same authors @cite_2 look more specifically at both the electrical and topological connectivity of three North American power infrastructures. They again compare these with random, preferential-attachment, and small-world networks and find that these random networks differ greatly from the real power networks. They propose to represent the electrical connectivity of power systems using electrical distances rather than physical connectivity and geographic connections. In particular, they propose a distance based on the sensitivity between active power transfers and nodal phase angle differences. Electrical distance is computed as a positive value for all pairs of vertices, which yields a complete weighted graph. Therefore, to make it more comparable to a geographic network, the authors apply a threshold and keep only those edges whose distance falls below it.
{ "cite_N": [ "@cite_2" ], "mid": [ "1972716516" ], "abstract": [ "Numerous recent papers have found important relationships between network structure and risks within networks. These results indicate that network structure can dramatically affect the relative effectiveness of risk identification and mitigation methods. With this in mind this paper provides a comparative analysis of the topological and electrical structure of the IEEE 300 bus and the Eastern United States power grids. Specifically we compare the topology of these grids with that of random [1], preferential-attachment [2] and small-world [3] networks of equivalent sizes and find that power grids differ substantially from these abstract models in degree distribution, clustering, diameter and assortativity, and thus conclude that these abstract models do not provide substantial utility for modeling power grids. To better represent the topological properties of power grids we introduce a new graph generating algorithm, the minimum distance graph, that produces networks with properties that more nearly match those of known power grids. While these topological comparisons are useful, they do not account for the physical laws that govern flows in electricity networks. To elucidate the electrical structure of power grids, we propose a new method for representing electrical structure as a weighted graph. This analogous representation is based on electrical distance rather than topological connections. A comparison of these two representations of the test power grids reveals dramatic differences between the electrical and topological structure of electrical power systems." ] }
1512.01436
2271413003
Modeling power transmission networks is an important area of research with applications such as vulnerability analysis, study of cascading failures, and location of measurement devices. Graph-theoretic approaches have been widely used to solve these problems, but are subject to several limitations. One of the limitations is the ability to model a heterogeneous system in a consistent manner using the standard graph-theoretic formulation. In this paper, we propose a network-of-networks approach for modeling power transmission networks in order to explicitly incorporate heterogeneity in the model. This model distinguishes between different components of the network that operate at different voltage ratings, and also captures the intra and inter-network connectivity patterns. By building the graph in this fashion we present a novel, and fundamentally different, perspective of power transmission networks. Consequently, this novel approach will have a significant impact on the graph-theoretic modeling of power grids that we believe will lead to a better understanding of transmission networks.
Pagani and Aiello, in @cite_6 , present a survey of the most relevant work using complex network analysis to study the power grid. They remark that most of the networks studied were high-voltage grids from North America, Europe, or China, and that most of the surveyed work finds an exponential degree distribution. The geography of the country appears to matter, as the results differ somewhat between countries with differing geographies. One point of agreement among all studies is that the power grid is resilient to random failures but extremely vulnerable to attacks targeting nodes with high degree or high betweenness scores.
{ "cite_N": [ "@cite_6" ], "mid": [ "2113146118" ], "abstract": [ "The statistical tools of Complex Network Analysis are of useful to understand salient properties of complex systems, may these be natural or pertaining human engineered infrastructures. One of these that is receiving growing attention for its societ al relevance is that of electricity distribution. In this paper, we present a survey of the most relevant scientific studies investigating the properties of different Power Grids infrastructures using Complex Network Analysis techniques and methodologies. We categorize and explore the most relevant literature works considering general topological properties, physical properties, and differences between the various graph-related indicators and reliability aspects. We also trace the evolution in such field of the approach of study during the years to see the improvement achieved in the analysis." ] }
1512.01436
2271413003
Modeling power transmission networks is an important area of research with applications such as vulnerability analysis, study of cascading failures, and location of measurement devices. Graph-theoretic approaches have been widely used to solve these problems, but are subject to several limitations. One of the limitations is the ability to model a heterogeneous system in a consistent manner using the standard graph-theoretic formulation. In this paper, we propose a network-of-networks approach for modeling power transmission networks in order to explicitly incorporate heterogeneity in the model. This model distinguishes between different components of the network that operate at different voltage ratings, and also captures the intra and inter-network connectivity patterns. By building the graph in this fashion we present a novel, and fundamentally different, perspective of power transmission networks. Consequently, this novel approach will have a significant impact on the graph-theoretic modeling of power grids that we believe will lead to a better understanding of transmission networks.
Finally, we cite the work in @cite_10 , in which the authors characterize many graph measures in the context of power grid graphs. For example, they show that power grids are sparsely connected, that the degree distribution has an exponential tail, and that the line impedance has a heavy-tailed distribution. Based on these findings they propose an algorithm to generate random power grids that feature the same topological and electrical characteristics discovered in real data. They, like us, take a hierarchical approach to generating synthetic power networks. However, their approach partitions the network into geographic zones, whereas ours breaks the network up by voltage level. Our approach requires no geographic knowledge of the system and leads to a systematic way of annotating the nodes and edges with electrical properties such as voltage ratings.
{ "cite_N": [ "@cite_10" ], "mid": [ "1970727760" ], "abstract": [ "An electrical power grid is a critical infrastructure. Its reliable, robust, and efficient operation inevitably depends on underlying telecommunication networks. In order to design an efficient communication scheme and examine the efficiency of any networked control architecture, we need to characterize statistically its information source, namely the power grid itself. In this paper we studied both the topological and electrical characteristics of power grid networks based on a number of synthetic and real-world power systems. We made several interesting discoveries: the power grids are sparsely connected and the average nodal degree is very stable regardless of network size; the nodal degrees distribution has exponential tails, which can be approximated with a shifted Geometric distribution; the algebraic connectivity scales as a power function of network size with the power index lying between that of one-dimensional and two-dimensional lattice; the line impedance has a heavy-tailed distribution, which can be captured quite accurately by a Double Pareto LogNormal distribution. Based on the discoveries mentioned above, we propose an algorithm that generates random power grids featuring the same topology and electrical characteristics we found from the real data." ] }
1512.01325
2189390184
The human visual system can spot an abnormal image, and reason about what makes it strange. This task has not received enough attention in computer vision. In this paper we study various types of atypicalities in images in a more comprehensive way than has been done before. We propose a new dataset of abnormal images showing a wide range of atypicalities. We design human subject experiments to discover a coarse taxonomy of the reasons for abnormality. Our experiments reveal three major categories of abnormality: object-centric, scene-centric, and contextual. Based on this taxonomy, we propose a comprehensive computational model that can predict all different types of abnormality in images and outperform prior arts in abnormality recognition.
Among all possible reasons for abnormality, previous work has mainly focused on isolated and specific kinds of abnormalities; for example, rooted in either objects @cite_17 or context @cite_32 . In this paper we first investigate the different reasons for abnormality, and provide evidence that supports our proposed clusters of reasons. Following this grouping, we introduce a comprehensive framework for detecting and classifying different types of atypicalities in images.
{ "cite_N": [ "@cite_32", "@cite_17" ], "mid": [ "145548212", "2019524107" ], "abstract": [ "Contextual modeling is a critical issue in scene understanding. Object detection accuracy can be improved by exploiting tendencies that are common among object configurations. However, conventional contextual models only exploit the tendencies of normal objects; abnormal objects that do not follow the same tendencies are hard to detect through contextual model. This paper proposes a novel generative model that detects abnormal objects by meeting four proposed criteria of success. This model generates normal as well as abnormal objects, each following their respective tendencies. Moreover, this generation is controlled by a latent scene variable. All latent variables of the proposed model are predicted through optimization via population-based Markov Chain Monte Carlo, which has a relatively short convergence time. We present a new abnormal dataset classified into three categories to thoroughly measure the accuracy of the proposed model for each category; the results demonstrate the superiority of our proposed approach over existing methods.", "When describing images, humans tend not to talk about the obvious, but rather mention what they find interesting. We argue that abnormalities and deviations from typicalities are among the most important components that form what is worth mentioning. In this paper we introduce the abnormality detection as a recognition problem and show how to model typicalities and, consequently, meaningful deviations from prototypical properties of categories. Our model can recognize abnormalities and report the main reasons of any recognized abnormality. We also show that abnormality predictions can help image categorization. We introduce the abnormality detection dataset and show interesting results on how to reason about abnormalities." ] }
1512.01691
2193844101
In this paper we present a framework for secure identification using deep neural networks, and apply it to the task of template protection for face authentication. We use deep convolutional neural networks (CNNs) to learn a mapping from face images to maximum entropy binary (MEB) codes. The mapping is robust enough to tackle the problem of exact matching, yielding the same code for new samples of a user as the code assigned during training. These codes are then hashed using any hash function that follows the random oracle model (like SHA-512) to generate protected face templates (similar to text based password protection). The algorithm makes no unrealistic assumptions and offers high template security, cancelability, and state-of-the-art matching performance. The efficacy of the approach is shown on CMU-PIE, Extended Yale B, and Multi-PIE face databases. We achieve high (~95%) genuine accept rates (GAR) at zero false accept rate (FAR) with up to 1024 bits of template security.
On the image recognition side, deep CNN algorithms such as DeepFace @cite_22 have shown exceptional performance and hold the current state-of-the-art results for face recognition. There is also recent work that maps data to binary codes using deep neural networks, such as @cite_7 . Although mapping to binary codes (or learning hash functions) in this manner may seem similar to our approach, these methods are fundamentally different from what we are trying to achieve. Algorithms such as @cite_7 seek to learn a natural binary representation of the data, and thus the binary codes they map to are correlated with the data distribution. Our MEB codes have no correlation with the original data distribution. This gives us the template security we seek, but also makes the problem more challenging, since the mapping function we must learn is more complex.
{ "cite_N": [ "@cite_22", "@cite_7" ], "mid": [ "2145287260", "1956333070" ], "abstract": [ "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.", "In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large scale visual search. Unlike most existing binary codes learning methods which seek a single linear projection to map each sample into a binary vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the deep network: 1) the loss between the original real-valued feature descriptor and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) by including one discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes. Experimental results show the superiority of the proposed approach over the state-of-the-arts." ] }
1512.01283
2265200751
The music industry is a $130 billion industry. Predicting whether a song catches the pulse of the audience impacts the industry. In this paper we analyze language inside the lyrics of the songs using several computational linguistic algorithms and predict whether a song would make to the top or bottom of the billboard rankings based on the language features. We trained and tested an SVM classifier with a radial kernel function on the linguistic features. Results indicate that we can classify whether a song belongs to top and bottom of the billboard charts with a precision of 0.76.
@cite_1 implemented multimodal music emotion classification (MEC) for 14 kinds of emotions from the music and lyrics of Western songs. Their dataset consisted of 3500 songs annotated with moods such as sad, high, groovy, happy, lonely, sexy, energetic, romantic, angry, sleepy, nostalgic, funny, jazzy, and calm. They used AdaBoost with decision stumps to classify the music and language features of the lyrics into their respective emotion categories, achieving an accuracy of 0.78 using language features together with surface features of the audio. The authors claim that the language features played a more important role than the music features in classification.
{ "cite_N": [ "@cite_1" ], "mid": [ "2123169053" ], "abstract": [ "AutoTutor simulates a human tutor by holding a conversation with the learner in natural language. The dialogue is augmented by an animated conversational agent and three-dimensional (3-D) interactive simulations in order to enhance the learner's engagement and the depth of the learning. Grounded in constructivist learning theories and tutoring research, AutoTutor achieves learning gains of approximately 0.8 sigma (nearly one letter grade), depending on the learning measure and comparison condition. The computational architecture of the system uses the .NET framework and has simplified deployment for classroom trials." ] }
1512.01283
2265200751
The music industry is a $130 billion industry. Predicting whether a song catches the pulse of the audience impacts the industry. In this paper we analyze language inside the lyrics of the songs using several computational linguistic algorithms and predict whether a song would make to the top or bottom of the billboard rankings based on the language features. We trained and tested an SVM classifier with a radial kernel function on the linguistic features. Results indicate that we can classify whether a song belongs to top and bottom of the billboard charts with a precision of 0.76.
@cite_6 also indicated that language features outperform audio features for music mood classification. They showed that language features extracted from the songs fit well with Russell's valence (negative-positive) and arousal (inactive-active) model. Several cross-cultural studies show evidence for universal emotional cues in music and language across different cultures and traditions.
{ "cite_N": [ "@cite_6" ], "mid": [ "2160660844" ], "abstract": [ "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques." ] }
1512.01283
2265200751
The music industry is a $130 billion industry. Predicting whether a song catches the pulse of the audience impacts the industry. In this paper we analyze language inside the lyrics of the songs using several computational linguistic algorithms and predict whether a song would make to the top or bottom of the billboard rankings based on the language features. We trained and tested an SVM classifier with a radial kernel function on the linguistic features. Results indicate that we can classify whether a song belongs to top and bottom of the billboard charts with a precision of 0.76.
@cite_2 used LIWC and surface music components of all the phrases in a small collection of songs as a dataset for identifying the emotion of each phrase. Each phrase was annotated for emotions. Using an SVM classifier, they obtained an accuracy of 0.87 with the language features alone. They observed that the language components gave higher accuracy than the music features in predicting emotions. The accuracy is higher because they look at emotions at the phrase level, where the chance of multiple emotions occurring in such a short text is very low.
{ "cite_N": [ "@cite_2" ], "mid": [ "2403416428" ], "abstract": [ "We present in this paper the design of DeepTutor, the first dialoguebased intelligent tutoring system based on Learning Progressions, and its implications for developing the Generalized Framework for Intelligent Tutoring. We also present the design of SEMILAR, a semantic similarity toolkit, that helps researchers investigate and author semantic similarity models for evaluating natural language student inputs in conversatioanl ITSs. DeepTutor has been developed as a web service while SEMILAR is a Java library. Based on our experience with developing DeepTutor and SEMILAR, we contrast three different models for developing a standardized architecture for intelligent tutoring systems: (1) a single-entry web service coupled with XML protocols for queries and data, (2) a bundle of web services, and (3) library-API. Based on the analysis of the three models, recommendations are" ] }
1512.00994
2184614974
Multi-instance learning (MIL) has a wide range of applications due to its distinctive characteristics. Although many state-of-the-art algorithms have achieved decent performances, a plurality of existing methods solve the problem only in instance level rather than excavating relations among bags. In this paper, we propose an efficient algorithm to describe each bag by a corresponding feature vector via comparing it with other bags. In other words, the crucial information of a bag is extracted from the similarity between that bag and other reference bags. In addition, we apply extensions of Hausdorff distance to representing the similarity, to a certain extent, overcoming the key challenge of MIL problem, the ambiguity of instances' labels in positive bags. Experimental results on benchmarks and text categorization tasks show that the proposed method outperforms the previous state-of-the-art by a large margin.
Multi-instance learning (MIL) has received a lot of attention since it helps solve a range of real applications. To date, many MIL methods have been proposed, either to develop effective MIL solvers or to apply MIL to application problems. First, we briefly review a few popular MIL solvers. The EM-DD method @cite_21 uses EM to search the instance space, which contains many instances from different positive bags and few from negative bags, for the instances responsible for the bag labels. Instead of adopting a simple instance model, the miRPCA method @cite_19 utilizes a robust PCA model to build an instance model that is robust to outliers. Besides generative instance models, discriminative models are more popular as instance models. For example, both MILBoost @cite_13 and miSVM use discriminative methods, Boosting and SVM respectively, as instance models, and iteratively select positive instances to train the models. Furthermore, miGraph @cite_4 represents each bag as a graph and explicitly models the relationships between instances within a bag, while @cite_16 models the relationships between different bags using a conditional random field. Recent work @cite_20 studies the setting in which a bag contains an infinite number of instances.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_19", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2133288557", "2163474322", "132498914", "1531742118", "2166010828", "2222844749" ], "abstract": [ "Previous studies on multi-instance learning typically treated instances in the bags as independently and identically distributed. The instances in a bag, however, are rarely independent in real tasks, and a better performance can be expected if the instances are treated in an non-i.i.d. way that exploits relations among instances. In this paper, we propose two simple yet effective methods. In the first method, we explicitly map every bag to an undirected graph and design a graph kernel for distinguishing the positive and negative bags. In the second method, we implicitly construct graphs by deriving affinity matrices and propose an efficient graph kernel considering the clique information. The effectiveness of the proposed methods are validated by experiments.", "We present a new multiple-instance (MI) learning technique (EM-DD) that combines EM with the diverse density (DD) algorithm. EM-DD is a general-purpose MI algorithm that can be applied with boolean or real-value labels and makes real-value predictions. On the boolean Musk benchmarks, the EM-DD algorithm without any tuning significantly outperforms all previous algorithms. EM-DD is relatively insensitive to the number of relevant attributes in the data set and scales up well to large bag sizes. Furthermore, EM-DD provides a new framework for MI learning, in which the MI problem is converted to a single-instance setting by using EM to estimate the instance responsible for the label of the bag.", "Principal component analysis (PCA), as a key component in statistical learning, has been adopted in a wide variety of applications in computer vision and machine learning. From a different angle, weakly supervised learning, more specifically multiple instance learning (MIL), allows fine-grained information to be exploited from coarsely-grained label information. In this paper, we propose an algorithm using the robust PCA (RPCA) [1] in a iterative way to perform simultaneous common object discovery and model learning under a one-class multiple instance learning setting. We show the advantage of our method on common object discovery and model learning, which needs no fine coarse alignment in the input data; in addition, it achieves comparable results with standard two-class MIL learning algorithms but our method is learning from one-class data only.", "We present MI-CRF, a conditional random field (CRF) model for multiple instance learning (MIL). MI-CRF models bags as nodes in a CRF with instances as their states. It combines discriminative unary instance classifiers and pairwise dissimilarity measures. We show that both forces improve the classification performance. Unlike other approaches, MI-CRF considers all bags jointly during training as well as during testing. This makes it possible to classify test bags in an imputation setup. The parameters of MI-CRF are learned using constraint generation. Furthermore, we show that MI-CRF can incorporate previous MIL algorithms to improve on their results. MI-CRF obtains competitive results on five standard MIL datasets.", "A good image object detection algorithm is accurate, fast, and does not require exact locations of objects in a training set. 
We can create such an object detector by taking the architecture of the Viola-Jones detector cascade and training it with a new variant of boosting that we call MIL-Boost. MILBoost uses cost functions from the Multiple Instance Learning literature combined with the AnyBoost framework. We adapt the feature selection criterion of MILBoost to optimize the performance of the Viola-Jones cascade. Experiments show that the detection rate is up to 1.6 times better using MILBoost. This increased detection rate shows the advantage of simultaneously learning the locations and scales of the objects in the training set along with the parameters of the classifier.", "In many machine learning applications, labeling every instance of data is burdensome. Multiple Instance Learning (MIL), in which training data is provided in the form of labeled bags rather than labeled instances, is one approach for a more relaxed form of supervised learning. Though much progress has been made in analyzing MIL problems, existing work considers bags that have a finite number of instances. In this paper we argue that in many applications of MIL (e.g. image, audio, etc.) the bags are better modeled as low dimensional manifolds in high dimensional feature space. We show that the geometric structure of such manifold bags affects PAC-learnability. We discuss how a learning algorithm that is designed for finite sized bags can be adapted to learn from manifold bags. Furthermore, we propose a simple heuristic that reduces the memory requirements of such algorithms. Our experiments on real-world data validate our analysis and show that our approach works well." ] }
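The bag-level representation described in this record's abstract, describing each bag by its Hausdorff-style distances to a set of reference bags, can be sketched in a few lines. The "minimal" Hausdorff variant below (distance between the closest instance pair) is one of the extensions the abstract alludes to; bag sizes and dimensions are invented.

```python
import numpy as np

# Minimal sketch: embed each bag as a vector of minimal-Hausdorff distances
# to a set of reference bags, turning MIL into ordinary vector classification.
def min_hausdorff(bag_a: np.ndarray, bag_b: np.ndarray) -> float:
    # Distance between the closest pair of instances across the two bags.
    diffs = bag_a[:, None, :] - bag_b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).min())

rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(3, 8), 5)) for _ in range(10)]  # toy bags
references = bags[:4]                 # reference bags (e.g. the training bags)

features = np.array([[min_hausdorff(b, r) for r in references] for b in bags])
print(features.shape)                 # each bag is now a 4-dim feature vector
```

Any standard classifier can then be trained on `features`, since the label ambiguity of individual instances has been folded into the bag-to-bag distances.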
1512.00215
2257516113
Network Function Virtualization (NFV) has recently received significant attention as an innovative way of deploying network services. By decoupling network functions from the physical equipment on which they run, NFV has been proposed as passage towards service agility, better time-to-market, and reduced Capital Expenses (CAPEX) and Operating Expenses (OPEX). One of the main selling points of NFV is its promise for better energy efficiency resulting from consolidation of resources as well as their more dynamic utilization. However, there are currently no studies or implementations which attach values to energy savings that can be expected, which could make it hard for Telecommunication Service Providers (TSPs) to make investment decisions. In this paper, we utilize Bell Labs' GWATT tool to estimate the energy savings that could result from the three main NFV use cases Virtualized Evolved Packet Core (VEPC), Virtualized Customer Premises Equipment (VCPE) and Virtualized Radio Access Network (VRAN). We determine that the part of the mobile network with the highest energy utilization prospects is the Evolved Packet Core (EPC) where virtualization of functions leads to a 22% reduction in energy consumption and a 32% enhancement in energy efficiency.
China Mobile recently published @cite_16 their experiences in deploying a C-RAN. One of the tests was performed on their 2G and 3G networks, where it was observed that by centralizing the RAN, power consumption could be reduced by 41%. DROP @cite_5 is a middleware platform which was originally aimed at creating software routers on top of commodity servers, while hiding the complexity of its modular architecture from control-plane applications and system administrators. It has recently been extended to DROPv2 @cite_3 , which focuses on implementing more efficient power management in SDN. The idea of DROPv2 is to periodically calculate, based on total traffic, the number of forwarding elements that should be put into a standby state, and how the remaining ones should share the available traffic. Neither of these research activities specifically looks at virtualized NFs.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_3" ], "mid": [ "2090228175", "", "2017623265" ], "abstract": [ "In this paper, our main objective is to explore how Linux Software Routers (SRs) can deploy advanced and flexible paradigms for supporting novel control-plane functionalities and applications. To this end, we investigate and study a new open-source software (SW) framework: the Distributed SW ROuter Project (DROP), which aims to develop and enable a novel cooperative middleware for distributed IP-router control and management. DROP allows logical network nodes to be built through the aggregation of multiple SRs based on the Linux operating system and commodity hardware, which can be devoted to packet forwarding or control operations. In addition to the original ForCES design, DROP aims to extend router distribution and aggregation concepts by moving them to a network-wide scale to enable and support value-added services for next-generation networks.", "", "Future Internet devices and network infrastructures need to be significantly more energy-efficient, scalable, and flexible in order to realize the extremely virtualized and optimized ICT network infrastructures. In this respect, this article presents a recent extension of an open source software framework, the Distributed Router Open Platform (DROP), to enable a novel distributed paradigm for network function virtualization through the integration of software defined network and information technology (IT) platforms, as well as for the control management of flexible IP router platforms. To answer the need for increased energy efficiency of the network function virtualization paradigms, DROP includes sophisticated power management mechanisms, which are exposed by means of the green abstraction layer (GAL), under consideration for standardization in ETSI. Moreover, the DROP architecture has been specifically designed to act as ?glue? among a large number of the most promising and well-known open source software projects, providing network dataor control-plane capabilities." ] }
1512.00168
2952983874
Over the years, different meanings have been associated to the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred. In this paper we aim to fill the void in literature, by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength”, which we believe will reveal useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes. The scope of this paper is restricted to non-transactional semantics, i.e., those that apply to single storage object operations. As such, our paper complements the existing surveys done in the context of transactional, database consistency semantics.
Several subsequent works developed uniform frameworks and notations to represent consistency semantics defined in literature @cite_52 @cite_58 @cite_43 . Most notably, Steinke.Nutt:04 provide a unified theory of consistency models for shared memory systems based on the composition of a few fundamental declarative properties. In turn, this declarative and compositional approach outlines a partial ordering over consistency semantics. Similarly, a treatment of the composability of consistency conditions has been carried out in @cite_118 .
{ "cite_N": [ "@cite_43", "@cite_118", "@cite_58", "@cite_52" ], "mid": [ "1556900653", "2116569164", "1490923611", "2165365531" ], "abstract": [ "Memory accesses form a well understood paradigm for developing concurrent applications. Distributed Shared Memory systems, enable the creation of distributed applications based on shared memory accesses. A DSM system is characterized by the memory model it uses to perform memory accesses. There have been numerous models proposed over the years, and until recently there has been little attempt to provide a common formal framework to study their properties. DSM models can be roughly classified into synchronized (those which, in addition to usual read-write accesses, use special synchronization operations) and non-synchronized. In this paper we focus on the formalization of synchronized DSM models, extending a previous work on non-synchronized models.", "This paper presents a formal framework, which is based on the notion of a serialization set , that enables to compose a set of consistency conditions into a more restrictive one. To exemplify the utility of this framework, a list of very basic consistency conditions is identified, and it is shown that various compositions of the basic conditions yield some of the most commonly used consistency conditions, such as sequential consistency, causal memory , and Pipelined RAM. The paper also lists several applications that can benefit from even weaker semantics than Pipelined RAM that can be expressed as a composition of a small subset of the basic conditions.", "A shared memory built on top of a distributed system constitutes a distributed shared memory (DSM). If a lot of protocols implementing DSMS in various contexts have been proposed, no set of homogeneous definitions has been given for the many semantics offered by these implementations. This paper provides a suite of such definitions for atomic, sequential, causal, PRAM and a few others consistency criteria. These definitions are based on a unique framework : a parallel computation is defined as a partial order on the set of read and write operations invoked by processes, and a consistency criterion is defined as a constraint on this partial order. Such an approach provides a simple classification of consistency criteria, from the more to the less constrained one. This paper can also be considered as a survey on consistency criteria for DSMS.", "The authors present a data-race-free-1, shared-memory model that unifies four earlier models: weak ordering, release consistency (with sequentially consistent special operations), the VAX memory model, and data-race-free-0. Data-race-free-1 unifies the models of weak ordering, release consistency, the VAX, and data-race-free-0 by formalizing the intuition that if programs synchronize explicitly and correctly, then sequential consistency can be guaranteed with high performance in a manner that retains the advantages of each of the four models. Data-race-free-1 expresses the programmer's interface more explicitly and formally than weak ordering and the VAX, and allows an implementation not allowed by weak ordering, release consistency, or data-race-free-0. The implementation proposal for data-race-free-1 differs from earlier implementations by permitting the execution of all synchronization operations of a processor even while previous data operations of the processor are in progress. 
To ensure sequential consistency, two synchronizing processors exchange information to delay later operations of the second processor that conflict with an incomplete data operation of the first processor." ] }
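A partial ordering over consistency semantics, as outlined above, is naturally represented as a directed graph in which an edge points from a stronger model to a weaker one it implies; "at least as strong as" then becomes reachability. The handful of relations in the sketch below is illustrative, not the full ordering from any cited work.

```python
import networkx as nx

# Toy sketch: a "strength" partial order over consistency semantics, with an
# edge from a stronger model to a weaker one it implies (illustrative subset).
order = nx.DiGraph([
    ("linearizability", "sequential"),
    ("sequential", "causal"),
    ("causal", "PRAM"),
    ("PRAM", "eventual"),
])

def at_least_as_strong(a: str, b: str) -> bool:
    return a == b or nx.has_path(order, a, b)

print(at_least_as_strong("linearizability", "PRAM"))  # True
print(at_least_as_strong("causal", "sequential"))     # False: weaker, not stronger
```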
1512.00168
2952983874
Over the years, different meanings have been associated to the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred. In this paper we aim to fill the void in literature, by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength”, which we believe will reveal useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes. The scope of this paper is restricted to non-transactional semantics, i.e., those that apply to single storage object operations. As such, our paper complements the existing surveys done in the context of transactional, database consistency semantics.
* Distributed storage systems In more recent years, researchers have proposed categorizations of the most influential consistency models for modern storage systems. Namely, Tanenbaum.Steen:07 proposed the client-centric versus data-centric classification, while Bermbach.Kuhlenkamp:13 expanded this classification and provided descriptions of the most popular models. While practical and instrumental in attaining a good understanding of the consistency spectrum, these works offer informal treatments based on a simple dichotomous categorization, which falls short of capturing some important consistency semantics. With this survey we aim to improve on these works, as we adopt a formal model based on first-order logic predicates and graph theory. We derived this model from the one proposed in @cite_38 , which we modified and expanded in order to enable the definition of a wider and richer range of consistency semantics. Moreover, whereas Burckhardt:14 focuses mostly on session and eventual semantics, we cover broader ground, including more than 50 different consistency semantics.
{ "cite_N": [ "@cite_38" ], "mid": [ "2046654145" ], "abstract": [ "In globally distributed systems, shared state is never perfect. When communication is neither fast nor reliable, we cannot achieve strong consistency, low latency, and availability at the same time. Unfortunately, abandoning strong consistency has wide ramifications. Eventual consistency, though attractive from a performance viewpoint, is challenging to understand and reason about, both for system architects and programmers. To provide robust abstractions, we need not just systems, but also principles: we need the ability to articulate what a consistency protocol is supposed to guarantee, and the ability to prove or refute such claims.In this tutorial, we carefully examine both the what and the how of consistency in distributed systems. First, we deconstruct consistency into individual guarantees relating the data type, the conflict resolution, and the ordering, and then reassemble them into a hierarchy of consistency models that starts with linearizability and gradually descends into sequential, causal, eventual, and quiescent consistency. Second, we present a collection of consistency protocols that illustrate common techniques, and include templates for implementations of arbitrary replicated data types that are fully available under partitions. Third, we demonstrate that our formalizations serve their purpose of enabling proofs and refutations, by proving both positive results (the correctness of the protocols) and a negative result (a version of the CAP theorem for sequential consistency)." ] }
1512.00168
2952983874
Over the years, different meanings have been associated to the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred. In this paper we aim to fill the void in literature, by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength”, which we believe will reveal useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes. The scope of this paper is restricted to non-transactional semantics, i.e., those that apply to single storage object operations. As such, our paper complements the existing surveys done in the context of transactional, database consistency semantics. ar X iv :1 51 2. 00 16 8v 4 [ cs .D C ] 1 2 A pr 2 01 6
* Measuring consistency A concurrent research trend has been striving to design uniform and rigorous frameworks to measure consistency in shared memory systems and, more recently, in distributed storage systems. Namely, while some works have proposed metrics to assess consistency @cite_121 @cite_7 , others have devised methods to verify, given an execution, whether it satisfies a certain consistency model @cite_59 @cite_103 @cite_120 . Finally, due to the loose definitions and opaque implementations of eventual consistency, recent research has tried to quantify its inherent anomalies as perceived from a client-side perspective @cite_26 @cite_2 @cite_70 @cite_130 @cite_33 . In this regard, our work provides a more comprehensive and structured overview of the metrics that can be adopted to evaluate consistency.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_7", "@cite_70", "@cite_120", "@cite_121", "@cite_130", "@cite_59", "@cite_2", "@cite_103" ], "mid": [ "2057583008", "2087536500", "2141255371", "2055908644", "2296013760", "2115806861", "1530638770", "2070356883", "2131492297", "1986463648" ], "abstract": [ "A new class of data storage systems, called NoSQL (Not Only SQL), have emerged to complement traditional database systems, with rejection of general ACID transactions as one common feature. Different platforms, and indeed different primitives within one NoSQL platform, can offer various consistency properties, from Eventual Consistency to single-entity ACID. For the platform provider, weaker consistency should allow better availability, lower latency, and other benefits. This paper investigates what consumers observe of the consistency and performance properties of various offerings. We find that many platforms seem in practice to offer more consistency than they promise; we also find cases where the platform offers consumers a choice between stronger and weaker consistency, but there is no observed benefit from accepting weaker consistency properties.", "Replicated storage for large Web services faces a trade-off between stronger forms of consistency and higher performance properties. Stronger consistency prevents anomalies, i.e., unexpected behavior visible to users, and reduces programming complexity. There is much recent work on improving the performance properties of systems with stronger consistency, yet the flip-side of this trade-off remains elusively hard to quantify. To the best of our knowledge, no prior work does so for a large, production Web service. We use measurement and analysis of requests to Facebook's TAO system to quantify how often anomalies happen in practice, i.e., when results returned by eventually consistent TAO differ from what is allowed by stronger consistency models. For instance, our analysis shows that 0.0004 of reads to vertices would return different results in a linearizable system. This in turn gives insight into the benefits of stronger consistency; 0.0004 of reads are potential anomalies that a linearizable system would prevent. We directly study local consistency models---i.e., those we can analyze using requests to a sample of objects---and use the relationships between models to infer bounds on the others. We also describe a practical consistency monitoring system that tracks p-consistency, a new consistency metric ideally suited for health monitoring. In addition, we give insight into the increased programming complexity of weaker consistency by discussing bugs our monitoring uncovered, and anti-patterns we teach developers to avoid.", "Eventually consistent storage systems give up the ACID semantics of conventional databases in order to gain better scalability, higher availability, and lower latency. A side-effect of this design decision is that application developers must deal with stale or out of order data. As a result, substantial intellectual effort has been devoted to studying the behavior of eventually consistent systems, in particular finding quantitative answers to the questions \"how eventual\" and \"how consistent\"?", "Over the last few years, Cloud storage systems and so-called NoSQL datastores have found widespread adoption. In contrast to traditional databases, these storage systems typically sacrifice consistency in favor of latency and availability as mandated by the CAP theorem, so that they only guarantee eventual consistency. 
Existing approaches to benchmark these storage systems typically omit the consistency dimension or did not investigate eventuality of consistency guarantees. In this work we present a novel approach to benchmark staleness in distributed datastores and use the approach to evaluate Amazon's Simple Storage Service (S3). We report on our unexpected findings.", "Many key-value stores have recently been proposed as platforms for always-on, globally-distributed, Internet-scale applications. To meet their needs, these stores often sacrifice consistency for availability. Yet, few tools exist that can verify the consistency actually provided by a key-value store, and quantify the violations if any. How can a user check if a storage system meets its promise of consistency? If a system only promises eventual consistency, how bad is it really? In this paper, we present efficient algorithms that help answer these questions. By analyzing the trace of interactions between the client machines and a key-value store, the algorithms can report whether the trace is safe, regular, or atomic, and if not, how many violations there are in the trace. We run these algorithms on traces of our eventually consistent key-value store called Pahoehoe and find few or no violations, thus showing that it often behaves like a strongly consistent system during our tests.", "The tradeoffs between consistency, performance, and availability are well understood. Traditionally, however, designers of replicated systems have been forced to choose from either strong consistency guarantees or none at all. This paper explores the semantic space between traditional strong and optimistic consistency models for replicated services. We argue that an important class of applications can tolerate relaxed consistency, but benefit from bounding the maximum rate of inconsistent access in an application-specific manner. Thus, we develop a conit-based continuous consistency model to capture the consistency spectrum using three application-independent metrics, numerical error, order error, and staleness. We then present the design and implementation of TACT, a middleware layer that enforces arbitrary consistency bounds among replicas using these metrics. We argue that the TACT consistency model can simultaneously achieve the often conflicting goals of generality and practicality by describing how a broad range of applications can express their consistency semantics using TACT and by demonstrating that application-independent algorithms can efficiently enforce target consistency levels. Finally, we show that three replicated applications running across the Internet demonstrate significant semantic and performance benefits from using our framework.", "Large-scale key-value storage systems sacrifice consistency in the interest of dependability (i.e., partition-tolerance and availability), as well as performance (i.e., latency). Such systems provide eventual consistency, which--to this point--has been difficult to quantify in real systems. Given the many implementations and deployments of eventually-consistent systems (e.g., NoSQL systems), attempts have been made to measure this consistency empirically, but they suffer from important drawbacks. For example, state-of-the art consistency benchmarks exercise the system only in restricted ways and disrupt the workload, which limits their accuracy. 
In this paper, we take the position that a consistency benchmark should paint a comprehensive picture of the relationship between the storage system under consideration, the workload, the pattern of failures, and the consistency observed by clients. To illustrate our point, we first survey prior efforts to quantify eventual consistency. We then present a benchmarking technique that overcomes the shortcomings of existing techniques to measure the consistency observed by clients as they execute the workload under consideration. This method is versatile and minimally disruptive to the system under test. As a proof of concept, we demonstrate this tool on Cassandra.", "The problem of concurrent accesses to registers by asynchronous components is considered. A set of axioms about the values in a register during concurrent accesses is proposed. It is shown that if these axioms are met by a register, then concurrent accesses to it may be viewed as nonconcurrent, thus making it possible to analyze asynchronous algorithms without elaborate timing analysis of operations. These axioms are shown, in a certain sense, to be the weakest. Motivation for this work came from analyzing low-level hardware components in a VLSI chip which concurrently accesses a flip-flop.", "Inspired by Google's BigTable, a variety of scalable, semi-structured, weak-semantic table stores have been developed and optimized for different priorities such as query speed, ingest speed, availability, and interactivity. As these systems mature, performance benchmarking will advance from measuring the rate of simple workloads to understanding and debugging the performance of advanced features such as ingest speed-up techniques and function shipping filters from client to servers. This paper describes YCSB++, a set of extensions to the Yahoo! Cloud Serving Benchmark (YCSB) to improve performance understanding and debugging of these advanced features. YCSB++ includes multi-tester coordination for increased load and eventual consistency measurement, multi-phase workloads to quantify the consequences of work deferment and the benefits of anticipatory configuration optimization such as B-tree pre-splitting or bulk loading, and abstract APIs for explicit incorporation of advanced features in benchmark tests. To enhance performance debugging, we customized an existing cluster monitoring tool to gather the internal statistics of YCSB++, table stores, system services like HDFS, and operating systems, and to offer easy post-test correlation and reporting of performance behaviors. YCSB++ features are illustrated in case studies of two BigTable-like table stores, Apache HBase and Accumulo, developed to emphasize high ingest rates and finegrained security.", "Sequential consistency is the most widely used correctness condition for multiprocessor memory systems. This paper studies the problem of testing shared-memory multiprocessors to determine if they are indeed providing a sequentially consistent memory. It presents the first formal study of this problem, which has applications to testing new memory system designs and realizations, providing run-time fault tolerance, and detecting bugs in parallel programs. A series of results are presented for testing an execution of a shared memory under various scenarios, comparing sequential consistency with linearizability, another well-known correctness condition. 
Linearizability imposes additional restrictions on the shared memory, beyond that of sequential consistency; these restrictions are shown to be useful in testing such memories." ] }
1512.00168
2952983874
Over the years, different meanings have been associated with the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred. In this paper we aim to fill the void in the literature, by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. We further provide a partial order among different consistency predicates, ordering them by their semantic “strength”, which we believe will prove useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes. The scope of this paper is restricted to non-transactional semantics, i.e., those that apply to single storage object operations. As such, our paper complements the existing surveys done in the context of transactional, database consistency semantics.
* Transactional systems Readers interested in pursuing a formal treatment of the most important consistency models for transactional storage systems may refer to @cite_127 . Similarly, other works by Harris et al. (2010) and by Dziuma et al. (2014) complement this survey with overviews of models specifically designed for transactional memory systems. Finally, some recent research @cite_79 @cite_64 adopted variants of the same framework used in this paper to propose axiomatic specifications of transactional consistency models.
{ "cite_N": [ "@cite_79", "@cite_127", "@cite_64" ], "mid": [ "2131460078", "2101027550", "2293215590" ], "abstract": [ "When distributed clients query or update shared data, eventual consistency can provide better availability than strong consistency models. However, programming and implementing such systems can be difficult unless we establish a reasonable consistency model, i.e. some minimal guarantees that programmers can understand and systems can provide effectively. To this end, we propose a novel consistency model based on eventually consistent transactions. Unlike serializable transactions, eventually consistent transactions are ordered by two order relations (visibility and arbitration) rather than a single order relation. To demonstrate that eventually consistent transactions can be effectively implemented, we establish a handful of simple operational rules for managing replicas, versions and updates, based on graphs called revision diagrams. We prove that these rules are sufficient to guarantee correct implementation of eventually consistent transactions. Finally, we present two operational models (single server and server pool) of systems that provide eventually consistent transactions.", "", "Modern distributed systems often rely on databases that achieve scalability by providing only weak guarantees about the consistency of distributed transaction processing. The semantics of programs interacting with such a database depends on its consistency model, defining these guarantees. Unfortunately, consistency models are usually stated informally or using disparate formalisms, often tied to the database internals. To deal with this problem, we propose a framework for specifying a variety of consistency models for transactions uniformly and declaratively. Our specifications are given in the style of weak memory models, using structures of events and relations on them. The specifications are particularly concise because they exploit the property of atomic visibility guaranteed by many consistency models: either all or none of the updates by a transaction can be visible to another one. This allows the specifications to abstract from individual events inside transactions. We illustrate the use of our framework by specifying several existing consistency models. To validate our specifications, we prove that they are equivalent to alternative operational ones, given as algorithms closer to actual implementations. Our work provides a rigorous foundation for developing the metatheory of the novel form of concurrency arising in weakly consistent large-scale databases." ] }
1512.00196
2262804946
Flexible business processes can often be modelled more easily using a declarative rather than a procedural modelling approach. Process mining aims at automating the discovery of business process models. Existing declarative process mining approaches either suffer from performance issues with real-life event logs or limit their expressiveness to a specific set of constraint types. Lately, with RelationalXES, a relational database architecture for storing event log data has been introduced. In this technical report, we introduce a mining approach that directly works on relational event data by querying the log with conventional SQL. We provide a list of SQL queries for discovering a set of commonly used and mined process constraints.
In an event log, every process instance corresponds to a sequence (trace) of recorded entries, namely, events. We require that events contain an explicit reference to the enacted activity. For discovering resource-related aspects, we additionally require an explicit reference to the operating resource. Both conditions are commonly satisfied by real-world event logs @cite_13 .
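As a concrete illustration of the SQL-based querying idea, the sketch below checks one commonly mined constraint (response: every occurrence of activity A is eventually followed by B within the same case) against a toy log. The single-table schema and the query are assumptions for illustration only; they are not the RelationalXES schema or the report's actual query list.

```python
# Minimal sketch (hypothetical schema): an event log stored as one relational
# table, queried with plain SQL to find violations of response(A, B).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE log (
    case_id  TEXT,    -- process instance the event belongs to
    position INTEGER, -- ordering of the event within its case
    activity TEXT,    -- enacted activity (required reference)
    resource TEXT     -- operating resource (needed for resource aspects)
)""")
conn.executemany("INSERT INTO log VALUES (?,?,?,?)", [
    ("c1", 1, "A", "alice"), ("c1", 2, "B", "bob"),
    ("c2", 1, "A", "alice"), ("c2", 2, "C", "carol"),
])

# Cases violating response(A, B): an A with no later B in the same case.
violations = conn.execute("""
    SELECT DISTINCT e.case_id
    FROM log e
    WHERE e.activity = 'A'
      AND NOT EXISTS (SELECT 1 FROM log f
                      WHERE f.case_id = e.case_id
                        AND f.position > e.position
                        AND f.activity = 'B')
""").fetchall()
print(violations)  # [('c2',)]
```

The same NOT EXISTS pattern extends to other ordering constraints by adjusting the position comparison and the activity filters.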
{ "cite_N": [ "@cite_13" ], "mid": [ "1490658580" ], "abstract": [ "The first to cover this missing link between data mining and process modeling, this book provides real-world techniques for monitoring and analyzing processes in real time. It is a powerful new tool destined to play a key role in business process management." ] }
1512.00307
2185170185
An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation, and numerical weather prediction. Using a recently developed Frontal-Delaunay-refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is also optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.
Recent general circulation models, specifically the Model for Prediction Across Scales (MPAS) @cite_4 @cite_6 @cite_2 , have focused on the use of locally refined, unstructured polygonal grids to achieve multi-resolution representations of both atmospheric and oceanic dynamics. The current study explores the development of an algorithm for the generation of high resolution spheroidal Delaunay triangulations and associated Voronoi diagrams appropriate for such climate models. Note that in addition to the high-resolution requirements, the sensitivity of the underlying numerical schemes employed in general circulation models necessitates the construction of meshes with near-optimal mesh quality.
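To illustrate what a user-defined mesh-sizing function can look like, here is a hypothetical example; the functional form and all parameter values are assumptions for illustration, not taken from the paper.

```python
# Hypothetical mesh-sizing function: target edge lengths on the sphere that
# refine smoothly around a latitude band of interest (e.g. storm tracks).
import numpy as np

def sizing_km(lat_deg, h_coarse=120.0, h_fine=30.0, lat0=45.0, width=10.0):
    """Target edge length in km as a smooth function of latitude (degrees)."""
    bump = np.exp(-((lat_deg - lat0) / width) ** 2)  # Gaussian refinement band
    return h_coarse - (h_coarse - h_fine) * bump

lats = np.array([0.0, 30.0, 45.0, 60.0, 90.0])
print(sizing_km(lats))  # ~[120, 110, 30, 110, 120]: finest near 45 degrees
```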
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_2" ], "mid": [ "2076690115", "2020699102", "2156219937" ], "abstract": [ "AbstractThe formulation of a fully compressible nonhydrostatic atmospheric model called the Model for Prediction Across Scales–Atmosphere (MPAS-A) is described. The solver is discretized using centroidal Voronoi meshes and a C-grid staggering of the prognostic variables, and it incorporates a split-explicit time-integration technique used in many existing nonhydrostatic meso- and cloud-scale models. MPAS can be applied to the globe, over limited areas of the globe, and on Cartesian planes. The Voronoi meshes are unstructured grids that permit variable horizontal resolution. These meshes allow for applications beyond uniform-resolution NWP and climate prediction, in particular allowing embedded high-resolution regions to be used for regional NWP and regional climate applications. The rationales for aspects of this formulation are discussed, and results from tests for nonhydrostatic flows on Cartesian planes and for large-scale flow on the sphere are presented. The results indicate that the solver is as acc...", "Abstract A new global ocean model (MPAS-Ocean) capable of using enhanced resolution in selected regions of the ocean domain is described and evaluated. Three simulations using different grids are presented. The first grid is a uniform high-resolution (15 km) mesh; the second grid has similarly high resolution (15 km) in the North Atlantic (NA), but coarse resolution elsewhere; the third grid is a variable resolution grid like the second but with higher resolution (7.5 km) in the NA. Simulation results are compared to observed sea-surface height (SSH), SSH variance and selected current transports. In general, the simulations produce subtropical and subpolar gyres with peak SSH amplitudes too strong by between 0.25 and 0.40 m. The mesoscale eddy activity within the NA is, in general, well simulated in both structure and amplitude. The uniform high-resolution simulation produces reasonable representations of mesoscale activity throughout the global ocean. Simulations using the second variable-resolution grid are essentially identical to the uniform case within the NA region. The third case with higher NA resolution produces a simulation that agrees somewhat better in the NA with observed SSH, SSH variance and transports than the two 15 km simulations. The actual throughput, including I O, for the x1-15 km simulation is the same as the structured grid Parallel Ocean Program ocean model in its standard high-resolution 0.1° configuration. Our overall conclusion is that this ocean model is a viable candidate for multi-resolution simulations of the global ocean system on climate-change time scales.", "During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the develop of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. 
Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear, shallow-water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead." ] }
1512.00103
2185720331
We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label] where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than language-specific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-of-the-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models are learning "from scratch" in that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.
One important feature of our work is the use of byte inputs. Character-level inputs have been used with some success for tasks like NER @cite_24 , parallel text alignment @cite_21 , and authorship attribution @cite_16 as an effective way to deal with n-gram sparsity while still capturing some aspects of word choice and morphology. Such approaches often combine character and word features and have been especially useful for handling languages with large character sets @cite_17 . However, there is almost no work that explicitly uses bytes -- one exception uses byte n-grams to identify source code authorship @cite_10 -- and nothing, to the best of our knowledge, exploits bytes as a cross-lingual representation of language. Work on multilingual parsing using neural networks that share some subset of the parameters across languages @cite_1 seems to benefit low-resource languages; in contrast, we share all parameters among all languages.
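As a minimal illustration of why bytes make a convenient cross-lingual representation, the sketch below encodes text in two scripts over the same fixed 256-symbol vocabulary; the span annotation at the end mirrors the [start, length, label] format from the abstract and is a hypothetical example, not model output.

```python
# Minimal sketch of byte-level inputs: any unicode text, in any language,
# maps to a sequence over a fixed 256-symbol vocabulary, so a single model
# (and a single embedding table) can serve all languages.
def to_bytes(text: str) -> list[int]:
    """Encode text as UTF-8 byte values in [0, 255]."""
    return list(text.encode("utf-8"))

english = to_bytes("Obama lives in Washington.")
russian = to_bytes("Обама живёт в Вашингтоне.")  # Cyrillic: 2 bytes per letter

print(len(english), english[:5])  # 26 [79, 98, 97, 109, 97]
print(len(russian), russian[:5])  # non-ASCII letters expand to byte pairs

# A hypothetical NER annotation over the byte sequence, not over tokens:
# (start_byte, byte_length, label)
spans = [(0, 5, "PER"), (15, 10, "LOC")]
```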
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_24", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2115755647", "2251324968", "2041614298", "2013568550", "2054126054", "2474646533" ], "abstract": [ "There have been a number of recent papers on aligning parallel texts at the sentence level, e.g., (1991), Gale and Church (to appear), Isabelle (1992), Kay and Rosenschein (to appear), (1992), Warwick-Armstrong and Russell (1990). On clean inputs, such as the Canadian Hansards, these methods have been very successful (at least 96 correct by sentence). Unfortunately, if the input is noisy (due to OCR and or unknown markup conventions), then these methods tend to break down because the noise can make it difficult to find paragraph boundaries, let alone sentences. This paper describes a new program, char_align, that aligns texts at the character level rather than at the sentence paragraph level, based on the cognate approach proposed by", "Training a high-accuracy dependency parser requires a large treebank. However, these are costly and time-consuming to build. We propose a learning method that needs less data, based on the observation that there are underlying shared structures across languages. We exploit cues from a different source language in order to guide the learning process. Our model saves at least half of the annotation effort to reach the same accuracy compared with using the purely supervised method.", "We discuss two named-entity recognition models which use characters and character n-grams either exclusively or as an important part of their data representation. The first model is a character-level HMM with minimal context information, and the second model is a maximum-entropy conditional markov model with substantially richer context features. Our best model achieves an overall F1 of 86.07 on the English test data (92.31 on the development data). This number represents a 25 error reduction over the same model without word-internal (substring) features.", "We present a method for computer-assisted authorship attribution based on character-level n-gram language models. Our approach is based on simple information theoretic principles, and achieves improved performance across a variety of languages without requiring extensive pre-processing or feature selection. To demonstrate the effectiveness and language independence of our approach, we present experimental results on Greek, English, and Chinese data. We show that our approach achieves state of the art performance in each of these cases. In particular, we obtain a 18 accuracy improvement over the best published results for a Greek data set, while using a far simpler technique than previous investigations.", "Source code author identification deals with the task of identifying the most likely author of a computer program, given a set of predefined author candidates. This is usually .based on the analysis of other program samples of undisputed authorship by the same programmer. There are several cases where the application of such a method could be of a major benefit, such as authorship disputes, proof of authorship in court, tracing the source of code left in the system after a cyber attack, etc. We present a new approach, called the SCAP (Source Code Author Profiles) approach, based on byte-level n-gram profiles in order to represent a source code author's style. 
Experiments on data sets of different programming languages (Java or C++) and varying difficulty (6 to 30 candidate authors) demonstrate the effectiveness of the proposed approach. A comparison with a previous source code authorship identification study based on more complicated information shows that the SCAP approach is language independent and that n-gram author profiles are better able to capture the idiosyncrasies of the source code authors. Moreover, the SCAP approach is able to deal surprisingly well with cases where only a limited amount of very short programs per programmer is available for training. It is also demonstrated that the effectiveness of the proposed model is not affected by the absence of comments in the source code, a condition usually met in cyber-crime cases.", "" ] }
1512.00103
2185720331
We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label] where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than language-specific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-of-the-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models are learning "from scratch" in that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.
Recent work has shown that modeling the sequence of characters in each token with an LSTM can more effectively handle rare and unknown words than independent word embeddings @cite_2 @cite_3 . Similarly, language modeling, especially for morphologically complex languages, benefits from a Convolutional Neural Network (CNN) over characters to generate word embeddings @cite_15 . Rather than decompose words into characters, Chitnis and DeNero encode rare words with Huffman codes, allowing a neural translation model to learn something about word subcomponents. In contrast to this line of research, our work has no explicit notion of tokens and operates on bytes rather than characters.
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_2" ], "mid": [ "2951559648", "2951336364", "2949563612" ], "abstract": [ "We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60 fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.", "We present extensions to a continuous-state dependency parsing method that makes it applicable to morphologically rich languages. Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.", "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our \"composed\" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish)." ] }
1512.00177
2185490708
Artificial neural networks are powerful models, which have been widely applied to many aspects of machine translation, such as language modeling and translation modeling. Though notable improvements have been made in these areas, the reordering problem still remains a challenge in statistical machine translation. In this paper, we present a novel neural reordering model that directly models word pairs and alignment. By utilizing LSTM recurrent neural networks, much longer context could be learned for reordering prediction. Experimental results on NIST OpenMT12 Arabic-English and Chinese-English 1000-best rescoring task show that our LSTM neural reordering feature is robust and achieves significant improvements over various baseline systems.
The authors of @cite_15 proposed using a recursive auto-encoder (RAE) to map each phrase pair into continuous vectors and handling reordering problems with a classifier. They also suggested that including both the current and previous phrase pairs to determine the phrase orientations could achieve further improvements in reordering accuracy.
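To make the classification step concrete, the sketch below scores reordering orientations from the vectors of the current and previous phrase pairs; the dimensions, class set, and random weights are placeholders, and the RAE that would produce the vectors is not implemented.

```python
# Hypothetical sketch: a softmax classifier over reordering orientations,
# conditioned on the continuous vectors of the current and previous phrase
# pairs (random stand-in weights; not the paper's trained model).
import numpy as np

rng = np.random.default_rng(1)
dim, n_orient = 8, 3                        # orientations: mono / swap / disc.
W = rng.standard_normal((n_orient, 2 * dim))
b = np.zeros(n_orient)

def orientation_probs(curr_vec, prev_vec):
    z = W @ np.concatenate([curr_vec, prev_vec]) + b
    e = np.exp(z - z.max())                 # numerically stable softmax
    return e / e.sum()

print(orientation_probs(rng.standard_normal(dim), rng.standard_normal(dim)))
```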
{ "cite_N": [ "@cite_15" ], "mid": [ "2137463156" ], "abstract": [ "While lexicalized reordering models have been widely used in phrase-based translation systems, they suffer from three drawbacks: context insensitivity, ambiguity, and sparsity. We propose a neural reordering model that conditions reordering probabilities on the words of both the current and previous phrase pairs. Including the words of previous phrase pairs significantly improves context sensitivity and reduces reordering ambiguity. To alleviate the data sparsity problem, we build one classifier for all phrase pairs, which are represented as continuous space vectors. Experiments on the NIST Chinese-English datasets show that our neural reordering model achieves significant improvements over state-of-the-art lexicalized reordering models." ] }
1511.09080
2258248446
Many exact and approximate solution methods for Markov Decision Processes (MDPs) attempt to exploit structure in the problem and are based on factorization of the value function. Especially multiagent settings, however, are known to suffer from an exponential increase in value component sizes as interactions become denser, meaning that approximation architectures are restricted in the problem sizes and types they can handle. We present an approach to mitigate this limitation for certain types of multiagent systems, exploiting a property that can be thought of as "anonymous influence" in the factored MDP. Anonymous influence summarizes joint variable effects efficiently whenever the explicit representation of variable identity in the problem can be avoided. We show how representational benefits from anonymity translate into computational efficiencies, both for general variable elimination in a factor graph but in particular also for the approximate linear programming solution to factored MDPs. The latter allows to scale linear programming to factored MDPs that were previously unsolvable. Our results are shown for the control of a stochastic disease process over a densely connected graph with 50 nodes and 25 agents.
Many recent algorithms tackle domains with large (structured) state spaces. For exact planning in factored domains, SPUDD exploits a decision diagram-based representation @cite_28 . Monte Carlo tree search (MCTS) has been a popular online approximate planning method to scale to large domains @cite_20 . These methods do not apply to exponential action spaces without further approximations. Ho et al. (2015), for example, evaluated MCTS with three agents for a targeted version of the disease control problem. Recent variants that exploit factorization @cite_19 may be applicable.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_20" ], "mid": [ "1530444831", "329015778", "2097778153" ], "abstract": [ "Recently, structured methods for solving factored Markov decisions processes (MDPs) with large state spaces have been proposed recently to allow dynamic programming to be applied without the need for complete state enumeration. We propose and examine a new value iteration algorithm for MDPs that uses algebraic decision diagrams (ADDs) to represent value functions and policies, assuming an ADD input representation of the MDP. Dynamic programming is implemented via ADD manipulation. We demonstrate our method on a class of large MDPs (up to 63 million states) and show that significant gains can be had when compared to tree-structured representations (with up to a thirty-fold reduction in the number of nodes required to represent optimal value functions).", "Online, sample-based planning algorithms for POMDPs have shown great promise in scaling to problems with large state spaces, but they become intractable for large action and observation spaces. This is particularly problematic in multiagent POMDPs where the action and observation space grows exponentially with the number of agents. To combat this intractability, we propose a novel scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. This approach applies not only in the planning case, but also in the Bayesian reinforcement learning setting. Experimental results show that we are able to provide high quality solutions to large multiagent planning and learning problems.", "We present a reinforcement learning architecture, Dyna-2, that encompasses both sample-based learning and sample-based search, and that generalises across states during both learning and search. We apply Dyna-2 to high performance Computer Go. In this domain the most successful planning methods are based on sample-based search algorithms, such as UCT, in which states are treated individually, and the most successful learning methods are based on temporal-difference learning algorithms, such as Sarsa, in which linear function approximation is used. In both cases, an estimate of the value function is formed, but in the first case it is transient, computed and then discarded after each move, whereas in the second case it is more permanent, slowly accumulating over many moves and games. The idea of Dyna-2 is for the transient planning memory and the permanent learning memory to remain separate, but for both to be based on linear function approximation and both to be updated by Sarsa. To apply Dyna-2 to 9x9 Computer Go, we use a million binary features in the function approximator, based on templates matching small fragments of the board. Using only the transient memory, Dyna-2 performed at least as well as UCT. Using both memories combined, it significantly outperformed UCT. Our program based on Dyna-2 achieved a higher rating on the Computer Go Online Server than any handcrafted or traditional search based program." ] }
1511.09080
2258248446
Many exact and approximate solution methods for Markov Decision Processes (MDPs) attempt to exploit structure in the problem and are based on factorization of the value function. Especially multiagent settings, however, are known to suffer from an exponential increase in value component sizes as interactions become denser, meaning that approximation architectures are restricted in the problem sizes and types they can handle. We present an approach to mitigate this limitation for certain types of multiagent systems, exploiting a property that can be thought of as "anonymous influence" in the factored MDP. Anonymous influence summarizes joint variable effects efficiently whenever the explicit representation of variable identity in the problem can be avoided. We show how representational benefits from anonymity translate into computational efficiencies, both for general variable elimination in a factor graph but in particular also for the approximate linear programming solution to factored MDPs. The latter allows to scale linear programming to factored MDPs that were previously unsolvable. Our results are shown for the control of a stochastic disease process over a densely connected graph with 50 nodes and 25 agents.
Our work is based on earlier contributions of Guestrin (2003) on exploiting factored value functions to scale to large factored action spaces. Similar assumptions can be exploited by inference-based approaches to planning, which have been introduced for MASs where policies are represented as finite state controllers @cite_26 . There are no assumptions about the policy in our approach. The variational framework of Cheng et al. (2013) uses belief propagation (BP) and is exponential in the cluster size of the graph. Their results are shown for 20-node graphs with out-degree 3 and a restricted class of chain graphs. Our method remains exponential in tree-width but exploits anonymous influence in the graph to scale to random graphs with denser connectivity.
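The following toy sketch illustrates the representational saving behind anonymous influence (a hypothetical illustration, not the paper's exact construction): when a factor over k binary neighbour variables depends only on how many of them are active, it collapses from 2^k table entries to k+1.

```python
# Count-based ("anonymous") factor: identity of active neighbours is ignored.
k = 20
rate = 0.1  # hypothetical per-neighbour infection probability

# Explicit representation: one value per joint assignment (2^k entries).
def explicit_factor(assignment):  # assignment: tuple of k bits
    return 1 - (1 - rate) ** sum(assignment)

# Anonymous representation: one value per active-neighbour count (k+1 entries).
count_factor = [1 - (1 - rate) ** n for n in range(k + 1)]

# The two representations agree on every assignment.
a = tuple(int(i % 3 == 0) for i in range(k))
assert abs(explicit_factor(a) - count_factor[sum(a)]) < 1e-12
print(f"table sizes: 2^{k} = {2 ** k} entries vs k+1 = {k + 1}")
```

This is why variable elimination over such factors can avoid enumerating joint neighbour assignments and instead sum over counts.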
{ "cite_N": [ "@cite_26" ], "mid": [ "1769870091" ], "abstract": [ "Multiagent planning has seen much progress with the development of formal models such as Dec-POMDPs. However, the complexity of these models--NEXP-Complete even for two agents-- has limited scalability. We identify certain mild conditions that are sufficient to make multiagent planning amenable to a scalable approximation w.r.t. the number of agents. This is achieved by constructing a graphical model in which likelihood maximization is equivalent to plan optimization. Using the Expectation-Maximization framework for likelihood maximization, we show that the necessary inference can be decomposed into processes that often involve a small subset of agents, thereby facilitating scalability. We derive a global update rule that combines these local inferences to monotonically increase the overall solution quality. Experiments on a large multiagent planning benchmark confirm the benefits of the new approach in terms of runtime and scalability." ] }
1511.09295
2279233619
Recently, Multipath TCP (MPTCP) has been proposed as an alternative transport approach for datacenter networks. MPTCP provides the ability to split a flow into multiple paths thus providing better performance and resilience to failures. Usually, MPTCP is combined with flow-based Equal-Cost Multi-Path Routing (ECMP), which uses random hashing to split the MPTCP subflows over different paths. However, random hashing can be suboptimal as distinct subflows may end up using the same paths, while other available paths remain unutilized. In this paper, we explore an MPTCP-aware SDN controller that facilitates an alternative routing mechanism for the MPTCP subflows. The controller uses packet inspection to provide deterministic subflow assignment to paths. Using the controller, we show that MPTCP can deliver significantly improved performance when connections are not limited by the access links of hosts. To lessen the effect of throughput limitation due to access links, we also investigate the usage of multiple interfaces at the hosts. We demonstrate, using our modification of the MPTCP Linux Kernel, that using multiple subflows per pair of IP addresses can yield improved performance in multi-interface settings.
@cite_6 demonstrated that the use of MPTCP in datacenters can be beneficial in terms of performance and robustness. The authors used flow-based ECMP to route the MPTCP subflows, and proposed a dual-homed variant of the FatTree topology to improve performance. In this paper, we consider an MPTCP-aware SDN approach for routing the subflows, which can exploit the available paths more efficiently. Furthermore, using our modification of the MPTCP Linux Kernel at the hosts, we explore a dual-homed variant of the Jellyfish topology that provides more path diversity than FatTree.
{ "cite_N": [ "@cite_6" ], "mid": [ "2142480021" ], "abstract": [ "The latest large-scale data centers offer higher aggregate bandwidth and robustness by creating multiple paths in the core of the net- work. To utilize this bandwidth requires different flows take different paths, which poses a challenge. In short, a single-path transport seems ill-suited to such networks. We propose using Multipath TCP as a replacement for TCP in such data centers, as it can effectively and seamlessly use available bandwidth, giving improved throughput and better fairness on many topologies. We investigate what causes these benefits, teasing apart the contribution of each of the mechanisms used by MPTCP. Using MPTCP lets us rethink data center networks, with a different mindset as to the relationship between transport protocols, rout- ing and topology. MPTCP enables topologies that single path TCP cannot utilize. As a proof-of-concept, we present a dual-homed variant of the FatTree topology. With MPTCP, this outperforms FatTree for a wide range of workloads, but costs the same. In existing data centers, MPTCP is readily deployable leveraging widely deployed technologies such as ECMP. We have run MPTCP on Amazon EC2 and found that it outperforms TCP by a factor of three when there is path diversity. But the biggest benefits will come when data centers are designed for multipath transports." ] }
1511.09295
2279233619
Recently, Multipath TCP (MPTCP) has been proposed as an alternative transport approach for datacenter networks. MPTCP provides the ability to split a flow into multiple paths thus providing better performance and resilience to failures. Usually, MPTCP is combined with flow-based Equal-Cost Multi-Path Routing (ECMP), which uses random hashing to split the MPTCP subflows over different paths. However, random hashing can be suboptimal as distinct subflows may end up using the same paths, while other available paths remain unutilized. In this paper, we explore an MPTCP-aware SDN controller that facilitates an alternative routing mechanism for the MPTCP subflows. The controller uses packet inspection to provide deterministic subflow assignment to paths. Using the controller, we show that MPTCP can deliver significantly improved performance when connections are not limited by the access links of hosts. To lessen the effect of throughput limitation due to access links, we also investigate the usage of multiple interfaces at the hosts. We demonstrate, using our modification of the MPTCP Linux Kernel, that using multiple subflows per pair of IP addresses can yield improved performance in multi-interface settings.
There have been a few recent studies demonstrating the merits of integrating MPTCP with SDN. The most relevant to our work is the one by @cite_17 , who proposed an SDN controller that distributes subflows belonging to the same MPTCP connection over distinct disjoint paths. That controller requires storing the network topology for each MPTCP connection, and cannot exploit settings where hosts have multiple interfaces. Specifically, it treats each subflow between two hosts the same, irrespective of the subflow's source-destination IP addresses (interfaces). Our controller does not store a topology for each MPTCP connection, but only a set of paths, which is a significantly smaller subset of the topology. More importantly, our controller is designed for multi-interface settings, where we show, using our MPTCP Linux Kernel modification, that the creation of multiple subflows per pair of IP addresses can significantly improve performance. Further, @cite_17 considered only small toy topologies where host access links were not bottlenecks. Here, we focus on datacenter topologies that have specific structural characteristics, and explore the usage of multiple interfaces in cases where access links can be bottlenecks.
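A minimal sketch of the deterministic placement idea follows; it illustrates round-robin assignment over a small set of precomputed paths per host pair, not the controller's actual implementation, and all names are hypothetical.

```python
# Deterministic subflow-to-path assignment (illustration only): the i-th
# subflow of a connection goes to the i-th precomputed path, instead of
# relying on random ECMP hashing that may map subflows to the same path.
from collections import defaultdict

class SubflowPlacer:
    def __init__(self, paths_by_pair):
        # paths_by_pair: {(src_host, dst_host): [path0, path1, ...]}
        self.paths = paths_by_pair
        self.next_index = defaultdict(int)  # per-connection subflow counter

    def place(self, conn_id, src, dst):
        """Return the path for the next subflow of connection conn_id."""
        candidates = self.paths[(src, dst)]
        i = self.next_index[conn_id]
        self.next_index[conn_id] += 1
        return candidates[i % len(candidates)]  # round-robin over paths

placer = SubflowPlacer({("h1", "h2"): ["via-s1", "via-s2", "via-s3"]})
for _ in range(4):
    print(placer.place("conn-42", "h1", "h2"))
# via-s1, via-s2, via-s3, via-s1 -- distinct paths until they are exhausted
```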
{ "cite_N": [ "@cite_17" ], "mid": [ "2067655170" ], "abstract": [ "This paper focuses on evaluating the use of MPTCP to forward sub flows in Open Flow networks. MPTCP is a network protocol designed to forward sub flows through disjointed paths. Modern networks commonly use Equal-Cost Multipath Protocol (ECMP) to split flows through distinct paths. However, even with ECMP enabled, sub flows may be forwarded through the same path. MPTCP improves the multipath routing by setting sub flows to be forwarded through distinct paths. As a consequence, the amount of sub flows must be considered to evaluate the network throughput. In this paper, we design Multiflow to use MPTCP in Open Flow networks. Our proposal is to improve the throughput in shared bottlenecks by forwarding sub flows from a same MPTCP connection through multiple paths. We validate our approach in a test bed where shared bottlenecks occur in the link at the endpoints. The Multiflow improvement of the network performance is evaluated in experiments about resilience and end-to-end throughput." ] }
1511.09295
2279233619
Recently, Multipath TCP (MPTCP) has been proposed as an alternative transport approach for datacenter networks. MPTCP provides the ability to split a flow into multiple paths thus providing better performance and resilience to failures. Usually, MPTCP is combined with flow-based Equal-Cost Multi-Path Routing (ECMP), which uses random hashing to split the MPTCP subflows over different paths. However, random hashing can be suboptimal as distinct subflows may end up using the same paths, while other available paths remain unutilized. In this paper, we explore an MPTCP-aware SDN controller that facilitates an alternative routing mechanism for the MPTCP subflows. The controller uses packet inspection to provide deterministic subflow assignment to paths. Using the controller, we show that MPTCP can deliver significantly improved performance when connections are not limited by the access links of hosts. To lessen the effect of throughput limitation due to access links, we also investigate the usage of multiple interfaces at the hosts. We demonstrate, using our modification of the MPTCP Linux Kernel, that using multiple subflows per pair of IP addresses can yield improved performance in multi-interface settings.
@cite_2 study the use of multiple host interfaces in datacenters. The authors do not use multihoming (hosts being connected to more than one switch) because it increases the overall costs due to extra switches. Instead, the authors proposed GRIN, a datacenter architecture based on VL2 @cite_8 , which interconnects free interfaces between hosts. Intuitively, a host in GRIN can utilize another host's interface when that interface is not being utilized, thus opportunistically increasing its bandwidth. By design, GRIN provides significant performance improvements only when a considerable percentage of hosts are not heavily utilizing their interfaces. On the other hand, we demonstrate that our dual-homed solution (DH-Jellyfish) can achieve significant performance improvements when all hosts are fully utilizing their interfaces. Note that our approach can deliver the full merits of multihoming without increasing costs (same number of switches).
{ "cite_N": [ "@cite_8", "@cite_2" ], "mid": [ "2157614013", "1441884673" ], "abstract": [ "To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2's design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2's implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds - sustaining a rate that is 94 of the maximum possible.", "Various full bisection designs have been proposed for datacenter networks. They are provisioned for the worst case in which every server sends flat out and there is no congestion anywhere in the network. However, these topologies are prone to considerable underutilisation in the average case encountered in practice. To utilise spare capacity we propose GRIN, a simple, cheap and easily deployable solution that simply wires up any free ports datacenter servers may have. GRIN allows each server to use up to a maximum amount of bandwidth dependent on the number of available ports and the distribution of idle uplinks in the network. Our evaluation found significant benefits for bandwidth-hungry applications running over our testbed, as well as on 1000 EC2 instances. GRIN can be used to augment any existing datacenter network, with a small initial effort and no additional maintenance costs." ] }
1511.09422
2953128684
We present an information-theoretic framework for solving global black-box optimization problems that also have black-box constraints. Of particular interest to us is to efficiently solve problems with decoupled constraints, in which subsets of the objective and constraint functions may be evaluated independently. For example, when the objective is evaluated on a CPU and the constraints are evaluated independently on a GPU. These problems require an acquisition function that can be separated into the contributions of the individual function evaluations. We develop one such acquisition function and call it Predictive Entropy Search with Constraints (PESC). PESC is an approximation to the expected information gain criterion and it compares favorably to alternative approaches based on improvement in several synthetic and real-world problems. In addition to this, we consider problems with a mix of functions that are fast and slow to evaluate. These problems require balancing the amount of time spent in the meta-computation of PESC and in the actual evaluation of the target objective. We take a bounded rationality approach and develop a partial update for PESC which trades off accuracy against speed. We then propose a method for adaptively switching between the partial and full updates for PESC. This allows us to interpolate between versions of PESC that are efficient in terms of function evaluations and those that are efficient in terms of wall-clock time. Overall, we demonstrate that PESC is an effective algorithm that provides a promising direction towards a unified solution for constrained Bayesian optimization.
In the constrained setting, the incumbent @math can be defined as the minimum expected objective value subject to all the constraints being satisfied at the corresponding location. However, we can never guarantee that all the constraints will be satisfied when they are only observed through noisy evaluations. To circumvent this problem, @cite_10 define @math as the lowest expected objective value subject to all the constraints being satisfied with posterior probability larger than the threshold @math , where @math is a small number such as @math . However, this value for @math still cannot be computed when there is no point in the search space that satisfies the constraints with posterior probability higher than @math , for example, because of a lack of data for the constraints. In this case, the authors of @cite_10 change the original EIC acquisition function and ignore the factor @math in that expression. This allows them to search only for a feasible location, ignoring the objective @math entirely and just optimizing the constraint satisfaction probability. However, this can lead to inefficient optimization in practice because the data collected for the objective @math is not used to make optimal decisions.
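The following toy computation illustrates this incumbent definition and the feasibility fallback with made-up posterior quantities; it is a worked example of the decision rule only, not PESC itself.

```python
# Toy example: the incumbent is the lowest posterior objective mean among
# candidates whose constraint-satisfaction probability exceeds 1 - delta;
# with no qualifying candidate, only feasibility probability is optimized.
import numpy as np

mu_f = np.array([0.3, -0.1, 0.8, 0.2])            # posterior mean of f
p_feasible = np.array([0.99, 0.40, 0.997, 0.995]) # Pr(all constraints hold)
delta = 0.05

feasible = p_feasible >= 1 - delta
if feasible.any():
    eta = mu_f[feasible].min()   # best "probably feasible" objective value
    print("incumbent eta =", eta)  # 0.2 (-0.1 excluded: only 40% feasible)
else:
    best = p_feasible.argmax()   # fall back to searching for feasibility
    print("no incumbent; maximize feasibility at candidate", best)
```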
{ "cite_N": [ "@cite_10" ], "mid": [ "1701825639" ], "abstract": [ "Recent work on Bayesian optimization has shown its effectiveness in global optimization of difficult black-box objective functions. Many real-world optimization problems of interest also have constraints which are unknown a priori. In this paper, we study Bayesian optimization for constrained problems in the general case that noise may be present in the constraint functions, and the objective and constraints may be evaluated independently. We provide motivating practical examples, and present a general framework to solve such problems. We demonstrate the effectiveness of our approach on optimizing the performance of online latent Dirichlet allocation subject to topic sparsity constraints, tuning a neural network given test-time memory constraints, and optimizing Hamiltonian Monte Carlo to achieve maximal effectiveness in a fixed time, subject to passing standard convergence diagnostics." ] }
1511.08956
2185514017
Many classification approaches first represent a test sample using the training samples of all the classes. This collaborative representation is then used to label the test sample. It was a common belief that sparseness of the representation is the key to success for this classification scheme. However, more recently, it has been claimed that it is the collaboration and not the sparseness that makes the scheme effective. This claim is attractive as it allows to relinquish the computationally expensive sparsity constraint over the representation. In this paper, we first extend the analysis supporting this claim and then show that sparseness explicitly contributes to improved classification, hence it should not be completely ignored for computational gains. Inspired by this result, we augment a dense collaborative representation with a sparse representation and propose an efficient classification method that capitalizes on the resulting representation. The augmented representation and the classification method work together meticulously to achieve higher accuracy and lower computational time compared to state-of-the-art collaborative representation based classification approaches. Experiments on benchmark face, object and action databases show the efficacy of our approach.
The baseline scheme used by popular approaches (e.g., @cite_30 , @cite_16 , @cite_5 , @cite_1 , @cite_21 , @cite_8 ) that exploit collaborative representation in multi-class classification performs three key steps: (1) optimizing @math 's representation over a given dictionary, (2) computing class-specific reconstruction residuals @math , @math , and (3) labeling @math using the computed residuals. In step (2), @math comprises the coefficients of @math corresponding to the @math class only. Hence, in step (3), @math is assigned the label of the class that results in the smallest reconstruction residual. We can treat different approaches as special cases of this baseline scheme.
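A compact sketch of these three steps, instantiated with a regularized least-squares representation in the style of CRC_RLS, might look as follows; dimensions and data are arbitrary placeholders.

```python
# Minimal numpy sketch of the three-step baseline: dictionary columns are
# training samples grouped by class; the representation is ridge regression.
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """D: (d, n) dictionary, labels: (n,) class per column, y: (d,) test."""
    # Step 1: collaborative representation over the whole dictionary.
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    # Steps 2-3: class-wise reconstruction residuals; label by the smallest.
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c                 # coefficients of class c only
        residuals[c] = np.linalg.norm(y - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 40))
labels = np.repeat(np.arange(4), 10)           # 4 classes, 10 samples each
y = D[:, 5] + 0.01 * rng.standard_normal(50)   # noisy copy of a class-0 sample
print(crc_classify(D, labels, y))              # expected: 0
```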
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_21", "@cite_1", "@cite_5", "@cite_16" ], "mid": [ "2129812935", "", "129703402", "2080296254", "2132467081", "2144583419" ], "abstract": [ "We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.", "", "Empirically, we find that, despite the class-specific features owned by the objects appearing in the images, the objects from different categories usually share some common patterns, which do not contribute to the discrimination of them. Concentrating on this observation and under the general dictionary learning (DL) framework, we propose a novel method to explicitly learn a common pattern pool (the commonality) and class-specific dictionaries (the particularity) for classification. We call our method DL-COPAR, which can learn the most compact and most discriminative class-specific dictionaries used for classification. The proposed DL-COPAR is extensively evaluated both on synthetic data and on benchmark image databases in comparison with existing DL-based classification methods. The experimental results demonstrate that DL-COPAR achieves very promising performances in various applications, such as face recognition, handwritten digit recognition, scene classification and object recognition.", "Compressive Sensing has become one of the standard methods of face recognition within the literature. We show, however, that the sparsity assumption which underpins much of this work is not supported by the data. This lack of sparsity in the data means that compressive sensing approach cannot be guaranteed to recover the exact signal, and therefore that sparse approximations may not deliver the robustness or performance desired. In this vein we show that a simple £2 approach to the face recognition problem is not only significantly more accurate than the state-of-the-art approach, it is also more robust, and much faster. 
These results are demonstrated on the publicly available YaleB and AR face datasets but have implications for the application of Compressive Sensing more broadly.", "As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored by most literature. However, is it really the l 1 -norm sparsity that improves the FR accuracy? This paper devotes to analyze the working mechanism of SRC, and indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). The extensive experiments clearly show that CRC_RLS has very competitive classification results, while it has significantly less complexity than SRC.", "The success of sparse representation based classification (SRC) has largely boosted the research of sparsity based face recognition in recent years. A prevailing view is that the sparsity based face recognition performs well only when the training images have been carefully controlled and the number of samples per class is sufficiently large. This paper challenges the prevailing view by proposing a prototype plus variation'' representation model for sparsity based face recognition. Based on the new model, a Superposed SRC (SSRC), in which the dictionary is assembled by the class centroids and the sample-to-centroid differences, leads to a substantial improvement on SRC. The experiments results on AR, FERET and FRGC databases validate that, if the proposed prototype plus variation representation model is applied, sparse coding plays a crucial role in face recognition, and performs well even when the dictionary bases are collected under uncontrolled conditions and only a single sample per classes is available." ] }
1511.08956
2185514017
Many classification approaches first represent a test sample using the training samples of all the classes. This collaborative representation is then used to label the test sample. It was a common belief that sparseness of the representation is the key to success for this classification scheme. However, more recently, it has been claimed that it is the collaboration and not the sparseness that makes the scheme effective. This claim is attractive as it allows relinquishing the computationally expensive sparsity constraint over the representation. In this paper, we first extend the analysis supporting this claim and then show that sparseness explicitly contributes to improved classification, hence it should not be completely ignored for computational gains. Inspired by this result, we augment a dense collaborative representation with a sparse representation and propose an efficient classification method that capitalizes on the resulting representation. The augmented representation and the classification method work together meticulously to achieve higher accuracy and lower computational time compared to state-of-the-art collaborative representation based classification approaches. Experiments on benchmark face, object and action databases show the efficacy of our approach.
In SRC @cite_30 , @math is used in Eq. ), which encourages the computed representation @math to be sparse. In Superposed-SRC (SSRC), @cite_16 modified the residual computation step of SRC: for SSRC, @math consists of class centroids and sample-to-centroid differences, and while computing the residuals, SSRC keeps the coefficients of @math corresponding to the sample-to-centroid differences fixed in each @math . The CR-based classifier proposed by @cite_5 uses @math and solves Eq. ) with the Regularized Least Squares (RLS) method, and is hence denoted CRC-RLS. @cite_1 used @math in Eq. ) and solved it as a standard least squares problem for face recognition. Chi and Porikli @cite_31 combined a CR-based classifier linearly with a nearest subspace classifier @cite_7 for improved classification performance.
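Since CRC-RLS admits a closed-form representation, its projection matrix can be precomputed once and reused for every test sample. The sketch below illustrates this; the variable names are hypothetical, and the residuals are scaled by the class-coefficient norm, following the regularized residual proposed for CRC-RLS in @cite_5 :

```python
import numpy as np

def crc_rls(Phi, class_idx, Y, lam=1e-3):
    """CRC-RLS: precompute the projection once, then classify many samples.

    Phi : (d, n) training matrix; Y : (d, m) test samples as columns.
    """
    # Closed-form RLS projection: alpha = (Phi^T Phi + lam I)^{-1} Phi^T y.
    P = np.linalg.inv(Phi.T @ Phi + lam * np.eye(Phi.shape[1])) @ Phi.T
    labels = []
    for y in Y.T:
        alpha = P @ y  # representation of one test sample
        # Regularized residual: reconstruction error of class c, scaled by
        # the norm of the class-c coefficients.
        r = [np.linalg.norm(y - Phi[:, idx] @ alpha[idx])
             / (np.linalg.norm(alpha[idx]) + 1e-12) for idx in class_idx]
        labels.append(int(np.argmin(r)))
    return labels
```

Precomputing `P` is what gives CRC-RLS its significantly lower complexity compared to solving a sparse coding problem per test sample.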
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_1", "@cite_5", "@cite_31", "@cite_16" ], "mid": [ "2129812935", "2125874614", "2080296254", "2132467081", "", "2144583419" ], "abstract": [ "We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.", "Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. 
We validate the use of subspaces constructed in this fashion within the context of face recognition.", "Compressive Sensing has become one of the standard methods of face recognition within the literature. We show, however, that the sparsity assumption which underpins much of this work is not supported by the data. This lack of sparsity in the data means that compressive sensing approach cannot be guaranteed to recover the exact signal, and therefore that sparse approximations may not deliver the robustness or performance desired. In this vein we show that a simple £2 approach to the face recognition problem is not only significantly more accurate than the state-of-the-art approach, it is also more robust, and much faster. These results are demonstrated on the publicly available YaleB and AR face datasets but have implications for the application of Compressive Sensing more broadly.", "As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored by most literature. However, is it really the l 1 -norm sparsity that improves the FR accuracy? This paper devotes to analyze the working mechanism of SRC, and indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). The extensive experiments clearly show that CRC_RLS has very competitive classification results, while it has significantly less complexity than SRC.", "", "The success of sparse representation based classification (SRC) has largely boosted the research of sparsity based face recognition in recent years. A prevailing view is that the sparsity based face recognition performs well only when the training images have been carefully controlled and the number of samples per class is sufficiently large. This paper challenges the prevailing view by proposing a prototype plus variation'' representation model for sparsity based face recognition. Based on the new model, a Superposed SRC (SSRC), in which the dictionary is assembled by the class centroids and the sample-to-centroid differences, leads to a substantial improvement on SRC. The experiments results on AR, FERET and FRGC databases validate that, if the proposed prototype plus variation representation model is applied, sparse coding plays a crucial role in face recognition, and performs well even when the dictionary bases are collected under uncontrolled conditions and only a single sample per classes is available." ] }
1511.08956
2185514017
Many classification approaches first represent a test sample using the training samples of all the classes. This collaborative representation is then used to label the test sample. It was a common belief that sparseness of the representation is the key to success for this classification scheme. However, more recently, it has been claimed that it is the collaboration and not the sparseness that makes the scheme effective. This claim is attractive as it allows relinquishing the computationally expensive sparsity constraint over the representation. In this paper, we first extend the analysis supporting this claim and then show that sparseness explicitly contributes to improved classification, hence it should not be completely ignored for computational gains. Inspired by this result, we augment a dense collaborative representation with a sparse representation and propose an efficient classification method that capitalizes on the resulting representation. The augmented representation and the classification method work together meticulously to achieve higher accuracy and lower computational time compared to state-of-the-art collaborative representation based classification approaches. Experiments on benchmark face, object and action databases show the efficacy of our approach.
Collaborative representation is also commonly used by discriminative dictionary learning techniques, e.g. @cite_21 , @cite_8 . Although such approaches learn a dictionary instead of directly using the training data as @math , the explicit correspondence between the learned dictionary atoms and the class labels allows them to exploit the CR-based classification scheme. For instance, the Global Classifier (GC) used by Kong and Wang @cite_21 is the same variant of Algorithm  that is used by SSRC @cite_16 . The dictionary learned by the DL-COPAR algorithm @cite_21 consists of COmmon atoms shared by all classes and PARticular atoms specific to each class. The particular atoms behave like the class centroids, whereas the common atoms act like the sample-to-centroid differences in SSRC. Similarly, the GC used in Fisher Discriminant Dictionary Learning (FDDL) @cite_8 is a direct variant of CRC-RLS @cite_5 .
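To make the analogy concrete, a residual computation for such a common-plus-particular dictionary might look as follows. This is a sketch under the assumption that the learned dictionary is already partitioned into one shared block and one block per class; all names are hypothetical:

```python
import numpy as np

def residuals_common_particular(D_common, D_parts, y, alpha_common, alpha_parts):
    """Class residuals for a dictionary split into common and particular atoms.

    D_common    : (d, k0) atoms shared by all classes.
    D_parts     : list of (d, k_c) class-specific atom blocks.
    alpha_*     : the corresponding blocks of the computed representation.

    The common-atom contribution is kept fixed in every residual, mirroring
    how SSRC fixes the sample-to-centroid difference coefficients.
    """
    shared = D_common @ alpha_common  # contribution of the shared atoms
    return [np.linalg.norm(y - shared - D_c @ a_c)
            for D_c, a_c in zip(D_parts, alpha_parts)]
```

The class label would then be the index of the smallest returned residual, exactly as in the baseline scheme above.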
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_16", "@cite_8" ], "mid": [ "2132467081", "129703402", "2144583419", "" ], "abstract": [ "As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored by most literature. However, is it really the l 1 -norm sparsity that improves the FR accuracy? This paper devotes to analyze the working mechanism of SRC, and indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). The extensive experiments clearly show that CRC_RLS has very competitive classification results, while it has significantly less complexity than SRC.", "Empirically, we find that, despite the class-specific features owned by the objects appearing in the images, the objects from different categories usually share some common patterns, which do not contribute to the discrimination of them. Concentrating on this observation and under the general dictionary learning (DL) framework, we propose a novel method to explicitly learn a common pattern pool (the commonality) and class-specific dictionaries (the particularity) for classification. We call our method DL-COPAR, which can learn the most compact and most discriminative class-specific dictionaries used for classification. The proposed DL-COPAR is extensively evaluated both on synthetic data and on benchmark image databases in comparison with existing DL-based classification methods. The experimental results demonstrate that DL-COPAR achieves very promising performances in various applications, such as face recognition, handwritten digit recognition, scene classification and object recognition.", "The success of sparse representation based classification (SRC) has largely boosted the research of sparsity based face recognition in recent years. A prevailing view is that the sparsity based face recognition performs well only when the training images have been carefully controlled and the number of samples per class is sufficiently large. This paper challenges the prevailing view by proposing a prototype plus variation'' representation model for sparsity based face recognition. Based on the new model, a Superposed SRC (SSRC), in which the dictionary is assembled by the class centroids and the sample-to-centroid differences, leads to a substantial improvement on SRC. The experiments results on AR, FERET and FRGC databases validate that, if the proposed prototype plus variation representation model is applied, sparse coding plays a crucial role in face recognition, and performs well even when the dictionary bases are collected under uncontrolled conditions and only a single sample per classes is available.", "" ] }
1511.08956
2185514017
Many classification approaches first represent a test sample using the training samples of all the classes. This collaborative representation is then used to label the test sample. It was a common belief that sparseness of the representation is the key to success for this classification scheme. However, more recently, it has been claimed that it is the collaboration and not the sparseness that makes the scheme effective. This claim is attractive as it allows relinquishing the computationally expensive sparsity constraint over the representation. In this paper, we first extend the analysis supporting this claim and then show that sparseness explicitly contributes to improved classification, hence it should not be completely ignored for computational gains. Inspired by this result, we augment a dense collaborative representation with a sparse representation and propose an efficient classification method that capitalizes on the resulting representation. The augmented representation and the classification method work together meticulously to achieve higher accuracy and lower computational time compared to state-of-the-art collaborative representation based classification approaches. Experiments on benchmark face, object and action databases show the efficacy of our approach.
Another interesting direction in discriminative dictionary learning, e.g. Label Consistent K-SVD (LC-KSVD) @cite_4 and Discriminative K-SVD (D-KSVD) @cite_17 , is also related to CR-based classification. Such techniques learn collaborative dictionaries from the training data without enforcing a strict correspondence between the class labels and the dictionary atoms. Due to the lack of such correspondence, the label of a test sample is chosen by maximizing a weighted sum of the coefficients of @math , where the @math -dimensional @math weight-vectors are also learned during dictionary optimization. Among these weight-vectors, the @math vector generally assigns large weights to the coefficients of @math corresponding to the dictionary atoms used commonly in representing the training data of the @math class. Since these discriminative dictionary learning approaches classify a test sample using its representation over a collaborative set of features learned directly from the training data, they are considered instances of CR-based classification.
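The resulting decision rule is simply a learned linear map applied to the representation. A minimal sketch, assuming `W` (the stacked weight-vectors) and `alpha` are produced by the dictionary-learning stage, is:

```python
import numpy as np

def label_by_weighted_sum(W, alpha):
    """Pick the class whose learned weight-vector gives the largest
    weighted sum of the representation coefficients.

    W     : (C, K) matrix whose c-th row is the weight-vector of class c.
    alpha : (K,) representation of the test sample over the learned dictionary.
    """
    scores = W @ alpha  # one weighted sum of coefficients per class
    return int(np.argmax(scores))
```

In contrast to the residual-based rule of the earlier approaches, the decision here is a maximization over learned scores rather than a minimization over reconstruction errors.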
{ "cite_N": [ "@cite_4", "@cite_17" ], "mid": [ "1963932623", "2027805700" ], "abstract": [ "A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called \"discriminative sparse-code error\" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. The incremental dictionary learning algorithm is presented for the situation of limited memory resources. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.", "In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary being considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimal through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition." ] }
1511.08913
2278199290
Multi-object tracking remains challenging due to the frequent occurrence of occlusions and outliers. To handle this problem, we propose an Approximation-Shrink Scheme for sequential optimization. This scheme is realized by introducing an Ambiguity-Clearness Graph to avoid conflicts and maintain sequence independence, as well as a sliding window optimization framework to constrain the size of the state space and guarantee convergence. Based on this window-wise framework, the states of targets are clustered in a self-organizing manner. Moreover, we show that traditional online and batch tracking methods can be embraced by the window-wise framework. Experiments indicate that, with only a small window, the optimization performance can be much better than that of online methods and approach that of batch methods.
Different from past tracking methods @cite_8 @cite_11 , tracking-by-detection (TBD) reconstructs the trajectories of targets by associating detections provided by object detectors. Most researchers adopt the TBD framework to design their MOT algorithms, which can be categorized into online and batch approaches.
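As a toy illustration of the association step at the heart of TBD, the sketch below matches existing tracks to detections in the next frame with the Hungarian algorithm over an IoU-based cost. Real trackers add motion models and track initiation/termination on top of this, and all names here are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def associate(tracks, detections, min_iou=0.3):
    """Match track boxes to detection boxes; low-overlap pairs are rejected."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - min_iou]
```

Online methods apply such an association greedily frame by frame, while batch methods optimize the associations jointly over a whole sequence (or, as above, over a sliding window).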
{ "cite_N": [ "@cite_11", "@cite_8" ], "mid": [ "2150440166", "2127923214" ], "abstract": [ "The problem of associating data with targets in a cluttered multi-target environment is discussed and applied to passive sonar tracking. The probabilistic data association (PDA) method, which is based on computing the posterior probability of each candidate measurement found in a validation gate, assumes that only one real target is present and all other measurements are Poisson-distributed clutter. In this paper, a new theoretical result is presented: the joint probabilistic data association (JPDA) algorithm, in which joint posterior association probabilities are computed for multiple targets (or multiple discrete interfering sources) in Poisson clutter. The algorithm is applied to a passive sonar tracking problem with multiple sensors and targets, in which a target is not fully observable from a single sensor. Targets are modeled with four geographic states, two or more acoustic states, and realistic (i.e., low) probabilities of detection at each sample time. A simulation result is presented for two heavily interfering targets illustrating the dramatic tracking improvements obtained by estimating the targets' states using joint association probabilities.", "An algorithm for tracking multiple targets in a cluttered enviroment is developed. The algorithm is capable of initiating tracks, accounting for false or missing reports, and processing sets of dependent reports. As each measurement is received, probabilities are calculated for the hypotheses that the measurement came from previously known targets in a target file, or from a new target, or that the measurement is false. Target states are estimated from each such data-association hypothesis using a Kalman filter. As more measurements are received, the probabilities of joint hypotheses are calculated recursively using all available information such as density of unknown targets, density of false targets, probability of detection, and location uncertainty. This branching technique allows correlation of a measurement with its source based on subsequent, as well as previous, data. To keep the number of hypotheses reasonable, unlikely hypotheses are eliminated and hypotheses with similar target estimates are combined. To minimize computational requirements, the entire set of targets and measurements is divided into clusters that are solved independently. In an illustrative example of aircraft tracking, the algorithm successfully tracks targets over a wide range of conditions." ] }
1511.08971
2272823194
Real world complex networks are scale free and possess meso-scale properties like core-periphery and community structure. We study the evolution of the core over time in real world networks. This paper proposes evolving models for both unweighted and weighted scale free networks having local and global core-periphery as well as community structure. The network evolves using topological growth, self growth, and a weight distribution function. To validate the correctness of the proposed models, we use the K-shell and S-shell decomposition methods. Simulation results show that the generated unweighted networks follow a power law degree distribution with a drooping head and heavy tail. Similarly, the generated weighted networks follow degree, strength, and edge-weight power law distributions. We further study other properties of complex networks, such as the clustering coefficient, nearest neighbor degree, and strength-degree correlation.
In 1999, Barabási and Albert studied the dynamics of growing networks, where new nodes arrive and make connections with existing nodes. They empirically observed that most real-world networks follow a power-law degree distribution, and proposed a preferential attachment model based on this phenomenon, which results in a scale-free network having a few hubs @cite_4 . This work added a new dimension to the study of complex networks.
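For reference, the preferential attachment mechanism can be reproduced in a few lines; this is a sketch of the standard construction (networkx also provides it directly as `nx.barabasi_albert_graph`):

```python
import random
import networkx as nx

def barabasi_albert(n, m):
    """Grow a scale-free network: each new node attaches m edges to
    existing nodes with probability proportional to their degree."""
    G = nx.complete_graph(m + 1)  # small seed network
    # Pool listing each node once per unit of degree; sampling uniformly
    # from it implements degree-proportional (preferential) attachment.
    pool = [v for v, d in G.degree() for _ in range(d)]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(pool))
        for t in targets:
            G.add_edge(new, t)
            pool.extend([new, t])  # both endpoints gained one degree
    return G

# Hubs emerge: for large n the degree distribution follows a power law.
G = barabasi_albert(1000, 2)
```

The self-reinforcing sampling from the degree-weighted pool is exactly the rich-get-richer dynamic that produces the few highly connected hubs noted above.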
{ "cite_N": [ "@cite_4" ], "mid": [ "2008620264" ], "abstract": [ "Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems." ] }