aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1508.06708 | 2949812103 | This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and a 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machine where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6M dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating that the network has learned a high-level embedding of body orientation and pose configuration. | Rodríguez @cite_26 represents the score function between word labels and images as the dot-product between the word-label feature and an image embedding, and trains a structured SVM (SSVM) to learn the weights that map the bag-of-words image features to the image embedding. Dhungel et al. @cite_6 use structured learning and deep networks to segment mammograms. First, a network is trained to generate a unary potential function. Next, a linear SSVM score function is trained on the output of the deep network, as well as other potential functions. Osadchy et al. @cite_12 apply structured learning and a CNN to face detection and face pose estimation. The CNN was trained to map the face image to a manually-designed face pose space. A per-sample cost function is defined with only one global minimum so that the ground-truth pose has minimum energy.
In contrast to @cite_6 @cite_26 @cite_12 , we learn the feature embedding and score prediction jointly within a maximum-margin framework. | {
"cite_N": [
"@cite_26",
"@cite_12",
"@cite_6"
],
"mid": [
"2091526670",
"2616465717",
"1546431092"
],
"abstract": [
"A system and method for comparing a text image and a character string are provided. The method includes embedding a character string into a vectorial space by extracting a set of features from the character string and generating a character string representation based on the extracted features, such as a spatial pyramid bag of characters (SPBOC) representation. A text image is embedded into a vectorial space by extracting a set of features from the text image and generating a text image representation based on the text image extracted features. A compatibility between the text image representation and the character string representation is computed, which includes computing a function of the text image representation and character string representation.",
"We describe a novel method for simultaneously detecting faces and estimating their pose in real time. The method employs a convolutional network to map images of faces to points on a low-dimensional manifold parametrized by pose, and images of non-faces to points far away from that manifold. Given an image, detecting a face and estimating its pose is viewed as minimizing an energy function with respect to the face/non-face binary variable and the continuous pose parameters. The system is trained to minimize a loss function that drives correct combinations of labels and pose to be associated with lower energy values than incorrect ones. The system is designed to handle a very large range of poses without retraining. The performance of the system, tested on three standard data sets---for frontal views, rotated faces, and profiles---is comparable to previous systems that are designed to handle a single one of these data sets. We show that a system trained simultaneously for detection and pose estimation is more accurate on both tasks than similar systems trained for each task separately.",
"In this paper, we present a novel method for the segmentation of breast masses from mammograms exploring structured and deep learning. Specifically, using structured support vector machine (SSVM), we formulate a model that combines different types of potential functions, including one that classifies image regions using deep learning. Our main goal with this work is to show the accuracy and efficiency improvements that these relatively new techniques can provide for the segmentation of breast masses from mammograms. We also propose an easily reproducible quantitative analysis to assess the performance of breast mass segmentation methodologies based on widely accepted accuracy and running time measurements on public datasets, which will facilitate further comparisons for this segmentation problem. In particular, we use two publicly available datasets (DDSM-BCRP and INbreast) and propose the computation of the running time taken for the methodology to produce a mass segmentation given an input image and the use of the Dice index to quantitatively measure the segmentation accuracy. For both databases, we show that our proposed methodology produces competitive results in terms of accuracy and running time."
]
} |
1508.06184 | 2949227810 | Community-based question answering (CQA) platforms are crowd-sourced services for sharing user expertise on various topics, from mechanical repairs to parenting. While they naturally build in an online social network infrastructure, they serve a very different purpose from Facebook-like social networks, where users "hang out" with their friends and tend to share more personal information. It is thus unclear how privacy concerns, and their correlation with user behavior in an online social network, translate to a CQA platform. This study analyzes one year of recorded traces from a mature CQA platform to understand the association between users' privacy concerns, as manifested by their account settings, and their activity in the platform. The results show that privacy preference is correlated with behavior in the community in terms of engagement, retention, accomplishments, and deviance from the norm. We find that privacy-concerned users make higher qualitative and quantitative contributions, show higher retention, report more abuses, have a higher perception of answer quality, and have larger social circles. However, at the same time, these users also exhibit more deviant behavior than users with public profiles. | Community-based Question Answering has attracted much research interest from diverse communities such as web science, HCI, and information retrieval. We divide research on CQA into four categories: the content perspective, the user perspective, the system perspective, and the social network perspective. Content perspective research focuses on various aspects of questions and answers, such as the answerability of questions @cite_1 @cite_30 , question classification (e.g., factual or conversational) @cite_36 @cite_7 , and the quality of questions @cite_10 @cite_29 and answers @cite_37 @cite_5 . @cite_12 investigate the influence of gender, age, education level, and topic on the sentiments of questions and answers. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_7",
"@cite_36",
"@cite_29",
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2126908276",
"2037858832",
"2147144521",
"2126104150",
"1969085038",
"2280386769",
"2057415299",
"2116875384",
"2145456657"
],
"abstract": [
"Synchronous social Q&A systems exist on the Web and in the enterprise to connect people with questions to people with answers in real-time. In such systems, askers' desire for quick answers is in tension with costs associated with interrupting numerous candidate answerers per question. Supporting users of synchronous social Q&A systems at various points in the question lifecycle (from conception to answer) helps askers make informed decisions about the likelihood of question success and helps answerers face fewer interruptions. For example, predicting that a question will not be well answered may lead the asker to rephrase or retract the question. Similarly, predicting that an answer is not forthcoming during the dialog can prompt system behaviors such as finding other answerers to join the conversation. As another example, predictions of asker satisfaction can be assigned to completed conversations and used for later retrieval. In this paper, we use data from an instant-messaging-based synchronous social Q&A service deployed to an online community of over two thousand users to study the prediction of: (i) whether a question will be answered, (ii) the number of candidate answerers that the question will be sent to, and (iii) whether the asker will be satisfied by the answer received. Predictions are made at many points of the question lifecycle (e.g., when the question is entered, when the answerer is located, halfway through the asker-answerer dialog, etc.). The findings from our study show that we can learn capable models for these tasks using a broad range of features derived from user profiles, system interactions, question setting, and the dialog between asker and answerer. Our research can lead to more sophisticated and more useful real-time Q&A support.",
"The quality of user-generated content varies drastically from excellent to abuse and spam. As the availability of such content increases, the task of identifying high-quality content in sites based on user contributions -- social media sites -- becomes increasingly important. Social media in general exhibit a rich variety of information sources: in addition to the content itself, there is a wide array of non-content information available, such as links between items and explicit quality ratings from members of the community. In this paper we investigate methods for exploiting such community feedback to automatically identify high-quality content. As a test case, we focus on Yahoo! Answers, a large community question answering portal that is particularly rich in the amount and types of content and social interactions available in it. We introduce a general classification framework for combining the evidence from different sources of information, that can be tuned automatically for a given social media type and quality definition. In particular, for the community question answering domain, we show that our system is able to separate high-quality items from the rest with an accuracy close to that of humans.",
"Social question and answer (Q&A) Web sites field a remarkable variety of questions: while one user seeks highly technical information, another looks to start a social exchange. Prior work in the field has adopted informal taxonomies of question types as a mechanism for interpreting user behavior and community outcomes. In this work, we contribute a formal taxonomy of question types to deepen our understanding of the nature and intent of questions that are asked online. Our taxonomy is grounded in Aristotelian rhetorical theory, and complemented by contributions of leading twentieth century rhetorical theorists. This taxonomy offers a way to differentiate between similar-sounding questions, while remaining flexible enough to encompass the wide range of questions asked online. To ground the taxonomy in reality, we code questions drawn from three popular social Q&A sites, and report on the distributions of several objective and subjective measures.",
"Tens of thousands of questions are asked and answered every day on social question and answer (Q&A) Web sites such as Yahoo Answers. While these sites generate an enormous volume of searchable data, the problem of determining which questions and answers are archival quality has grown. One major component of this problem is the prevalence of conversational questions, identified both by Q&A sites and academic literature as questions that are intended simply to start discussion. For example, a conversational question such as \"do you believe in evolution?\" might successfully engage users in discussion, but probably will not yield a useful web page for users searching for information about evolution. Using data from three popular Q&A sites, we confirm that humans can reliably distinguish between these conversational questions and other informational questions, and present evidence that conversational questions typically have much lower potential archival value than informational questions. Further, we explore the use of machine learning techniques to automatically classify questions as conversational or informational, learning in the process about categorical, linguistic, and social differences between different question types. Our algorithms approach human performance, attaining 89.7% classification accuracy in our experiments.",
"At community question answering services, users are usually encouraged to rate questions by votes. The questions with the most votes are then recommended and ranked on top when users browse questions by category. As users are not obligated to rate questions, usually only a small proportion of questions eventually gets rated. Thus, in this paper, we are concerned with learning to recommend questions from user ratings of a limited size. To overcome the data sparsity, we propose to utilize questions without user ratings as well. Further, as there exist certain noises within user ratings (the preference of some users expressed in their ratings diverges from that of the majority of users), we design a new algorithm called 'majority-based perceptron algorithm' which can avoid the influence of noisy instances by emphasizing its learning over data instances from the majority of users. Experimental results from a large collection of real questions confirm the effectiveness of our proposals.",
"All askers who post questions in Community-based Question Answering (CQA) sites such as Yahoo! Answers, Quora or Baidu's Zhidao, expect to receive an answer, and are frustrated when their questions remain unanswered. We propose to provide a type of \"heads up\" to askers by predicting how many answers, if at all, they will get. Giving a preemptive warning to the asker at posting time should reduce the frustration effect and hopefully allow askers to rephrase their questions if needed. To the best of our knowledge, this is the first attempt to predict the actual number of answers, in addition to predicting whether the question will be answered or not. To this effect, we introduce a new prediction model, specifically tailored to hierarchically structured CQA sites. We conducted extensive experiments on a large corpus comprising 1 year of answering activity on Yahoo! Answers, as opposed to a single day in previous studies. These experiments show that the F1 we achieved is 24% better than in previous work, mostly due to the structure built into the novel model.",
"Question answering (QA) helps one go beyond traditional keywords-based querying and retrieve information in more precise form than given by a document or a list of documents. Several community-based QA (CQA) services have emerged allowing information seekers pose their information need as questions and receive answers from their fellow users. A question may receive multiple answers from multiple users and the asker or the community can choose the best answer. While the asker can thus indicate if he was satisfied with the information he received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! Answers as such CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer for a given question based on 13 different criteria. Each answer was rated by five different workers. We then matched their assessments with the actual asker's rating of a given answer. We show that the quality criteria we used faithfully match with asker's perception of a quality answer. We furthered our investigation by extracting various features from questions, answers, and the users who posted them, and training a number of classifiers to select the best answer using those features. We demonstrate a high predictability of our trained models along with the relative merits of each of the features for such prediction. These models support our argument that in case of CQA, contextual information such as a user's profile, can be critical in evaluating and predicting content quality.",
"Users tend to ask and answer questions in community question answering (CQA) services to seek information and share knowledge. A corollary is that myriad of questions and answers appear in CQA service. Accordingly, volumes of studies have been taken to explore the answer quality so as to provide a preliminary screening for better answers. However, to our knowledge, less attention has so far been paid to question quality in CQA. Knowing question quality provides us with finding and recommending good questions together with identifying bad ones which hinder the CQA service. In this paper, we are conducting two studies to investigate the question quality issue. The first study analyzes the factors of question quality and finds that the interaction between askers and topics results in the differences of question quality. Based on this finding, in the second study we propose a Mutual Reinforcement-based Label Propagation (MRLP) algorithm to predict question quality. We experiment with Yahoo! Answers data and the results demonstrate the effectiveness of our algorithm in distinguishing high-quality questions from low-quality ones.",
"Sentiment extraction from online web documents has recently been an active research topic due to its potential use in commercial applications. By sentiment analysis, we refer to the problem of assigning a quantitative positive/negative mood to a short bit of text. Most studies in this area are limited to the identification of sentiments and do not investigate the interplay between sentiments and other factors. In this work, we use a sentiment extraction tool to investigate the influence of factors such as gender, age, education level, the topic at hand, or even the time of the day on sentiments in the context of a large online question answering site. We start our analysis by looking at direct correlations, e.g., we observe more positive sentiments on weekends, very neutral ones in the Science & Mathematics topic, a trend for younger people to express stronger sentiments, or people in military bases to ask the most neutral questions. We then extend this basic analysis by investigating how properties of the (asker, answerer) pair affect the sentiment present in the answer. Among other things, we observe a dependence on the pairing of some inferred attributes estimated by a user's ZIP code. We also show that the best answers differ in their sentiments from other answers, e.g., in the Business & Finance topic, best answers tend to have a more neutral sentiment than other answers. Finally, we report results for the task of predicting the attitude that a question will provoke in answers. We believe that understanding factors influencing the mood of users is not only interesting from a sociological point of view, but also has applications in advertising, recommendation, and search."
]
} |
1508.06184 | 2949227810 | Community-based question answering (CQA) platforms are crowd-sourced services for sharing user expertise on various topics, from mechanical repairs to parenting. While they naturally build in an online social network infrastructure, they serve a very different purpose from Facebook-like social networks, where users "hang out" with their friends and tend to share more personal information. It is thus unclear how privacy concerns, and their correlation with user behavior in an online social network, translate to a CQA platform. This study analyzes one year of recorded traces from a mature CQA platform to understand the association between users' privacy concerns, as manifested by their account settings, and their activity in the platform. The results show that privacy preference is correlated with behavior in the community in terms of engagement, retention, accomplishments, and deviance from the norm. We find that privacy-concerned users make higher qualitative and quantitative contributions, show higher retention, report more abuses, have a higher perception of answer quality, and have larger social circles. However, at the same time, these users also exhibit more deviant behavior than users with public profiles. | User perspective research sheds light on why users contribute content: that is, why users ask questions (askers are failed searchers, in that they use CQA sites when web search fails @cite_14 ) and why they answer questions (e.g., they refrain from answering sensitive questions to avoid being reported for abuse and potentially losing access to the community @cite_17 ). Moreover, @cite_33 explore the factors that influence users' answering behavior (e.g., when users tend to answer and how they choose questions). @cite_40 investigate the truthfulness of users and offer quantitative proof that users post sensitive and accurate information to fulfill specific information needs. | {
"cite_N": [
"@cite_40",
"@cite_14",
"@cite_33",
"@cite_17"
],
"mid": [
"1973882848",
"2019432678",
"1517137109",
"2032599523"
],
"abstract": [
"Internet users notoriously take an assumed identity or masquerade as someone else, for reasons such as financial profit or social benefit. But often the converse is also observed, where people choose to reveal true features of their identity, including deeply intimate details. This work attempts to explore several of the conditions that allow this to happen by analyzing the content generated by these users. We examine multiple social media on the Web, specifically focusing on Yahoo! Answers, encompassing more than a billion answers posted since 2006. Our analysis covers discussions of personal topics such as body measurements and income, and of socially sensitive subjects such as sexual behaviors. We offer quantitative proof that people are aware of the fact that they are posting sensitive information, and yet provide accurate information to fulfill specific information needs. Our analysis further reveals that on community question answering sites, when users are truthful, their expectation of an accurate answer is met.",
"While Web search has become increasingly effective over the last decade, for many users' needs the required answers may be spread across many documents, or may not exist on the Web at all. Yet, many of these needs could be addressed by asking people via popular Community Question Answering (CQA) services, such as Baidu Knows, Quora, or Yahoo! Answers. In this paper, we perform the first large-scale analysis of how searchers become askers. For this, we study the logs of a major web search engine to trace the transformation of a large number of failed searches into questions posted on a popular CQA site. Specifically, we analyze the characteristics of the queries, and of the patterns of search behavior that precede posting a question; the relationship between the content of the attempted queries and of the posted questions; and the subsequent actions the user performs on the CQA site. Our work develops novel insights into searcher intent and behavior that lead to asking questions to the community, providing a foundation for more effective integration of automated web search and social information seeking.",
"A key functionality in Collaborative Question Answering (CQA) systems is the assignment of the questions from information seekers to the potential answerers. An attractive solution is to automatically recommend the questions to the potential answerers with expertise or interest in the question topic. However, previous work has largely ignored a key problem in question recommendation - namely, whether the potential answerer is likely to accept and answer the recommended questions in a timely manner. This paper explores the contextual factors that influence the answerer behavior in a large, popular CQA system, with the goal to inform the construction of question routing and recommendation systems. Specifically, we consider when users tend to answer questions in a large-scale CQA system, and how answerers tend to choose the questions to answer. Our results over a dataset of more than 1 million questions drawn from a real CQA system could help develop more realistic evaluation methods for question recommendation, and inform the design of future question recommender systems.",
"Posing a question to an online question and answer community does not guarantee a response. Significant prior work has explored and identified members' motivations for contributing to communities of collective action (e.g., Yahoo! Answers); in contrast it is not well understood why members choose to not answer a question they have already read. To explore this issue, we surveyed 135 active members of Yahoo! Answers. We show that top and regular contributors experience the same reasons to not answer a question: subject nature and composition of the question; perception of how the questioner will receive, interpret and react to their response; and a belief that their response will lose its meaning and get lost in the crowd if too many responses have already been given. Informed by our results, we discuss opportunities to improve the efficacy of the question and answer process, and to encourage greater contributions through improved design."
]
} |
1508.06184 | 2949227810 | Community-based question answering (CQA) platforms are crowd-sourced services for sharing user expertise on various topics, from mechanical repairs to parenting. While they naturally build in an online social network infrastructure, they serve a very different purpose from Facebook-like social networks, where users "hang out" with their friends and tend to share more personal information. It is thus unclear how privacy concerns, and their correlation with user behavior in an online social network, translate to a CQA platform. This study analyzes one year of recorded traces from a mature CQA platform to understand the association between users' privacy concerns, as manifested by their account settings, and their activity in the platform. The results show that privacy preference is correlated with behavior in the community in terms of engagement, retention, accomplishments, and deviance from the norm. We find that privacy-concerned users make higher qualitative and quantitative contributions, show higher retention, report more abuses, have a higher perception of answer quality, and have larger social circles. However, at the same time, these users also exhibit more deviant behavior than users with public profiles. | System perspective research develops techniques and tools to improve platform usability. It includes routing questions to expert users @cite_25 @cite_35 , extracting factual answers from QA archives @cite_24 , and reusing the repository of past answers to answer new open questions @cite_31 . @cite_22 derive "tips" (self-contained bits of non-obvious advice) from Yahoo! Answers to address "how-to" queries. Social network perspective research attempts to understand the interplay between users' social connections and Q&A activities, for example by analyzing the social network of Quora @cite_8 and by using social network properties and contribution behavior to detect content abusers @cite_16 . | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_8",
"@cite_24",
"@cite_31",
"@cite_16",
"@cite_25"
],
"mid": [
"2171258484",
"1991872311",
"1730818938",
"2126776599",
"2151280665",
"2197211043",
"2164491644"
],
"abstract": [
"What makes a good question recommendation system for community question-answering sites? First, to maintain the health of the ecosystem, it needs to be designed around answerers, rather than exclusively for askers. Next, it needs to scale to many questions and users, and be fast enough to route a newly-posted question to potential answerers within the few minutes before the asker's patience runs out. It also needs to show each answerer questions that are relevant to his or her interests. We have designed and built such a system for Yahoo! Answers, but realized, when testing it with live users, that it was not enough. We found that those drawing-board requirements fail to capture user's interests. The feature that they really missed was diversity. In other words, showing them just the main topics they had previously expressed interest in was simply too dull. Adding the spice of topics slightly outside the core of their past activities significantly improved engagement. We conducted a large-scale online experiment in production in Yahoo! Answers that showed that recommendations driven by relevance alone perform worse than a control group without question recommendations, which is the current behavior. However, an algorithm promoting both diversity and freshness improved the number of answers by 17%, daily session length by 10%, and had a significant positive impact on peripheral activities such as voting.",
"We investigate the problem of mining \"tips\" from Yahoo! Answers and displaying those tips in response to related web queries. Here, a \"tip\" is a short, concrete and self-contained bit of non-obvious advice such as \"To zest a lime if you don't have a zester : use a cheese grater.\" First, we estimate the volume of web queries with \"how-to\" intent, which could be potentially addressed by a tip. Second, we analyze how to detect such queries automatically without solely relying on literal \"how to *\" patterns. Third, we describe how to derive potential tips automatically from Yahoo! Answers, and we develop machine-learning techniques to remove low-quality tips. Finally, we discuss how to match web queries with \"how-to\" intent to tips. We evaluate both the quality of these direct displays as well as the size of the query volume that can be addressed by serving tips.",
"Efforts such as Wikipedia have shown the ability of user communities to collect, organize and curate information on the Internet. Recently, a number of question and answer (Q&A) sites have successfully built large growing knowledge repositories, each driven by a wide range of questions and answers from its users community. While sites like Yahoo Answers have stalled and begun to shrink, one site still going strong is Quora, a rapidly growing service that augments a regular Q&A system with social links between users. Despite its success, however, little is known about what drives Quora's growth, and how it continues to connect visitors and experts to the right questions as it grows. In this paper, we present results of a detailed analysis of Quora using measurements. We shed light on the impact of three different connection networks (or graphs) inside Quora, a graph connecting topics to users, a social graph connecting users, and a graph connecting related questions. Our results show that heterogeneity in the user and question graphs are significant contributors to the quality of Quora's knowledge base. One drives the attention and activity of users, and the other directs them to a small set of popular and interesting questions.",
"Community Question Answering has emerged as a popular and effective paradigm for a wide range of information needs. For example, to find out an obscure piece of trivia, it is now possible and even very effective to post a question on a popular community QA site such as Yahoo! Answers, and to rely on other users to provide answers, often within minutes. The importance of such community QA sites is magnified as they create archives of millions of questions and hundreds of millions of answers, many of which are invaluable for the information needs of other searchers. However, to make this immense body of knowledge accessible, effective answer retrieval is required. In particular, as any user can contribute an answer to a question, the majority of the content reflects personal, often unsubstantiated opinions. A ranking that combines both relevance and quality is required to make such archives usable for factual information retrieval. This task is challenging, as the structure and the contents of community QA archives differ significantly from the web setting. To address this problem we present a general ranking framework for factual information retrieval from social media. Results of a large scale evaluation demonstrate that our method is highly effective at retrieving well-formed, factual answers to questions, as evaluated on a standard factoid QA benchmark. We also show that our learning framework can be tuned with the minimum of manual labeling. Finally, we provide result analysis to gain deeper understanding of which features are significant for social media search and retrieval. Our system can be used as a crucial building block for combining results from a variety of social media content with general web search results, and to better integrate social media content for effective information access.",
"Community-based Question Answering sites, such as Yahoo! Answers or Baidu Zhidao, allow users to get answers to complex, detailed and personal questions from other users. However, since answering a question depends on the ability and willingness of users to address the asker's needs, a significant fraction of the questions remain unanswered. We measured that in Yahoo! Answers, this fraction represents 15% of all incoming English questions. At the same time, we discovered that around 25% of questions in certain categories are recurrent, at least at the question-title level, over a period of one year. We attempt to reduce the rate of unanswered questions in Yahoo! Answers by reusing the large repository of past resolved questions, openly available on the site. More specifically, we estimate the probability whether certain new questions can be satisfactorily answered by a best answer from the past, using a statistical model specifically trained for this task. We leverage concepts and methods from query-performance prediction and natural language processing in order to extract a wide range of features for our model. The key challenge here is to achieve a level of quality similar to the one provided by the best human answerers. We evaluated our algorithm on offline data extracted from Yahoo! Answers, but more interestingly, also on online data by using three \"live\" answering robots that automatically provide past answers to new questions when a certain degree of confidence is reached. We report the success rate of these robots in three active Yahoo! Answers categories in terms of both accuracy, coverage and askers' satisfaction. This work presents a first attempt, to the best of our knowledge, of automatic question answering to questions of social nature, by reusing past answers of high quality.",
"Community-based question answering platforms can be rich sources of information on a variety of specialized topics, from finance to cooking. The usefulness of such platforms depends heavily on user contributions (questions and answers), but also on respecting the community rules. As a crowd-sourced service, such platforms rely on their users for monitoring and flagging content that violates community rules. Common wisdom is to eliminate the users who receive many flags. Our analysis of a year of traces from a mature Q&A site shows that the number of flags does not tell the full story: on one hand, users with many flags may still contribute positively to the community. On the other hand, users who never get flagged are found to violate community rules and get their accounts suspended. This analysis, however, also shows that abusive users are betrayed by their network properties: we find strong evidence of homophilous behavior and use this finding to detect abusive users who go under the community radar. Based on our empirical observations, we build a classifier that is able to detect abusive users with an accuracy as high as 83%.",
"User-Interactive Question Answering (QA) communities such as Yahoo! Answers are growing in popularity. However, as these QA sites always have thousands of new questions posted daily, it is difficult for users to find the questions that are of interest to them. Consequently, this may delay the answering of the new questions. This gives rise to question recommendation techniques that help users locate interesting questions. In this paper, we adopt the Probabilistic Latent Semantic Analysis (PLSA) model for question recommendation and propose a novel metric to evaluate the performance of our approach. The experimental results show our recommendation approach is effective."
]
} |
1508.06184 | 2949227810 | Community-based question answering (CQA) platforms are crowd-sourced services for sharing user expertise on various topics, from mechanical repairs to parenting. While they naturally build-in an online social network infrastructure, they carry a very different purpose from Facebook-like social networks, where users "hang-out" with their friends and tend to share more personal information. It is unclear, thus, how the privacy concerns and their correlation with user behavior in an online social network translate into a CQA platform. This study analyzes one year of recorded traces from a mature CQA platform to understand the association between users' privacy concerns as manifested by their account settings and their activity in the platform. The results show that privacy preference is correlated with behavior in the community in terms of engagement, retention, accomplishments and deviance from the norm. We find privacy-concerned users have higher qualitative and quantitative contributions, show higher retention, report more abuses, have higher perception on answer quality and have larger social circles. However, at the same time, these users also exhibit more deviant behavior than the users with public profiles. | A number of studies @cite_19 @cite_38 @cite_13 on social networks like Facebook have shown the correlation between users' self-reported privacy concerns and their self-reported behavior. For example, @cite_2 showed that users who express concerns on Facebook privacy controls and find it difficult to comprehend sharing practices also report less engagement such as visiting, commenting, and liking content. At the same time, users who report more control and comprehension of privacy settings and their consequences are more engaged with the platform. Similarly, the frequency of visits, type of use, and general Internet skills are shown to be related to the personalization of the default privacy settings @cite_19 . 
Acquisti and Gross' @cite_3 survey on Facebook finds that a user's privacy concerns are only a weak predictor of his joining the network: that is, despite expressing privacy concerns, users join the network and reveal great amounts of personal information. @cite_39 used surveys and interviews on Facebook users to show that Internet privacy concerns and information revelation are negatively correlated. Tufekci's study @cite_27 on a small sample (704) of college students shows that students on Facebook and Myspace manage privacy concerns by adjusting profile visibility but not by restricting the profile information. | {
"cite_N": [
"@cite_38",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_13"
],
"mid": [
"",
"2150045067",
"2003020611",
"2132577882",
"2103088651",
"2079089193",
""
],
"abstract": [
"",
"Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network's members and non-members; we analyze the impact of privacy concerns on members' behavior; we compare members' stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual's privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members' misconceptions about the online community's actual size and composition, and about the visibility of members' profiles.",
"Despite concerns raised about the disclosure of personal information on social network sites, research has demonstrated that users continue to disclose personal information. The present study employs surveys and interviews to examine the factors that influence university students to disclose personal information on Facebook. Moreover, we study the strategies students have developed to protect themselves against privacy threats. The results show that personal network size was positively associated with information revelation, no association was found between concern about unwanted audiences and information revelation and finally, students' Internet privacy concerns and information revelation were negatively associated. The privacy protection strategies employed most often were the exclusion of personal information, the use of private email messages, and altering the default privacy settings. Based on our findings, we propose a model of information revelation and draw conclusions for theories of identity expression.",
"With over 500 million users, the decisions that Facebook makes about its privacy settings have the potential to influence many people. While its changes in this domain have often prompted privacy advocates and news media to critique the company, Facebook has continued to attract more users to its service. This raises a question about whether or not Facebook's changes in privacy approaches matter and, if so, to whom. This paper examines the attitudes and practices of a cohort of 18- and 19-year-olds surveyed in 2009 and again in 2010 about Facebook's privacy settings. Our results challenge widespread assumptions that youth do not care about and are not engaged with navigating privacy. We find that, while not universal, modifications to privacy settings have increased during a year in which Facebook's approach to privacy was hotly contested. We also find that both frequency and type of Facebook use as well as Internet skill are correlated with making modifications to privacy settings. In contrast, we observe few gender differences in how young adults approach their Facebook privacy settings, which is notable given that gender differences exist in so many other domains online. We discuss the possible reasons for our findings and their implications.",
"The prevailing paradigm in Internet privacy literature, treating privacy within a context merely of rights and violations, is inadequate for studying the Internet as a social realm. Following Goffman on self-presentation and Altman's theorizing of privacy as an optimization between competing pressures for disclosure and withdrawal, the author investigates the mechanisms used by a sample (n = 704) of college students, the vast majority users of Facebook and Myspace, to negotiate boundaries between public and private. Findings show little to no relationship between online privacy concerns and information disclosure on online social network sites. Students manage unwanted audience concerns by adjusting profile visibility and using nicknames but not by restricting the information within the profile. Mechanisms analogous to boundary regulation in physical space, such as walls, locks, and doors, are favored; little adaptation is made to the Internet's key features of persistence, searchability, and cross-indexa...",
"We describe survey results from a representative sample of 1,075 U. S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices or general Facebook privacy concern, also report consistently less time spent as well as less (self-reported) posting, commenting and \"Like\"ing of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have significant association with engagement. We manually categorize the privacy concerns finding that many are nonspecific and not associated with negative personal experiences. Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.",
""
]
} |
1508.06184 | 2949227810 | Community-based question answering (CQA) platforms are crowd-sourced services for sharing user expertise on various topics, from mechanical repairs to parenting. While they naturally build-in an online social network infrastructure, they carry a very different purpose from Facebook-like social networks, where users "hang-out" with their friends and tend to share more personal information. It is unclear, thus, how the privacy concerns and their correlation with user behavior in an online social network translate into a CQA platform. This study analyzes one year of recorded traces from a mature CQA platform to understand the association between users' privacy concerns as manifested by their account settings and their activity in the platform. The results show that privacy preference is correlated with behavior in the community in terms of engagement, retention, accomplishments and deviance from the norm. We find privacy-concerned users have higher qualitative and quantitative contributions, show higher retention, report more abuses, have higher perception on answer quality and have larger social circles. However, at the same time, these users also exhibit more deviant behavior than the users with public profiles. | 's @cite_34 demographic study on privacy concerns among American, Chinese, and Indian social network users shows that American respondents are the most privacy concerned, followed by Chinese. However, there has been no research on privacy concerns and user behavior in CQA platforms. Our previous work @cite_0 on cultures in Yahoo Answers used Geert Hofstede's cultural dimensions @cite_41 , such as individualism index, and showed that users from higher individualism index countries exhibit a higher level of concern about their privacy compared to the users from collectivistic countries.
In this study, we focus on understanding how the users' behavior, characterized by broad engagement, accomplishments and deviance metrics, relates to their privacy concerns. | {
"cite_N": [
"@cite_0",
"@cite_41",
"@cite_34"
],
"mid": [
"2949913501",
"2144446652",
""
],
"abstract": [
"CQA services are collaborative platforms where users ask and answer questions. We investigate the influence of national culture on people's online questioning and answering behavior. For this, we analyzed a sample of 200 thousand users in Yahoo Answers from 67 countries. We measure empirically a set of cultural metrics defined in Geert Hofstede's cultural dimensions and Robert Levine's Pace of Life and show that behavioral cultural differences exist in community question answering platforms. We find that national cultures differ in Yahoo Answers along a number of dimensions such as temporal predictability of activities, contribution-related behavioral patterns, privacy concerns, and power inequality.",
"This book reveals: * The unexamined rules behind the thoughts and emotions of people of different cultures * Ways in which cultures differ in the areas of collectivism individualism, assertiveness modesty, tolerance for ambiguity, and deferment of gratification * How organizational cultures differ from national cultures, and how they can be managed",
""
]
} |
1508.06367 | 1903104520 | Network processing elements in virtual machines, also known as Network Function Virtualization (NFV), often face CPU bottlenecks at the virtualization interface. Even highly optimized paravirtual device interfaces fall short of the throughput requirements of modern devices. Passthrough devices, together with SR-IOV support for multiple device virtual functions (VF) and IOMMU support, mitigate this problem somewhat, by allowing a VM to directly control a device partition bypassing the virtualization stack. However, device passthrough requires high-end (expensive and power-hungry) hardware, places scalability limits on consolidation ratios, and does not support efficient switching between multiple VMs on the same host. We present a paravirtual interface that securely exposes an I/O device directly to the guest OS running inside the VM, and yet allows that device to be securely shared among multiple VMs and the host. Compared to the best-known paravirtualization interfaces, our paravirtual interface supports up to 2x higher throughput, and is closer in performance to device passthrough. Unlike device passthrough however, we do not require SR-IOV or IOMMU support, and allow fine-grained dynamic resource allocation, significantly higher consolidation ratios, and seamless VM migration. Our security mechanism is based on a novel approach called dynamic binary opcode subtraction. | * Software Techniques for Security The closest competing technique to DBOS is perhaps dynamic binary translation (DBT). Unlike DBOS, DBT incurs large overheads for indirect jumps and interrupts/exceptions @cite_28 @cite_8 . BTKernel @cite_2 optimizes DBT for interrupts/exceptions and indirect branches; however, BTKernel cannot provide the security guarantees required for our application. For example, BTKernel's approach of leaving code-cache addresses on return stacks, and jumping directly to them, can be used to launch a security attack in our case.
DBOS is a low-overhead mechanism for ensuring security, and usually results in much lower overheads than DBT for similar security guarantees. Conversely, DBOS is not as powerful as DBT, and cannot be used for several other DBT applications. | {
"cite_N": [
"@cite_28",
"@cite_2",
"@cite_8"
],
"mid": [
"2117648703",
"2042043230",
""
],
"abstract": [
"Until recently, the x86 architecture has not permitted classical trap-and-emulate virtualization. Virtual Machine Monitors for x86, such as VMware ® Workstation and Virtual PC, have instead used binary translation of the guest kernel code. However, both Intel and AMD have now introduced architectural extensions to support classical virtualization.We compare an existing software VMM with a new VMM designed for the emerging hardware support. Surprisingly, the hardware VMM often suffers lower performance than the pure software VMM. To determine why, we study architecture-level events such as page table updates, context switches and I O, and find their costs vastly different among native, software VMM and hardware VMM execution.We find that the hardware support fails to provide an unambiguous performance advantage for two primary reasons: first, it offers no support for MMU virtualization; second, it fails to co-exist with existing software techniques for MMU virtualization. We look ahead to emerging techniques for addressing this MMU virtualization problem in the context of hardware-assisted virtualization.",
"Dynamic binary translation (DBT) is a powerful technique with several important applications. System-level binary translators have been used for implementing a Virtual Machine Monitor [2] and for instrumentation in the OS kernel [10]. In current designs, the performance overhead of binary translation on kernel-intensive workloads is high. e.g., over 10x slowdowns were reported on the syscall nanobenchmark in [2], 2-5x slowdowns were reported on lmbench microbenchmarks in [10]. These overheads are primarily due to the extra work required to correctly handle kernel mechanisms like interrupts, exceptions, and physical CPU concurrency. We present a kernel-level binary translation mechanism which exhibits near-native performance even on applications with large kernel activity. Our translator relaxes transparency requirements and aggressively takes advantage of kernel invariants to eliminate sources of slowdown. We have implemented our translator as a loadable module in unmodified Linux, and present performance and scalability experiments on multiprocessor hardware. Although our implementation is Linux specific, our mechanisms are quite general; we only take advantage of typical kernel design patterns, not Linux-specific features. For example, our translator performs 3x faster than previous kernel-level DBT implementations while running the Apache web server.",
""
]
} |
1508.06173 | 2280210143 | The blocking problem naturally arises in transportation systems as multiple vehicles with different itineraries share available resources. In this paper, we investigate the impact of the blocking problem to the waiting time at the intersections of transportation systems. We assume that different vehicles, depending on their Internet connection capabilities, may communicate their intentions (e.g., whether they will turn left or right or continue straight) to intersections (specifically to devices attached to traffic lights). We consider that information collected by these devices are transmitted to and processed in a cloud-based traffic control system. Thus, a cloud-based system, based on the intention information, can calculate average waiting times at intersections. We consider this problem as a queuing model, and we characterize average waiting times by taking into account (i) blocking probability, and (ii) vehicles' ability to communicate their intentions. Then, by using average waiting times at intersection, we develop a shortest delay algorithm that calculates the routes with shortest delays between two points in a transportation network. Our simulation results confirm our analysis, and demonstrate that our shortest delay algorithm significantly improves over baselines that are unaware of the blocking problem. | Analyzing waiting times and modeling transportation systems using queueing theory have a long history (more than 50 years) @cite_0 . @cite_23 , @cite_15 , @cite_12 considered one-lane queues and calculated the expected queue length and arrivals using probability generation functions. These models focus on fixed-cycle traffic signals, and they calculate the steady-state delays and queue lengths under the assumption that the arriving process does not change over time @cite_1 . 
Time-dependent arrivals have also been considered in @cite_2 , @cite_16 , @cite_9 , @cite_25 , @cite_18 , @cite_14 , @cite_7 , @cite_21 , @cite_5 , @cite_4 . Different modeling strategies are also studied, such as the queuing network model @cite_8 , cell transmission model @cite_17 , store-and-forward @cite_24 , and Petri nets @cite_26 . As compared to this line of work, our work considers (i) that some vehicles can communicate their intentions, which affects the delay analysis, and (ii) the blocking problem. | {
"cite_N": [
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_17",
"@cite_26",
"@cite_7",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_24"
],
"mid": [
"2061823767",
"1496467067",
"",
"",
"2144488633",
"",
"",
"",
"",
"",
"",
"",
"2057054498",
"",
"",
"",
"",
"",
"2131398657"
],
"abstract": [
"Most existing discussions regarding the time-dependent distribution of queue length was undertaken in the context of isolated intersections. However, computing queue length distributions for a signalized network with generic topology is very challenging because such process involves convolution and nonlinear transformation of random variables, which is analytically intractable. To address such issue, this study proposes a stochastic queue model considering the strong interdependence relations between adjacent intersections using the probability generating function as a mathematical tool. Various traffic flow phenomena, including queue formation and dissipation, platoon dispersion, flow merging and diverging, queue spillover, and downstream blockage, are formulated as stochastic events, and their distributions are iteratively computed through a stochastic network loading procedure. Both theoretical derivation and numerical investigations are presented to demonstrate the effectiveness of the proposed approach in analyzing the delay and queues of signalized networks under different congestion levels.",
"Optimization Policies for Adaptive Control (OPAC) is a computational strategy for real-time demand-responsive traffic signal control. It has the following features: (a) It provides performance results that approach the theoretical optimum, (b) it requires on-line data that can be readily obtained from upstream link detectors, (c) it is suitable for implementation on existing microprocessors, and (d) it forms a building block for demand-responsive decentralized control in a network. Studies undertaken in the development of this strategy and the testing of its performance via the NETSIM simulation model are described.",
"",
"",
"The paper presents the design approach, the objectives, the development, the advantages, and some application results of the traffic-responsive urban control (TUC) strategy. Based on a store-and-forward modelling of the urban network traffic and using the linear-quadratic regulator theory, the design of TUC leads to a multivariable regulator for traffic-responsive co-ordinated network-wide signal control that is particularly suitable also for saturated traffic conditions. Simulation investigations demonstrate the efficiency of the proposed approach. Results of TUC's first field implementation and evaluation are also presented. Finally, summarising conclusions are drawn and future work is outlined.",
"",
"",
"",
"",
"",
"",
"",
"This paper proposes a delay model for signalized intersections that is suitable for variable demand conditions. The model is applicable to the entire range of expected operations, including highly oversaturated conditions with initial queues at the start of the analysis period. The proposed model clarifies several issues related to the determination of the peak flow period, as well as the periods immediately preceding and following the peak. Separate formulas are provided for estimating delay in each of the designated flow periods as well as in the total flow period. Formulas are also provided to estimate the duration of the oversaturation period where applicable. The strength of the model lies in the use of simple rules for determining flow rates within and outside the peak, using the peak flow factor, a generalization of the well-known peak hour factor parameter. Simple rules are also provided for the identification of the location and duration of the peak flow period from observations of the demand profile. Such information is considered vital from an intersection design and evaluation viewpoint. Application of the model to a variety of operating conditions indicates that the estimated delay for vehicles arriving in the peak flow period is an acceptable predictor of the average delay incurred during the total flow period, even when oversaturation persists beyond the total flow period. On the other hand, the use of the average degree of saturation with no consideration of peaking can lead to significant underestimation of delay, particularly when operating at or near capacity conditions. These findings were confirmed by comparing the model results with other models found in the literature. 
The significant contribution of this work is not simply in the development of improved delay estimates, but, more important, in providing an integrated framework for an estimation process that incorporates (a) the peaking characteristics in the demand flow pattern, (b) the designation of flow-specific periods within the total flow period in accordance with the observed peaking and (c) the estimation of performance parameters associated within each flow period and in combination with other periods. A revised delay formula for the U.S. Highway Capacity Manual (HCM) is proposed. The revised formula has no constraints on the peak flow period degree of saturation, unlike the current HCM formula. It is also recommended that a simple formula for estimating the duration of oversaturation be used in conjunction with the revised delay formula.",
"",
"",
"",
"",
"",
"The problem of designing network-wide traffic signal control strategies for large-scale congested urban road networks is considered. One known and two novel methodologies, all based on the store-and-forward modeling paradigm, are presented and compared. The known methodology is a linear multivariable feedback regulator derived through the formulation of a linear-quadratic optimal control problem. An alternative, novel methodology consists of an open-loop constrained quadratic optimal control problem, whose numerical solution is achieved via quadratic programming. Yet a different formulation leads to an open-loop constrained nonlinear optimal control problem, whose numerical solution is achieved by use of a feasible-direction algorithm. A preliminary simulation-based investigation of the signal control problem for a large-scale urban road network using these methodologies demonstrates the comparative efficiency and real-time feasibility of the developed signal control methods."
]
} |
1508.06395 | 1918899819 | Two parties wish to carry out certain distributed computational tasks, and they are given access to a source of correlated random bits. It allows the parties to act in a correlated manner, which can be quite useful. But what happens if the shared randomness is not perfect? In this work, we initiate the study of the power of different sources of shared randomness in communication complexity. This is done in the setting of simultaneous message passing (SMP) model of communication complexity, which is one of the most suitable models for studying the resource of shared randomness. Toward characterising the power of various sources of shared randomness, we introduce a measure for the quality of a source - we call it collision complexity. Our results show that the collision complexity tightly characterises the power of a (shared) randomness resource in the SMP model. Of independent interest is our demonstration that even the weakest sources of shared randomness can in some cases increase the power of SMP substantially: the equality function can be solved very efficiently with virtually any nontrivial shared randomness. | In this work we generally view poly-logarithmic multiplicative factors as insignificant in the context of communication complexity, and therefore for our needs SMP model is the most suitable model to consider. On the other hand, in a parallel work, @cite_9 study the effects of shared randomness in the one- and two-way settings in the sub-logarithmic regime'' and obtain rather interesting and unexpected results. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2962713836"
],
"abstract": [
"The communication complexity of many fundamental problems reduces greatly when the communicating parties share randomness that is independent of the inputs to the communication task. Natural communication processes (say between humans) however often involve large amounts of shared correlations among the communicating players, but rarely allow for perfect sharing of randomness. Can the communication complexity benefit from shared correlations as well as it does from shared randomness? This question was considered mainly in the context of simultaneous communication by [1]. In this work we study this problem in the standard interactive setting and give some general results. In particular, we show that every problem with communication complexity of k bits with perfectly shared randomness has a protocol using imperfectly shared randomness with complexity 2Ω(k) bits. We also show that this is best possible by exhibiting a promise problem with complexity k bits with perfectly shared randomness which requires 2Ω(k) bits when the randomness is imperfectly shared. Along the way we also highlight some other basic problems such as compression, and agreement distillation, where shared randomness plays a central role and analyze the complexity of these problems in the imperfectly shared randomness model. The technical highlight of this work is the lower bound that goes into the result showing the tightness of our general connection. This result builds on the intuition that communication with imperfectly shared randomness needs to be less sensitive to its random inputs than communication with perfectly shared randomness. The formal proof invokes results about the small-set expansion of the noisy hypercube and an invariance principle to convert this intuition to a proof, thus giving a new application domain for these fundamental results."
]
} |
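The row above notes that equality becomes very cheap in the SMP model once any nontrivial shared randomness is available. As illustrative background only (this is the textbook randomized equality test, not a protocol from the cited papers, and all names are hypothetical), both players hash their input against the same shared random string and send a single bit to the referee:

```python
import random

def message(x_bits, r_bits):
    # One-bit message: the inner product <x, r> over GF(2).
    return sum(xi & ri for xi, ri in zip(x_bits, r_bits)) % 2

def equality_smp(x_bits, y_bits, rounds=32, rng=random):
    # Alice and Bob read the SAME shared random string r each round and
    # each send one bit to the referee.  If x == y the bits always agree;
    # if x != y they disagree with probability 1/2 per round, so the
    # referee errs with probability at most 2**-rounds.
    for _ in range(rounds):
        r = [rng.randrange(2) for _ in x_bits]
        if message(x_bits, r) != message(y_bits, r):
            return False  # a disagreement proves x != y
    return True  # messages agreed in every round: equal w.h.p.
```

The point of the hierarchy results above is that even much weaker correlations than a perfectly shared `r` suffice for this style of protocol.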
1508.06395 | 1918899819 | Two parties wish to carry out certain distributed computational tasks, and they are given access to a source of correlated random bits. It allows the parties to act in a correlated manner, which can be quite useful. But what happens if the shared randomness is not perfect? In this work, we initiate the study of the power of different sources of shared randomness in communication complexity. This is done in the setting of simultaneous message passing (SMP) model of communication complexity, which is one of the most suitable models for studying the resource of shared randomness. Toward characterising the power of various sources of shared randomness, we introduce a measure for the quality of a source - we call it collision complexity. Our results show that the collision complexity tightly characterises the power of a (shared) randomness resource in the SMP model. Of independent interest is our demonstration that even the weakest sources of shared randomness can in some cases increase the power of SMP substantially: the equality function can be solved very efficiently with virtually any nontrivial shared randomness. | The direct precursor to this work is a work of Gavinsky, Ito and Wang @cite_20 , which, to our knowledge, was the first that studied different forms of "shared randomness" in communication complexity. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2953122989"
],
"abstract": [
"We study shared randomness in the context of multi-party number-in-hand communication protocols in the simultaneous message passing model. We show that with three or more players, shared randomness exhibits new interesting properties that have no direct analogues in the two-party case. First, we demonstrate a hierarchy of modes of shared randomness, with the usual shared randomness where all parties access the same random string as the strongest form in the hierarchy. We show exponential separations between its levels, and some of our bounds may be of independent interest. For example, we show that the equality function can be solved by a protocol of constant length using the weakest form of shared randomness, which we call \"XOR-shared randomness.\" Second, we show that quantum communication cannot replace shared randomness in the k-party case, where k >= 3 is any constant. We demonstrate a promise function GP_k that can be computed by a classical protocol of constant length when (the strongest form of) shared randomness is available, but any quantum protocol without shared randomness must send n^Omega(1) qubits to compute it. Moreover, the quantum complexity of GP_k remains n^Omega(1) even if the \"second strongest\" mode of shared randomness is available. While a somewhat similar separation was already known in the two-party case, in the multi-party case our statement is qualitatively stronger: * In the two-party case, only a relational communication problem with similar properties is known. * In the two-party case, the gap between the two complexities of a problem can be at most exponential, as it is known that 2^(O(c)) log n qubits can always replace shared randomness in any c-bit protocol. Our bounds imply that with quantum communication alone, in general, it is not possible to simulate efficiently even a three-bit three-party classical protocol that uses shared randomness."
]
} |
1508.06336 | 1858789266 | We consider the problem of computing the Walsh-Hadamard Transform (WHT) of some @math -length input vector in the presence of noise, where the @math -point Walsh spectrum is @math -sparse with @math scaling sub-linearly in the input dimension @math for some @math . Over the past decade, there has been a resurgence in research related to the computation of Discrete Fourier Transform (DFT) for some length- @math input signal that has a @math -sparse Fourier spectrum. In particular, through a sparse-graph code design, our earlier work on the Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm computes the @math -sparse DFT in time @math by taking @math noiseless samples. Inspired by the coding-theoretic design framework, the Sparse Fast Hadamard Transform (SparseFHT) algorithm was proposed, which elegantly computes the @math -sparse WHT in the absence of noise using @math samples in time @math . However, the SparseFHT algorithm explicitly exploits the noiseless nature of the problem, and is not equipped to deal with scenarios where the observations are corrupted by noise. Therefore, a question of critical interest is whether this coding-theoretic framework can be made robust to noise. Further, if the answer is yes, what is the extra price that needs to be paid for being robust to noise? In this paper, we show, quite interestingly, that there is no extra price that needs to be paid for being robust to noise other than a constant factor. In other words, we can maintain the same sample complexity @math and the computational complexity @math as those of the noiseless case, using our SParse Robust Iterative Graph-based Hadamard Transform (SPRIGHT) algorithm. | Due to the similarities between the DFT and the WHT, we give a brief account of previous work on reducing the sample and computational complexity of obtaining a @math -sparse @math -point DFT.
The most related research thread in the literature is the computation of sparse DFT using theoretical computer science techniques such as sketching and hashing (see @cite_13 @cite_17 @cite_26 @cite_16 @cite_20 ). Most of these algorithms aim at minimizing the approximation error of the DFT coefficients using an @math -norm metric instead of exact support recovery (i.e., @math -norm). | {
"cite_N": [
"@cite_26",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2031425559",
"2113204179",
"2012365979",
"2146509513",
"2589910859"
],
"abstract": [
"We study the problem of estimating the best k term Fourier representation for a given frequency sparse signal (i.e., vector) A of length N≫k. More explicitly, we investigate how to deterministically identify k of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial(k,log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem ( in ACM STOC, pp. 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier Transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with similar runtime sampling bounds to the current best randomized Fourier method ( in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in (Iwen in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA’08), 2008).",
"We present a randomized algorithm that interpolates a sparse polynomial in polynomial time in the bit complexity model. The algorithm can be also applied to approximate polynomials that can be approximated by sparse polynomials (the approximation is in the @math norm).",
"We give an algorithm for finding a Fourier representation R of B terms for a given discrete signal A of length N, such that @math is within the factor (1+e) of best possible @math . Our algorithm can access A by reading its values on a sample set T ⊆[0,N), chosen randomly from a (non-product) distribution of our choice, independent of A. That is, we sample non-adaptively. The total time cost of the algorithm is polynomial in B log(N)log(M)e (where M is the ratio of largest to smallest numerical quantity encountered), which implies a similar bound for the number of samples.",
"This article describes a computational method, called the Fourier sampling algorithm. The algorithm takes a small number of (correlated) random samples from a signal and processes them efficiently to produce an approximation of the DFT of the signal. The algorithm offers provable guarantees on the number of samples, the running time, and the amount of storage. As we will see, these requirements are exponentially better than the FFT for some cases of interest.",
""
]
} |
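The subsampling-and-aliasing idea behind the aliasing-based binning of sparse DFT coefficients mentioned in the rows above can be seen in a few lines: the DFT of a 2-subsampled signal is the average of pairs of original DFT bins, so subsampling "hashes" spectral coefficients into buckets. A self-contained sketch for intuition only (naive O(n^2) DFT for clarity; function names are hypothetical, and this is not the FFAST algorithm itself):

```python
import cmath

def dft(x):
    # Naive O(n^2) discrete Fourier transform, for illustration only.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * k / n) for t in range(n))
            for k in range(n)]

def subsample(x, factor):
    # Keep every `factor`-th time-domain sample.
    return x[::factor]

# Aliasing identity: with y = x[::2] and N = len(x),
#   DFT(y)[k] == (DFT(x)[k] + DFT(x)[k + N//2]) / 2,
# i.e., each subsampled bin is a "bucket" holding the (scaled) sum of the
# original coefficients that alias into it.  When the spectrum is sparse,
# most buckets receive at most one non-zero coefficient.
```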
1508.06336 | 1858789266 | We consider the problem of computing the Walsh-Hadamard Transform (WHT) of some @math -length input vector in the presence of noise, where the @math -point Walsh spectrum is @math -sparse with @math scaling sub-linearly in the input dimension @math for some @math . Over the past decade, there has been a resurgence in research related to the computation of Discrete Fourier Transform (DFT) for some length- @math input signal that has a @math -sparse Fourier spectrum. In particular, through a sparse-graph code design, our earlier work on the Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm computes the @math -sparse DFT in time @math by taking @math noiseless samples. Inspired by the coding-theoretic design framework, the Sparse Fast Hadamard Transform (SparseFHT) algorithm was proposed, which elegantly computes the @math -sparse WHT in the absence of noise using @math samples in time @math . However, the SparseFHT algorithm explicitly exploits the noiseless nature of the problem, and is not equipped to deal with scenarios where the observations are corrupted by noise. Therefore, a question of critical interest is whether this coding-theoretic framework can be made robust to noise. Further, if the answer is yes, what is the extra price that needs to be paid for being robust to noise? In this paper, we show, quite interestingly, that there is no extra price that needs to be paid for being robust to noise other than a constant factor. In other words, we can maintain the same sample complexity @math and the computational complexity @math as those of the noiseless case, using our SParse Robust Iterative Graph-based Hadamard Transform (SPRIGHT) algorithm. | Among these works, the most recent progress in this direction is the sFFT (Sparse FFT) algorithm developed in the series of papers @cite_2 @cite_0 @cite_19 .
Most of these algorithms are based on first isolating (i.e., hashing) the non-zero DFT coefficients into different bins, using specific filters or windows that have "good" (concentrated) support in both time and frequency. The non-zero DFT coefficients are then recovered iteratively, one at a time. The filters or windows used for the binning operation are typically of length @math . As a result, the sample complexity is typically @math or more, with potentially large big-Oh constants as demonstrated in @cite_1 . Then, @cite_19 further improved the @math -D DFT algorithm for the special case of @math , which reduces the sample complexity to @math and the computational complexity to @math , albeit with a constant failure probability that does not vanish as the signal dimension @math grows. On this front, the deterministic algorithm in @cite_26 is shown to guarantee zero errors but with complexities of @math . More recently, @cite_4 develops a deterministic algorithm for computing a sparse WHT in time @math with an arbitrary constant @math . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_2"
],
"mid": [
"2031425559",
"2950308398",
"1999763281",
"2107625832",
"2112094394",
"1501755821"
],
"abstract": [
"We study the problem of estimating the best k term Fourier representation for a given frequency sparse signal (i.e., vector) A of length N≫k. More explicitly, we investigate how to deterministically identify k of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial(k,log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem ( in ACM STOC, pp. 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier Transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with similar runtime sampling bounds to the current best randomized Fourier method ( in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in (Iwen in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA’08), 2008).",
"For every fixed constant @math , we design an algorithm for computing the @math -sparse Walsh-Hadamard transform of an @math -dimensional vector @math in time @math . Specifically, the algorithm is given query access to @math and computes a @math -sparse @math satisfying @math , for an absolute constant @math , where @math is the transform of @math and @math is its best @math -sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to @math (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive @math compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time @math (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to @math reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter @math ). Finally, by allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to @math .",
"A plug (20) has a base (22) in which a plurality of plug-side terminal fittings (50) are mounted and a lid portion (23) openably and closably mountable on the base (22). The base portion (22) includes a base-side shell (30) and a resin-made base-side housing (60) integrally formed with the base-side shell (30) by insert molding. Since the base-side housing (60) and the base-side shell (30) support each other while being held in close contact, a sufficient strength can be secured. Further, since it is not necessary to assemble the base-side housing (60) and the base-side shell (30), an assembling process can be simplified.",
"We consider the problem of computing the k-sparse approximation to the discrete Fourier transform of an n-dimensional signal. We show: An O(k log n)-time randomized algorithm for the case where the input signal has at most k non-zero Fourier coefficients, and An O(k log n log(n k))-time randomized algorithm for general input signals. Both algorithms achieve o(n log n) time, and thus improve over the Fast Fourier Transform, for any k=o(n). They are the first known algorithms that satisfy this property. Also, if one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly k-sparse case is optimal for any k = nΩ(1). We complement our algorithmic results by showing that any algorithm for computing the sparse Fourier transform of a general signal must use at least Ω(k log (n k) log log n) signal samples, even if it is allowed to perform adaptive sampling.",
"We present the first sample-optimal sublinear time algorithms for the sparse Discrete Fourier Transform over a two-dimensional √n × √n grid. Our algorithms are analyzed for the average case signals. For signals whose spectrum is exactly sparse, we present algorithms that use O(k) samples and run in O(k log k) time, where k is the expected sparsity of the signal. For signals whose spectrum is approximately sparse, we have an algorithm that uses O(k log n) samples and runs in O(k log2 n) time, for k = Θ(√n). All presented algorithms match the lower bounds on sample complexity for their respective signal models.",
"We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio image video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating \"large\" coefficients, subtracting them and recursing on the reminder, it identifies and estimates the k largest coefficients in \"one shot\", in a manner akin to sketching streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice."
]
} |
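For reference alongside the sparse-WHT results in these rows: the dense Walsh-Hadamard transform itself costs O(n log n) via butterfly recursion, and the sparse algorithms discussed here aim to beat that when only k coefficients are non-zero. A minimal unnormalised FWHT sketch (a hypothetical helper for intuition, not the SparseFHT or SPRIGHT algorithm):

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalised), via O(n log n)
    butterfly stages; the input length must be a power of two."""
    a = list(x)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # Butterfly: replace the pair (u, v) by (u + v, u - v).
                u, v = a[j], a[j + h]
                a[j], a[j + h] = u + v, u - v
        h *= 2
    return a
```

Since the Hadamard matrix H satisfies H·H = n·I, applying `fwht` twice returns the input scaled by n, which makes the routine easy to sanity-check.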
1508.06110 | 1906205923 | Large-scale collection of contextual information is often essential in order to gather statistics, train machine learning models, and extract knowledge from data. The ability to do so in a privacy-preserving way -- i.e., without collecting fine-grained user data -- enables a number of additional computational scenarios that would be hard, or outright impossible, to realize without strong privacy guarantees. In this paper, we present the design and implementation of practical techniques for privately gathering statistics from large data streams. We build on efficient cryptographic protocols for private aggregation and on data structures for succinct data representation, namely, Count-Min Sketch and Count Sketch. These allow us to reduce the communication and computation complexity incurred by each data source (e.g., end-users) from linear to logarithmic in the size of their input, while introducing a parametrized upper-bounded error that does not compromise the quality of the statistics. We then show how to use our techniques, efficiently, to instantiate real-world privacy-friendly systems, supporting recommendations for media streaming services, prediction of user locations, and computation of median statistics for Tor hidden services. | @cite_65 propose a new homomorphic encryption scheme to allow intermediate wireless sensor nodes to aggregate encrypted data gathered from other nodes. @cite_62 combine private aggregation with differential privacy, supporting the aggregation of encrypted perturbed readings reported by the meters. Individual amounts of random noise cancel each other out during aggregation, except for a specific amount that guarantees computational differential privacy. Their protocol is also designed so that encryption keys sum to zero but, unlike ours, it requires solving a discrete logarithm and the presence of a trusted dealer.
@cite_14 propose a privacy-friendly aggregation scheme with robustness against missing user inputs, by including additional authorities that facilitate the protocol but do not learn any secrets or inputs. However, at least one of the authorities has to be honest, i.e., if all collude, the protocol does not provide any privacy guarantee. @cite_12 also provides fault tolerance by extending @cite_62 's protocol, albeit with a poly-logarithmic penalty. Additional, more loosely related, private aggregation schemes include @cite_65 @cite_46 @cite_56 . | {
"cite_N": [
"@cite_14",
"@cite_62",
"@cite_65",
"@cite_56",
"@cite_46",
"@cite_12"
],
"mid": [
"",
"2146673169",
"2102832611",
"",
"1925603120",
"2232997092"
],
"abstract": [
"",
"A private stream aggregation (PSA) system contributes a user's data to a data aggregator without compromising the user's privacy. The system can begin by determining a private key for a local user in a set of users, wherein the sum of the private keys associated with the set of users and the data aggregator is equal to zero. The system also selects a set of data values associated with the local user. Then, the system encrypts individual data values in the set based in part on the private key to produce a set of encrypted data values, thereby allowing the data aggregator to decrypt an aggregate value across the set of users without decrypting individual data values associated with the set of users, and without interacting with the set of users while decrypting the aggregate value. The system also sends the set of encrypted data values to the data aggregator.",
"Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation. It thus becomes essential to the lifetime of a WSN to minimize the number of bits sent by each device. One well-known approach is to aggregate sensor data (e.g., by adding) along the path from sensors to the sink. Aggregation becomes especially challenging if end-to-end privacy between sensors and the sink is required. In this paper, we propose a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The new cipher only uses modular additions (with very small moduli) and is therefore very well suited for CPU-constrained devices. We show that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain.",
"",
"Smart metering of utility consumption is rapidly becoming reality for multitudes of people and households. It promises real-time measurement and adjustment of power demand which is expected to result in lower overall energy use and better load balancing. On the other hand, finely granular measurements reported by smart meters can lead to starkly increased exposure of sensitive information, including all kinds of personal attributes and activities. Reconciling smart metering's benefits with privacy concerns is a major challenge. In this paper we explore some simple and relatively efficient cryptographic privacy techniques that allow spatial (group-wide) aggregation of smart meter measurements. We also consider temporal aggregation of multiple measurements for a single smart meter. While our work is certainly not the first to tackle this topic, we believe that proposed techniques are appealing due to their simplicity, few assumptions and peer-based nature, i.e., no need for any on-line aggregators or trusted third parties.",
"We consider applications where an untrusted aggregator would like to collect privacy sensitive data from users, and compute aggregate statistics periodically. For example, imagine a smart grid operator who wishes to aggregate the total power consumption of a neighborhood every ten minutes; or a market researcher who wishes to track the fraction of population watching ESPN on an hourly basis."
]
} |
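The "encryption keys sum up to zero" idea recurring in the rows above can be sketched with plain additive masking: each user blinds its reading with a mask, the masks cancel in the sum, and the aggregator learns only the total. This toy version assumes a trusted dealer for the masks and omits the group-exponent encryption, discrete-log decryption, and differential-privacy noise of the cited schemes; all names are hypothetical:

```python
import random

def setup_keys(num_users, modulus, rng=random):
    # Trusted setup: hand each user a random mask such that the masks
    # sum to 0 mod `modulus`, so they cancel in the aggregate.
    keys = [rng.randrange(modulus) for _ in range(num_users - 1)]
    keys.append((-sum(keys)) % modulus)
    return keys

def encrypt(value, key, modulus):
    # Blind an individual reading; a single ciphertext reveals nothing
    # about `value` because `key` is uniformly random.
    return (value + key) % modulus

def aggregate(ciphertexts, modulus):
    # Summing the blinded reports cancels the masks and yields the sum
    # of the plaintext readings (mod the modulus).
    return sum(ciphertexts) % modulus
```

Correctness holds for any run because the masks sum to zero by construction; the modulus just needs to exceed the largest possible aggregate.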
1508.06110 | 1906205923 | Large-scale collection of contextual information is often essential in order to gather statistics, train machine learning models, and extract knowledge from data. The ability to do so in a privacy-preserving way -- i.e., without collecting fine-grained user data -- enables a number of additional computational scenarios that would be hard, or outright impossible, to realize without strong privacy guarantees. In this paper, we present the design and implementation of practical techniques for privately gathering statistics from large data streams. We build on efficient cryptographic protocols for private aggregation and on data structures for succinct data representation, namely, Count-Min Sketch and Count Sketch. These allow us to reduce the communication and computation complexity incurred by each data source (e.g., end-users) from linear to logarithmic in the size of their input, while introducing a parametrized upper-bounded error that does not compromise the quality of the statistics. We then show how to use our techniques, efficiently, to instantiate real-world privacy-friendly systems, supporting recommendations for media streaming services, prediction of user locations, and computation of median statistics for Tor hidden services. | McSherry and Mironov @cite_28 propose a privacy-preserving recommender system that relies on trusted computing, while Cissée and Albayrak @cite_9 use differential privacy to add privacy guarantees to a few algorithms presented during the Netflix Prize competition. Our private recommender system differs from theirs as we do not rely on trusted computing or differential privacy, but leverage a privacy-friendly aggregation cryptographic protocol and Count-Min Sketch. | {
"cite_N": [
"@cite_28",
"@cite_9"
],
"mid": [
"2022097286",
"2008261658"
],
"abstract": [
"We consider the problem of producing recommendations from collective user behavior while simultaneously providing guarantees of privacy for these users. Specifically, we consider the Netflix Prize data set, and its leading algorithms, adapted to the framework of differential privacy. Unlike prior privacy work concerned with cryptographically securing the computation of recommendations, differential privacy constrains a computation in a way that precludes any inference about the underlying records from its output. Such algorithms necessarily introduce uncertainty--i.e., noise--to computations, trading accuracy for privacy. We find that several of the leading approaches in the Netflix Prize competition can be adapted to provide differential privacy, without significantly degrading their accuracy. To adapt these algorithms, we explicitly factor them into two parts, an aggregation learning phase that can be performed with differential privacy guarantees, and an individual recommendation phase that uses the learned correlations and an individual's data to provide personalized recommendations. The adaptations are non-trivial, and involve both careful analysis of the per-record sensitivity of the algorithms to calibrate noise, as well as new post-processing steps to mitigate the impact of this noise. We measure the empirical trade-off between accuracy and privacy in these adaptations, and find that we can provide non-trivial formal privacy guarantees while still outperforming the Cinematch baseline Netflix provides.",
"Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multiagent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach."
]
} |
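Since the abstract in these rows leans on Count-Min Sketch for succinct data representation, a minimal (non-private) Count-Min Sketch helps fix ideas. The hash construction and default parameters below are illustrative choices, not those of the paper:

```python
import hashlib

class CountMinSketch:
    """Minimal Count-Min Sketch.  Point queries never under-estimate,
    and over-estimate by at most eps*N (N = total inserted mass) with
    probability 1-delta when width ~ e/eps and depth ~ ln(1/delta)."""

    def __init__(self, width=272, depth=5):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, item, row):
        # One independent-looking hash per row, derived from SHA-256.
        h = hashlib.sha256(f"{row}:{item}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count

    def query(self, item):
        # Minimum over rows: collisions can only inflate a cell, so the
        # smallest cell is the best (one-sided) estimate.
        return min(self.table[row][self._hash(item, row)]
                   for row in range(self.depth))
```

Because every user's contribution to the sketch is just a vector of counter increments, such sketches compose naturally with additive private-aggregation protocols: users aggregate their (encrypted) sketch counters instead of raw items.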
1508.06110 | 1906205923 | Large-scale collection of contextual information is often essential in order to gather statistics, train machine learning models, and extract knowledge from data. The ability to do so in a privacy-preserving way -- i.e., without collecting fine-grained user data -- enables a number of additional computational scenarios that would be hard, or outright impossible, to realize without strong privacy guarantees. In this paper, we present the design and implementation of practical techniques for privately gathering statistics from large data streams. We build on efficient cryptographic protocols for private aggregation and on data structures for succinct data representation, namely, Count-Min Sketch and Count Sketch. These allow us to reduce the communication and computation complexity incurred by each data source (e.g., end-users) from linear to logarithmic in the size of their input, while introducing a parametrized upper-bounded error that does not compromise the quality of the statistics. We then show how to use our techniques, efficiently, to instantiate real-world privacy-friendly systems, supporting recommendations for media streaming services, prediction of user locations, and computation of median statistics for Tor hidden services. | @cite_61 propose a privacy-preserving participatory sensing application which allows users to locate nearby friends without disclosing exact locations, via secure function evaluation @cite_60 , but do not address the problem of scaling to large streams and numbers of users. De Cristofaro and Soriente @cite_25 introduce a privacy-enhanced distributed querying infrastructure for participatory and urban sensing systems. The works in @cite_20 and @cite_49 provide either @math -anonymity @cite_16 or @math -diversity @cite_29 to guarantee anonymity of users through Mix Network techniques @cite_36 . However, their techniques are not provably secure and they only provide partial confidentiality.
Then, @cite_37 suggest data perturbation in a known community for computing statistics and protecting anonymity. Trusted Platform Modules (TPMs) are instead used in @cite_50 and @cite_32 to protect integrity and authenticity of user contents. | {
"cite_N": [
"@cite_61",
"@cite_37",
"@cite_60",
"@cite_36",
"@cite_29",
"@cite_32",
"@cite_49",
"@cite_50",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2074968817",
"2092422002",
"2103647628",
"2134167315",
"2145060596",
"2109784850",
"2036110521",
"2159024459",
"1963651427",
"2146697949"
],
"abstract": [
"",
"This paper develops mathematical foundations and architectural components for providing privacy guarantees on stream data in grassroots participatory sensing applications, where groups of participants use privately-owned sensors to collectively measure aggregate phenomena of mutual interest. Grassroots applications refer to those initiated by members of the community themselves as opposed to by some governing or official entities. The potential lack of a hierarchical trust structure in such applications makes it harder to enforce privacy. To address this problem, we develop a privacy-preserving architecture, called PoolView, that relies on data perturbation on the client-side to ensure individuals' privacy and uses community-wide reconstruction techniques to compute the aggregate information of interest. PoolView allows arbitrary parties to start new services, called pools, to compute new types of aggregate information for their clients. Both the client-side and server-side components of PoolView are implemented and available for download, including the data perturbation and reconstruction components. Two simple sensing services are developed for illustration; one computes traffic statistics from subscriber GPS data and the other computes weight statistics for a particular diet. Evaluation, using actual data traces collected by the authors, demonstrates the privacy-preserving aggregation functionality in PoolView.",
"Two millionaires wish to know who is richer; however, they do not want to find out inadvertently any additional information about each other’s wealth. How can they carry out such a conversation? This is a special case of the following general problem. Suppose m people wish to compute the value of a function f(x1, x2, x3, . . . , xm), which is an integer-valued function of m integer variables xi of bounded range. Assume initially person Pi knows the value of xi and no other x’s. Is it possible for them to compute the value of f , by communicating among themselves, without unduly giving away any information about the values of their own variables? The millionaires’ problem corresponds to the case when m = 2 and f(x1, x2) = 1 if x1 < x2, and 0 otherwise. In this paper, we will give precise formulation of this general problem and describe three ways of solving it by use of one-way functions (i.e., functions which are easy to evaluate but hard to invert). These results have applications to secret voting, private querying of database, oblivious negotiation, playing mental poker, etc. We will also discuss the complexity question “How many bits need to be exchanged for the computation”, and describe methods to prevent participants from cheating. Finally, we study the question “What cannot be accomplished with one-way functions”. Before describing these results, we would like to put this work in perspective by first considering a unified view of secure computation in the next section.",
"A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym, which appears in a roster of acceptable clients.",
"Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k − 1 other records with respect to certain identifying attributes. In this article, we show using two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems. First, an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. This is a known problem. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks, and we propose a novel and powerful privacy criterion called ℓ-diversity that can defend against such attacks. In addition to building a formal foundation for ℓ-diversity, we show in an experimental evaluation that ℓ-diversity is practical and can be implemented efficiently.",
"Grassroots Participatory Sensing empowers people to collect and share sensor data using mobile devices across many applications, spanning intelligent transportation, air quality monitoring and social networking. In this paper, we argue that the very openness of such a system makes it vulnerable to abuse by malicious users who may poison the information, collude to fabricate information, or launch Sybils to distort that information. We propose and implement a novel trusted platform module (TPM), or angel based system that addresses the problem of providing sensor data integrity. The key idea is to provide a trusted platform within each sensor device to attest the integrity of sensor readings. We argue that this localizes integrity checking to the device, rather than relying on corroboration, making the system not only simpler, but also resistant to collusion and data poisoning. A \"burned-in\" private key in the TPM prevents users from launching Sybils. We also make the case for content protection and access control mechanisms that enable users to publish sensor data streams to selected groups of people and address it using broadcast encryption techniques.",
"The ubiquity of mobile devices has brought forth the concept of participatory sensing, whereby ordinary citizens can now contribute and share information from the urban environment. However, such applications introduce a key research challenge: preserving the privacy of the individuals contributing data. In this paper, we study two different privacy concepts, k-anonymity and l-diversity, and demonstrate how their privacy models can be applied to protect users' spatial and temporal privacy in the context of participatory sensing. The first part of the paper focuses on schemes implementing k-anonymity. We propose the use of microaggregation, a technique used for facilitating disclosure control in databases, as an alternate to tessellation, which is the current state-of-the-art for location privacy in participatory sensing applications. We conduct a comparative study of the two techniques and demonstrate that each has its advantage in certain mutually exclusive situations. We then propose the Hybrid Variable size Maximum Distance to Average Vector (Hybrid-VMDAV) algorithm, which combines the positive aspects of microaggregation and tessellation. The second part of the paper addresses the limitations of the k-anonymity privacy model. We employ the principle of l-diversity and propose an l-diverse version of VMDAV (LD-VMDAV) as an improvement. In particular, LD-VMDAV is robust in situations where an adversary may have gained partial knowledge about certain attributes of the victim. We evaluate the performances of our proposed techniques using real-world traces. Our results show that Hybrid-VMDAV improves the percentage of positive identifications made by an application server by up to 100% and decreases the amount of information loss by about 40%. We empirically show that LD-VMDAV always outperforms its k-anonymity counterpart. In particular, it improves the ability of the applications to accurately interpret the anonymized location and time included in user reports. 
Our studies also confirm that perturbing the true locations of the users with random Gaussian noise can provide an extra layer of protection, while causing little impact on the application performance.",
"Commodity mobile devices have been utilized as sensor nodes in a variety of domains, including citizen journalism, mobile social services, and domestic eldercare. In each of these domains, data integrity and device-owners' privacy are first-class concerns, but current approaches to secure sensing fail to balance these properties. External signing infrastructure cannot attest to the values generated by a device's sensing hardware, while trusted sensing hardware does not allow users to securely reduce the fidelity of readings in order to preserve their privacy. In this paper we examine the challenges posed by the potentially conflicting goals of data integrity and user privacy and propose a trustworthy mobile sensing platform which leverages inexpensive commodity Trusted Platform Module (TPM) hardware.",
"Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection.",
"Participatory sensing is emerging as an innovative computing paradigm that targets the ubiquity of always-connected mobile phones and their sensing capabilities. In this paper, a multitude of pioneering applications increasingly carry out pervasive collection and dissemination of information and environmental data, such as traffic conditions, pollution, temperature, and so on. Participants collect and report measurements from their mobile devices and entrust them to the cloud to be made available to applications and users. Naturally, due to the personal information associated to the reports (e.g., location, movements, etc.), a number of privacy concerns need to be considered prior to a large-scale deployment of these applications. Motivated by the need for privacy protection in participatory sensing, this paper presents a privacy-enhanced participatory sensing infrastructure. We explore realistic architectural assumptions and a minimal set of formal requirements aiming at protecting privacy of both data producers and consumers. We propose two instantiations that attain privacy guarantees with provable security at very low additional computational cost and almost no extra communication overhead.",
"Personal mobile devices are increasingly equipped with the capability to sense the physical world (through cameras, microphones, and accelerometers, for example) and the network world (with Wi-Fi and Bluetooth interfaces). Such devices offer many new opportunities for cooperative sensing applications. For example, users' mobile phones may contribute data to community-oriented information services, from city-wide pollution monitoring to enterprise-wide detection of unauthorized Wi-Fi access points. This people-centric mobile-sensing model introduces a new security challenge in the design of mobile systems: protecting the privacy of participants while allowing their devices to reliably contribute high-quality data to these large-scale applications. We describe AnonySense, a privacy-aware architecture for realizing pervasive applications based on collaborative, opportunistic sensing by personal mobile devices. AnonySense allows applications to submit sensing tasks that will be distributed across anonymous participating mobile devices, later receiving verified, yet anonymized, sensor data reports back from the field, thus providing the first secure implementation of this participatory sensing model. We describe our trust model, and the security properties that drove the design of the AnonySense system. We evaluate our prototype implementation through experiments that indicate the feasibility of this approach, and through two applications: a Wi-Fi rogue access point detector and a lost-object finder."
]
} |
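The k-anonymity and l-diversity criteria recurring in the abstracts above are mechanical to verify on a released table. A minimal sketch (the toy table, quasi-identifier columns, and sensitive column are illustrative choices, not from any cited paper):

```python
from collections import Counter, defaultdict

def is_k_anonymous(rows, quasi_ids, k):
    """True if every combination of quasi-identifier values occurs >= k times."""
    counts = Counter(tuple(r[c] for c in quasi_ids) for r in rows)
    return all(n >= k for n in counts.values())

def is_l_diverse(rows, quasi_ids, sensitive, l):
    """True if every quasi-identifier group contains >= l distinct sensitive values."""
    groups = defaultdict(set)
    for r in rows:
        groups[tuple(r[c] for c in quasi_ids)].add(r[sensitive])
    return all(len(v) >= l for v in groups.values())

# Toy table: zip and age are quasi-identifiers, disease is the sensitive attribute.
rows = [
    {"zip": "130**", "age": "<30", "disease": "flu"},
    {"zip": "130**", "age": "<30", "disease": "cold"},
    {"zip": "148**", "age": ">40", "disease": "flu"},
    {"zip": "148**", "age": ">40", "disease": "flu"},
]
print(is_k_anonymous(rows, ["zip", "age"], 2))           # True
print(is_l_diverse(rows, ["zip", "age"], "disease", 2))  # False: second group has one disease
```

The second check illustrates exactly the homogeneity attack the l-diversity abstract describes: the table is 2-anonymous, yet every record in the second group shares the same sensitive value.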
1508.06216 | 2283881700 | Cardinality estimation algorithms receive a stream of elements whose order might be arbitrary, with possible repetitions, and return the number of distinct elements. Such algorithms usually seek to minimize the required storage and processing at the price of inaccuracy in their output. Real-world applications of these algorithms are required to process large volumes of monitored data, making it impractical to collect and analyze the entire input stream. In such cases, it is common practice to sample and process only a small part of the stream elements. This paper presents and analyzes a generic algorithm for combining every cardinality estimation algorithm with a sampling process. We show that the proposed sampling algorithm does not affect the estimator's asymptotic unbiasedness, and we analyze the sampling effect on the estimator's variance. | Although sampling techniques provide greater scalability, they also make it more difficult to infer the characteristics of the original stream. One of the first works addressing inference from samples is the Good-Turing frequency estimation, a statistical technique for estimating the probability of encountering a hitherto unseen element in a stream, given a set of past samples. For a recent paper on the Good-Turing technique, see @cite_18 . | {
"cite_N": [
"@cite_18"
],
"mid": [
"2063918473"
],
"abstract": [
"Linguists and speech researchers who use statistical methods often need to estimate the frequency of some type of item in a population containing items of various types. A common approach is to divide the number of cases observed in a sample by the size of the sample; sometimes small positive quantities are added to divisor and dividend in order to avoid zero estimates for types missing from the sample. These approaches are obvious and simple, but they lack principled justification, and yield estimates that can be wildly inaccurate. I.J. Good and Alan Turing developed a family of theoretically well-founded techniques appropriate to this domain. Some versions of the Good–Turing approach are very demanding computationally, but we define a version, the Simple Good–Turing estimator, which is straightforward to use. Tested on a variety of natural-language-related data sets, the Simple Good–Turing estimator performs well, absolutely and relative both to the approaches just discussed and to other, more sophisticated techniques."
]
} |
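The basic Good-Turing idea summarized above (as opposed to the full Simple Good-Turing smoothing) estimates the probability that the next stream element is a hitherto unseen type as N1/N, the fraction of the sample occupied by singleton types. A minimal sketch:

```python
from collections import Counter

def unseen_mass(sample):
    """Basic Good-Turing estimate of the probability that the next element
    is an unseen type: N1 / N, where N1 is the number of types observed
    exactly once and N is the sample size."""
    freqs = Counter(sample)
    n1 = sum(1 for c in freqs.values() if c == 1)
    return n1 / len(sample)

stream = ["a", "a", "b", "c", "c", "c", "d"]  # "b" and "d" are singletons
print(unseen_mass(stream))  # 2/7 ≈ 0.286
```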
1508.06216 | 2283881700 | Cardinality estimation algorithms receive a stream of elements whose order might be arbitrary, with possible repetitions, and return the number of distinct elements. Such algorithms usually seek to minimize the required storage and processing at the price of inaccuracy in their output. Real-world applications of these algorithms are required to process large volumes of monitored data, making it impractical to collect and analyze the entire input stream. In such cases, it is common practice to sample and process only a small part of the stream elements. This paper presents and analyzes a generic algorithm for combining every cardinality estimation algorithm with a sampling process. We show that the proposed sampling algorithm does not affect the estimator's asymptotic unbiasedness, and we analyze the sampling effect on the estimator's variance. | Related to the cardinality estimation problem is the problem of finding a uniform sample of the distinct values in the stream. Such a sample can be used for a variety of database management applications, such as query optimization, query monitoring, query progress indication and query execution time prediction @cite_2 @cite_19 @cite_7 . Additional applications of the uniform sample pertain to approximate query answering, such as estimating the mean, the variance, and the quantiles over the distinct values of the query @cite_29 @cite_10 @cite_13 . Several algorithms provide a uniform sample of the stream; for example, the authors of @cite_40 show how to find such a sample in a single data pass. Several variations of this work are also proposed in @cite_15 @cite_27 @cite_30 . However, all the discussed approaches require scanning the entire input stream, which is usually impractical. In this paper we present a generic algorithm that does not require a full data pass over the input stream. | {
"cite_N": [
"@cite_13",
"@cite_30",
"@cite_7",
"@cite_29",
"@cite_19",
"@cite_40",
"@cite_27",
"@cite_2",
"@cite_15",
"@cite_10"
],
"mid": [
"1993482412",
"2089066317",
"",
"2199389590",
"2078907037",
"2134786002",
"2143606444",
"2623818710",
"2131726153",
"2020584928"
],
"abstract": [
"In large data recording and warehousing environments, it is often advantageous to provide fast, approximate answers to queries, whenever possible. Before DBMSs providing highly-accurate approximate answers can become a reality, many new techniques for summarizing data and for estimating answers from summarized data must be developed. This paper introduces two new sampling-based summary statistics, concise samples and counting samples, and presents new techniques for their fast incremental maintenance regardless of the data distribution. We quantify their advantages over standard sample views in terms of the number of additional sample points for the same view size, and hence in providing more accurate query answers. Finally, we consider their application to providing fast approximate answers to hot list queries. Our algorithms maintain their accuracy in the presence of ongoing insertions to the data warehouse.",
"In data streaming applications, data arrives at rapid rates and in high volume, thus making it essential to process each stream update very efficiently in terms of both time and space. A data stream is a sequence of data records that must be processed continuously in an online fashion using sub-linear space and sub-linear processing time. We consider the problem of tracking the number of distinct items over data streams that allow insertion and deletion operations. We present two algorithms that improve on the space and time complexity of existing algorithms.",
"",
"In large data warehousing environments, it is often advantageous to provide fast, approximate answers to complex decision support queries using precomputed summary statistics, such as samples. Decision support queries routinely segment the data into groups and then aggregate the information in each group (group-by queries). Depending on the data, there can be a wide disparity between the number of data items in each group. As a result, approximate answers based on uniform random samples of the data can result in poor accuracy for groups with very few data items, since such groups will be represented in the sample by very few (often zero) tuples. In this paper, we propose a general class of techniques for obtaining fast, highly-accurate answers for group-by queries. These techniques rely on precomputed non-uniform (biased) samples of the data. In particular, we propose congressional samples, a hybrid union of uniform and biased samples. Given a fixed amount of space, congressional samples seek to maximize the accuracy for all possible group-by queries on a set of columns. We present a one pass algorithm for constructing a congressional sample and use this technique to also incrementally maintain the sample up-to-date without accessing the base relation. We also evaluate query rewriting strategies for providing approximate answers from congressional samples. Finally, we conduct an extensive set of experiments on the TPC-D database, which demonstrates the efficacy of the techniques proposed.",
"The ability to approximately answer aggregation queries accurately and efficiently is of great benefit for decision support and data mining tools. In contrast to previous sampling-based studies, we treat the problem as an optimization problem whose goal is to minimize the error in answering queries in the given workload. A key novelty of our approach is that we can tailor the choice of samples to be robust even for workloads that are “similar” but not necessarily identical to the given workload. Finally, our techniques recognize the importance of taking into account the variance in the data distribution in a principled manner. We show how our solution can be implemented on a database system, and present results of extensive experiments on Microsoft SQL Server 2000 that demonstrate the superior quality of our method compared to previous work.",
"Estimating the number of distinct values is a well-studied problem, due to its frequent occurrence in queries and its importance in selecting good query plans. Previous work has shown powerful negative results on the quality of distinct-values estimates based on sampling (or other techniques that examine only part of the input data). We present an approach, called distinct sampling, that collects a specially tailored sample over the distinct values in the input, in a single scan of the data. In contrast to the previous negative results, our small Distinct Samples are guaranteed to accurately estimate the number of distinct values. The samples can be incrementally maintained up-to-date in the presence of data insertions and deletions, with minimal time and memory overheads, so that the full scan may be performed only once. Moreover, a stored Distinct Sample can be used to accurately estimate the number of distinct values within any range specified by the query, or within any other subset of the data satisfying a query predicate. We present an extensive experimental study of distinct sampling. Using synthetic and real-world data sets, we show that distinct sampling gives distinct-values estimates to within 0%–10% relative error, whereas previous methods typically incur 50%–250% relative error. Next, we show how distinct sampling can provide fast, highly-accurate approximate answers for “report” queries in high-volume, session-based event recording environments, such as IP networks, customer service call centers, etc. For a commercial call center environment, we show that a 1% Distinct Sample",
"A dynamic geometric data stream is a sequence of m ADD/REMOVE operations of points from a discrete geometric space {1, …, Δ}^d. ADD(p) inserts a point p from {1, …, Δ}^d into the current point set P, REMOVE(p) deletes p from P. We develop low-storage data structures to (i) maintain ε-nets and ε-approximations of range spaces of P with small VC-dimension and (ii) maintain a (1 + ε)-approximation of the weight of the Euclidean minimum spanning tree of P. Our data structure for ε-nets uses bits of memory and returns with probability 1 – δ a set of points that is an ε-net for an arbitrary fixed finite range space with VC-dimension . Our data structure for ε-approximations uses bits of memory and returns with probability 1 – δ a set of points that is an ε-approximation for an arbitrary fixed finite range space with VC-dimension . The data structure for the approximation of the weight of a Euclidean minimum spanning tree uses O(log(1/δ)(log Δ/ε)^O(d)) space and is correct with probability at least 1 – δ. Our results are based on a new data structure that maintains a set of elements chosen (almost) uniformly at random from P.",
"",
"Emerging data stream management systems approach the challenge of massive data distributions which arrive at high speeds while there is only small storage by summarizing and mining the distributions using samples or sketches. However, data distributions can be \"viewed\" in different ways. A data stream of integer values can be viewed either as the forward distribution f (x), ie., the number of occurrences of x in the stream, or as its inverse, f-1 (i), which is the number of items that appear i times. While both such \"views\" are equivalent in stored data systems, over data streams that entail approximations, they may be significantly different. In other words, samples and sketches developed for the forward distribution may be ineffective for summarizing or mining the inverse distribution. Yet, many applications such as IP traffic monitoring naturally rely on mining inverse distributions.We formalize the problems of managing and mining inverse distributions and show provable differences between summarizing the forward distribution vs the inverse distribution. We present methods for summarizing and mining inverse distributions of data streams: they rely on a novel technique to maintain a dynamic sample over the stream with provable guarantees which can be used for variety of summarization tasks (building quantiles or equidepth histograms) and mining (anomaly detection: finding heavy hitters, and measuring the number of rare items), all with provable guarantees on quality of approximations and time space used by our streaming methods.We also complement our analytical and algorithmic results by presenting an experimental study of the methods over network data streams.",
"In large data warehousing environments, it is often advantageous to provide fast, approximate answers to complex aggregate queries based on statistical summaries of the full data. In this paper, we demonstrate the difficulty of providing good approximate answers for join-queries using only statistics (in particular, samples) from the base relations. We propose join synopses as an effective solution for this problem and show how precomputing just one join synopsis for each relation suffices to significantly improve the quality of approximate answers for arbitrary queries with foreign key joins. We present optimal strategies for allocating the available space among the various join synopses when the query work load is known and identify heuristics for the common case when the work load is not known. We also present efficient algorithms for incrementally maintaining join synopses in the presence of updates to the base relations. Our extensive set of experiments on the TPC-D benchmark database show the effectiveness of join synopses and various other techniques proposed in this paper."
]
} |
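None of the cited estimators is reproduced here, but a concrete example helps fix ideas: the standard k-minimum-values (KMV) sketch is one of the single-pass cardinality estimators that sampling schemes like the one in this paper can be combined with. A minimal sketch:

```python
import hashlib

def kmv_estimate(stream, k=64):
    """k-minimum-values cardinality estimate: hash each element to (0, 1),
    keep the k smallest distinct hash values h_(1) <= ... <= h_(k),
    and estimate the number of distinct elements as (k - 1) / h_(k)."""
    def h(x):
        digest = hashlib.sha1(str(x).encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64
    mins = sorted({h(x) for x in stream})[:k]
    if len(mins) < k:          # fewer than k distinct elements: count is exact
        return len(mins)
    return (k - 1) / mins[-1]

stream = [i % 1000 for i in range(10_000)]   # 1000 distinct values, each repeated
print(round(kmv_estimate(stream, k=128)))    # an estimate close to the true 1000
```

For brevity the sketch stores all distinct hashes before sorting; a streaming implementation would maintain only the k smallest in a heap.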
1508.06216 | 2283881700 | Cardinality estimation algorithms receive a stream of elements whose order might be arbitrary, with possible repetitions, and return the number of distinct elements. Such algorithms usually seek to minimize the required storage and processing at the price of inaccuracy in their output. Real-world applications of these algorithms are required to process large volumes of monitored data, making it impractical to collect and analyze the entire input stream. In such cases, it is common practice to sample and process only a small part of the stream elements. This paper presents and analyzes a generic algorithm for combining every cardinality estimation algorithm with a sampling process. We show that the proposed sampling algorithm does not affect the estimator's asymptotic unbiasedness, and we analyze the sampling effect on the estimator's variance. | The above works consider uniform packet sampling, where each packet is sampled with a fixed probability. Previous works have also dealt with size-dependent flow sampling, where packets are sampled with different probability, according to their flow size. The first works on size-dependent flow sampling study the problem of deciding which records in a given set of flow records should be discarded when storage constraints allow only a small fraction to be kept @cite_1 @cite_21 @cite_38 . The sampling decision in these works is made off-line: a flow is first received and only then discarded or stored. In @cite_34 , the on-line version of this problem is studied. In this version, upon receiving a packet, the algorithm needs to determine whether to keep it. The authors develop a new packet sampling method that samples each packet with probability @math , where @math is a decreasing function of the estimated size of the corresponding flow when the packet is received, and the size of the flow is estimated using a small sketch that stores the approximate sizes of all flows. | {
"cite_N": [
"@cite_38",
"@cite_21",
"@cite_1",
"@cite_34"
],
"mid": [
"1973515534",
"2111707959",
"2886043355",
"2111806841"
],
"abstract": [
"Many network management applications use as their data traffic volumes differentiated by attributes such as IP address or port number. IP flow records are commonly collected for this purpose: these enable determination of fine-grained usage of network resources. However, the increasingly large volumes of flow statistics incur concomitant costs in the resources of the measurement infrastructure. This motivates sampling of flow records.This paper addresses sampling strategy for flow records. Recent work has shown that non-uniform sampling is necessary in order to control estimation variance arising from the observed heavy-tailed distribution of flow lengths. However, while this approach controls estimator variance, it does not place hard limits on the number of flows sampled. Such limits are often required during arbitrary downstream sampling, resampling and aggregation operations employed in analysis of the data.This paper proposes a correlated sampling strategy that is able to select an arbitrarily small number of the \"best\" representatives of a set of flows. We show that usage estimates arising from such selection are unbiased, and show how to estimate their variance, both offline for modeling purposes, and online during the sampling itself. The selection algorithm can be implemented in a queue-like data structure in which memory usage is uniformly bounded during measurement. Finally, we compare the complexity and performance of our scheme with other potential approaches.",
"IP flows have heavy-tailed packet and byte size distributions. This makes them poor candidates for uniform sampling---i.e. selecting 1 in N flows---since omission or inclusion of a large flow can have a large effect on estimated total traffic. Flows selected in this manner are thus unsuitable for use in usage sensitive billing. We propose instead using a size-dependent sampling scheme which gives priority to the larger contributions to customer usage. This turns the heavy tails to our advantage; we can obtain accurate estimates of customer usage from a relatively small number of important samples. The sampling scheme allows us to control error when charging is sensitive to estimated usage only above a given base level. A refinement allows us to strictly limit the chance that a customer's estimated usage will exceed their actual usage. Furthermore, we show that a secondary goal, that of controlling the rate at which samples are produced, can be fulfilled provided the billing cycle is sufficiently long. All these claims are supported by experiments on flow traces gathered from a commercial network.",
"",
"Summaries of massive data sets support approximate query processing over the original data. A basic aggregate over a set of records is the weight of subpopulations specified as a predicate over records' attributes. Bottom-k sketches are a powerful summarization format of weighted items that includes priority sampling [22], and the classic weighted sampling without replacement. They can be computed efficiently for many representations of the data including distributed databases and data streams and support coordinated and all-distances sketches. We derive novel unbiased estimators and confidence bounds for subpopulation weight. Our rank conditioning (RC) estimator is applicable when the total weight of the sketched set cannot be computed by the summarization algorithm without a significant use of additional resources (such as for sketches of network neighborhoods) and the tighter subset conditioning (SC) estimator that is applicable when the total weight is available (sketches of data streams). Our estimators are derived using clever applications of the Horvitz-Thompson estimator (that is not directly applicable to bottom-k sketches). We develop efficient computational methods and conduct performance evaluation using a range of synthetic and real data sets. We demonstrate considerable benefits of the SC estimator on larger subpopulations (over all other estimators); of the RC estimator (over existing estimators for weighted sampling without replacement); and of our confidence bounds (over all previous approaches)."
]
} |
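The size-dependent idea described above can be sketched in a few lines. This is an illustration, not the cited algorithm: the decreasing probability p(c) = min(1, z/(c + 1)) and the plain dict standing in for the approximate flow-size sketch are both assumptions made for brevity.

```python
import random

def size_dependent_sample(packets, z=10, seed=0):
    """Keep each packet of a flow with probability p(c) = min(1, z / (c + 1)),
    where c is the number of packets already seen for that flow.  Large
    (elephant) flows are thus down-sampled heavily while small flows are
    kept almost in full; z controls the per-flow sampling budget."""
    rng = random.Random(seed)
    counts, kept = {}, []
    for flow_id in packets:
        c = counts.get(flow_id, 0)
        if rng.random() < min(1.0, z / (c + 1)):
            kept.append(flow_id)
        counts[flow_id] = c + 1
    return kept

# One elephant flow (A) of 1000 packets and 50 single-packet mice flows.
packets = ["A"] * 1000 + [f"m{i}" for i in range(50)]
sample = size_dependent_sample(packets)
print(sample.count("A"), sum(1 for p in sample if p != "A"))
```

With z = 10, the first packet of every flow is always kept, so all 50 mice survive, while only on the order of tens of the elephant's 1000 packets do.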
1508.05817 | 2261965950 | While the effect of various lexical, syntactic, semantic and stylistic features have been addressed in persuasive language from a computational point of view, the persuasive effect of phonetics has received little attention. By modeling a notion of euphony and analyzing four datasets comprising persuasive and non-persuasive sentences in different domains (political speeches, movie quotes, slogans and tweets), we explore the impact of sounds on different forms of persuasiveness. We conduct a series of analyses and prediction experiments within and across datasets. Our results highlight the positive role of phonetic devices on persuasion. | propose a phonetic scorer for creative sentence generation such that generated sentences can contain various phonetic features including alliteration, rhyme and plosive sounds. The authors evaluate the proposed model on automatic slogan generation. In a more recent work @cite_1 , they enforce the existence of these features in the sentences that are automatically generated for second language learning to introduce hooks to echoic memory. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2151769600"
],
"abstract": [
"In this paper, we combine existing NLP techniques with minimal supervision to build memory tips according to the keyword method, a well established mnemonic device for second language learning. We present what we believe to be the first extrinsic evaluation of a creative sentence generator on a vocabulary learning task. The results demonstrate that NLP techniques can effectively support the development of resources for second language learning."
]
} |
1508.05514 | 2201133737 | Bayesian inference is a statistical inference technique in which Bayes’ theorem is used to update the probability distribution of a random variable using observations. Except for few simple cases, expression of such probability distributions using compact analytical expressions is infeasible. Approximation methods are required to express the a priori knowledge about a random variable in form of prior distributions. Further approximations are needed to compute posterior distributions of the random variables using the observations. When the computational complexity of representation of such posteriors increases over time as in mixture models, approximations are required to reduce the complexity of such representations.This thesis further extends existing approximation methods for Bayesian inference, and generalizes the existing approximation methods in three aspects namely; prior selection, posterior evaluation given the observations and maintenance of computation complexity.Particularly, the maximum entropy properties of the first-order stable spline kernel for identification of linear time-invariant stable and causal systems are shown. Analytical approximations are used to express the prior knowledge about the properties of the impulse response of a linear time-invariant stable and causal system.Variational Bayes (VB) method is used to compute an approximate posterior in two inference problems. In the first problem, an approximate posterior for the state smoothing problem for linear statespace models with unknown and time-varying noise covariances is proposed. In the second problem, the VB method is used for approximate inference in state-space models with skewed measurement noise.Moreover, a novel approximation method for Bayesian inference is proposed. The proposed Bayesian inference technique is based on Taylor series approximation of the logarithm of the likelihood function. 
The proposed approximation is devised for the case where the prior distribution belongs to the exponential family of distributions. Finally, two contributions are dedicated to the mixture reduction (MR) problem. The first contribution generalizes the existing MR algorithms for Gaussian mixtures to the exponential family of distributions and compares them in an extended target tracking scenario. The second contribution proposes a new Gaussian mixture reduction algorithm which minimizes the reverse Kullback-Leibler divergence and has specific peak-preserving properties. | Runnalls' method @cite_7 is a global greedy MR algorithm that minimizes the FKLD (i.e., @math ). Unfortunately, the KLD between two Gaussian mixtures cannot be calculated analytically. Runnalls uses an analytical upper bound for the KLD which can only be used for comparing merging hypotheses. The upper bound @math for @math is used as the cost of merging the components @math and @math , where @math is the merged component density. Hence, the original global decision statistic @math for merging is replaced with its local approximation @math to obtain the decision rule as follows: | {
"cite_N": [
"@cite_7"
],
"mid": [
"2125105520"
],
"abstract": [
"A common problem in multi-target tracking is to approximate a Gaussian mixture by one containing fewer components; similar problems can arise in integrated navigation. A common approach is successively to merge pairs of components, replacing the pair with a single Gaussian component whose moments up to second order match those of the merged pair. Salmond [1] and Williams [2, 3] have each proposed algorithms along these lines, but using different criteria for selecting the pair to be merged at each stage. The paper shows how under certain circumstances each of these pair-selection criteria can give rise to anomalous behaviour, and proposes that a key consideration should the the Kullback-Leibler (KL) discrimination of the reduced mixture with respect to the original mixture. Although computing this directly would normally be impractical, the paper shows how an easily computed upper bound can be used as a pair-selection criterion which avoids the anomalies of the earlier approaches. The behaviour of the three algorithms is compared using a high-dimensional example drawn from terrain-referenced navigation."
]
} |
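The merging step and upper-bound cost described above can be sketched as follows (a minimal illustration of moment-matched merging and Runnalls' bound B for two weighted Gaussian components; the function names and interface are my own):

```python
import numpy as np

def merge_moment_match(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    # Merged covariance preserves the pair's second moment.
    P = (w1 * (P1 + np.outer(d1, d1)) + w2 * (P2 + np.outer(d2, d2))) / w
    return w, m, P

def runnalls_cost(w1, m1, P1, w2, m2, P2):
    """Upper bound on the KLD increase caused by merging components i and j:
    B = 0.5 * [(w1 + w2) ln det(P_merged) - w1 ln det(P1) - w2 ln det(P2)]."""
    w, _, P = merge_moment_match(w1, m1, P1, w2, m2, P2)
    return 0.5 * (w * np.log(np.linalg.det(P))
                  - w1 * np.log(np.linalg.det(P1))
                  - w2 * np.log(np.linalg.det(P2)))
```

Merging two identical components has zero cost, while merging well-separated components inflates the merged covariance and hence the bound, which is why the greedy reducer repeatedly merges the pair of smallest B.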
1508.05514 | 2201133737 | Bayesian inference is a statistical inference technique in which Bayes’ theorem is used to update the probability distribution of a random variable using observations. Except for few simple cases, expression of such probability distributions using compact analytical expressions is infeasible. Approximation methods are required to express the a priori knowledge about a random variable in form of prior distributions. Further approximations are needed to compute posterior distributions of the random variables using the observations. When the computational complexity of representation of such posteriors increases over time as in mixture models, approximations are required to reduce the complexity of such representations.This thesis further extends existing approximation methods for Bayesian inference, and generalizes the existing approximation methods in three aspects namely; prior selection, posterior evaluation given the observations and maintenance of computation complexity.Particularly, the maximum entropy properties of the first-order stable spline kernel for identification of linear time-invariant stable and causal systems are shown. Analytical approximations are used to express the prior knowledge about the properties of the impulse response of a linear time-invariant stable and causal system.Variational Bayes (VB) method is used to compute an approximate posterior in two inference problems. In the first problem, an approximate posterior for the state smoothing problem for linear statespace models with unknown and time-varying noise covariances is proposed. In the second problem, the VB method is used for approximate inference in state-space models with skewed measurement noise.Moreover, a novel approximation method for Bayesian inference is proposed. The proposed Bayesian inference technique is based on Taylor series approximation of the logarithm of the likelihood function. 
The proposed approximation is devised for the case where the prior distribution belongs to the exponential family of distributions. Finally, two contributions are dedicated to the mixture reduction (MR) problem. The first contribution generalizes the existing MR algorithms for Gaussian mixtures to the exponential family of distributions and compares them in an extended target tracking scenario. The second contribution proposes a new Gaussian mixture reduction algorithm which minimizes the reverse Kullback-Leibler divergence and has specific peak-preserving properties. | Williams and Maybeck proposed a global greedy MRA in @cite_4 where the ISE is used as the cost function. The ISE between two probability distributions @math and @math is defined by @math . The ISE has all the properties of a metric, such as symmetry and the triangle inequality, and is analytically tractable for Gaussian mixtures. Williams' method minimizes @math over all pruning and merging hypotheses, i.e., | {
"cite_N": [
"@cite_4"
],
"mid": [
"1974193495"
],
"abstract": [
"The problem of tracking targets in clutter naturally leads to a Gaussian mixture representation of the probability density function of the target state vector. Modern tracking methods maintain the mean, covariance and probability weight corresponding to each hypothesis, yet they rely on simple merging and pruning rules to control the growth of hypotheses. This paper proposes a structured, cost-function-based approach to the hypothesis control problem, utilizing the Integral Square Error (ISE) cost measure. A comparison of track life performance versus computational cost is made between the ISE-based filter and previously proposed approximations including simple pruning, Singer's n-scan memory filter, Salmond's joining filter, and Chen and Liu's Mixture Kalman Filter (MKF). The results demonstrate that the ISE-based mixture reduction algorithm provides mean track life which is significantly greater than that of the compared techniques using similar numbers of mixture components, and mean track life competitive with that of the compared algorithms for similar mean computation times."
]
} |
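The analytic tractability of the ISE for Gaussian mixtures rests on the identity ∫ N(x; m1, P1) N(x; m2, P2) dx = N(m1; m2, P1 + P2). A minimal sketch of the resulting closed form (my own illustration, not the paper's implementation):

```python
import numpy as np

def gauss_prod_integral(m1, P1, m2, P2):
    """Integral of N(x; m1, P1) * N(x; m2, P2) over x = N(m1; m2, P1 + P2)."""
    S = P1 + P2
    d = m1 - m2
    k = m1.size
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt(
        (2 * np.pi) ** k * np.linalg.det(S))

def ise(mix_f, mix_g):
    """Integral square error between two Gaussian mixtures.

    Each mixture is a list of (weight, mean, covariance) triples.
    ISE(f, g) = J_ff - 2 J_fg + J_gg, where each J term is a double sum
    of weighted pairwise Gaussian product integrals.
    """
    def cross(a, b):
        return sum(wi * wj * gauss_prod_integral(mi, Pi, mj, Pj)
                   for wi, mi, Pi in a for wj, mj, Pj in b)
    return cross(mix_f, mix_f) - 2 * cross(mix_f, mix_g) + cross(mix_g, mix_g)
```

The cost of one evaluation is quadratic in the number of components, which is exactly the expense the next paragraph attributes to Williams' global greedy approach.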
1508.05514 | 2201133737 | Bayesian inference is a statistical inference technique in which Bayes’ theorem is used to update the probability distribution of a random variable using observations. Except for few simple cases, expression of such probability distributions using compact analytical expressions is infeasible. Approximation methods are required to express the a priori knowledge about a random variable in form of prior distributions. Further approximations are needed to compute posterior distributions of the random variables using the observations. When the computational complexity of representation of such posteriors increases over time as in mixture models, approximations are required to reduce the complexity of such representations.This thesis further extends existing approximation methods for Bayesian inference, and generalizes the existing approximation methods in three aspects namely; prior selection, posterior evaluation given the observations and maintenance of computation complexity.Particularly, the maximum entropy properties of the first-order stable spline kernel for identification of linear time-invariant stable and causal systems are shown. Analytical approximations are used to express the prior knowledge about the properties of the impulse response of a linear time-invariant stable and causal system.Variational Bayes (VB) method is used to compute an approximate posterior in two inference problems. In the first problem, an approximate posterior for the state smoothing problem for linear statespace models with unknown and time-varying noise covariances is proposed. In the second problem, the VB method is used for approximate inference in state-space models with skewed measurement noise.Moreover, a novel approximation method for Bayesian inference is proposed. The proposed Bayesian inference technique is based on Taylor series approximation of the logarithm of the likelihood function. 
The proposed approximation is devised for the case where the prior distribution belongs to the exponential family of distributions. Finally, two contributions are dedicated to the mixture reduction (MR) problem. The first contribution generalizes the existing MR algorithms for Gaussian mixtures to the exponential family of distributions and compares them in an extended target tracking scenario. The second contribution proposes a new Gaussian mixture reduction algorithm which minimizes the reverse Kullback-Leibler divergence and has specific peak-preserving properties. | Williams' method, being a global greedy approach to MR, is computationally quite expensive for mixture densities with many components. The computational burden results from the following facts. Reducing a mixture with @math components to a mixture with @math components involves @math hypotheses. Since the computational load of calculating the ISE between mixtures of @math and @math components is @math , reducing a mixture with @math components to a mixture with @math components has computational complexity @math with Williams' method. On the other hand, using the upper bound, Runnalls' method avoids the computations associated with the components that are not directly involved in the merging operation, resulting in just @math computations for the same reduction. Another disadvantage of Williams' method is that the ISE does not scale up nicely with the dimension, as pointed out in an example in @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2125105520"
],
"abstract": [
"A common problem in multi-target tracking is to approximate a Gaussian mixture by one containing fewer components; similar problems can arise in integrated navigation. A common approach is successively to merge pairs of components, replacing the pair with a single Gaussian component whose moments up to second order match those of the merged pair. Salmond [1] and Williams [2, 3] have each proposed algorithms along these lines, but using different criteria for selecting the pair to be merged at each stage. The paper shows how under certain circumstances each of these pair-selection criteria can give rise to anomalous behaviour, and proposes that a key consideration should the the Kullback-Leibler (KL) discrimination of the reduced mixture with respect to the original mixture. Although computing this directly would normally be impractical, the paper shows how an easily computed upper bound can be used as a pair-selection criterion which avoids the anomalies of the earlier approaches. The behaviour of the three algorithms is compared using a high-dimensional example drawn from terrain-referenced navigation."
]
} |
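The hypothesis counts above can be made concrete with a small helper (my own illustration): one reduction step from N components considers N pruning hypotheses plus C(N, 2) pairwise-merge hypotheses.

```python
from math import comb

def one_step_hypotheses(N):
    """Candidate reductions from N to N-1 components:
    N pruning hypotheses + C(N, 2) pairwise merging hypotheses."""
    return N + comb(N, 2)

def greedy_reduction_hypotheses(N, M):
    """Total hypotheses examined by a greedy reducer going from N down to M."""
    return sum(one_step_hypotheses(n) for n in range(N, M, -1))
```

Since each hypothesis in Williams' method requires a full ISE evaluation (quadratic in the component count) while Runnalls' bound is local to the merged pair, the hypothesis count translates into a much larger per-step cost for the ISE-based reducer.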
1508.05879 | 1871338341 | We address the issue of adapting optical images-based edge detection techniques for use in Polarimetric Synthetic Aperture Radar (PolSAR) imagery. We modify the gravitational edge detection technique (inspired by the Law of Universal Gravity) proposed by Lopez-, using the non-standard neighbourhood configuration proposed by , to reduce the speckle noise in polarimetric SAR imagery. We compare the modified and unmodified versions of the gravitational edge detection technique with the well-established one proposed by Canny, as well as with a recent multiscale fuzzy-based technique proposed by Lopez- We also address the issues of aggregation of gray level images before and after edge detection and of filtering. All techniques addressed here are applied to a mosaic built using class distributions obtained from a real scene, as well as to the true PolSAR image; the mosaic results are assessed using Baddeley's Delta Metric. Our experiments show that modifying the gravitational edge detection technique with a non-standard neighbourhood configuration produces better results than the original technique, as well as the other techniques used for comparison. The experiments show that adapting edge detection methods from Computational Intelligence for use in PolSAR imagery is a new field worthy of exploration. | One of the most successful edge detection algorithms for optical images was proposed by Canny @cite_21 , based on the following guidelines: i) the algorithm should mark as many real edges in the image as possible; ii) the marked edges should be as close as possible to the edge in the real image; iii) a given edge in the image should only be marked once; and iv) image noise should not create false edges. It makes use of numerical optimization to derive optimal operators for ridge and roof edges. The usual implementation of this method uses a @math neighbourhood. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2145023731"
],
"abstract": [
"This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge."
]
} |
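A stripped-down sketch of the gradient stage underlying Canny-style detection (pure NumPy; Gaussian smoothing, non-maximum suppression, and hysteresis are omitted for brevity, so this is only the Sobel-magnitude core, not Canny's full optimized detector):

```python
import numpy as np

def sobel_gradients(img):
    """3x3 Sobel gradients of a 2-D array, with edge padding."""
    f = img.astype(float)
    p = np.pad(f, 1, mode="edge")
    # Horizontal-derivative kernel [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    # Vertical-derivative kernel: bottom rows minus top rows.
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return gx, gy

def edge_map(img, high=0.5):
    """Mark pixels whose gradient magnitude exceeds a fraction of the maximum."""
    gx, gy = sobel_gradients(img)
    mag = np.hypot(gx, gy)
    return mag >= high * (mag.max() + 1e-12)
```

On a vertical step image the responding pixels are exactly the two columns adjacent to the discontinuity, which is the localization behaviour the full Canny pipeline then sharpens with non-maximum suppression.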
1508.05879 | 1871338341 | We address the issue of adapting optical images-based edge detection techniques for use in Polarimetric Synthetic Aperture Radar (PolSAR) imagery. We modify the gravitational edge detection technique (inspired by the Law of Universal Gravity) proposed by Lopez-, using the non-standard neighbourhood configuration proposed by , to reduce the speckle noise in polarimetric SAR imagery. We compare the modified and unmodified versions of the gravitational edge detection technique with the well-established one proposed by Canny, as well as with a recent multiscale fuzzy-based technique proposed by Lopez- We also address the issues of aggregation of gray level images before and after edge detection and of filtering. All techniques addressed here are applied to a mosaic built using class distributions obtained from a real scene, as well as to the true PolSAR image; the mosaic results are assessed using Baddeley's Delta Metric. Our experiments show that modifying the gravitational edge detection technique with a non-standard neighbourhood configuration produces better results than the original technique, as well as the other techniques used for comparison. The experiments show that adapting edge detection methods from Computational Intelligence for use in PolSAR imagery is a new field worthy of exploration. | A more recent multi-scale edge detection method was proposed by Lopez- @cite_18 , using Sobel operators for edge extraction and the concept of Gaussian scale-space. More specifically, the Sobel edge detection method is applied on increasingly smoother versions of the image. Then, the edges which appear on different scales are combined by performing coarse-to-fine edge tracking. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2023578721"
],
"abstract": [
"The human vision is usually considered a multiscale, hierarchical knowledge extraction system. Inspired by this fact, multiscale techniques for computer vision perform a sequential analysis, driven by different interpretations of the concept of scale. In the case of edge detection, the scale usually relates to the size of the region where the intensity changes are measured or to the size of the regularization filter applied before edge extraction. Multiscale edge detection methods constitute an effort to combine the spatial accuracy of fine-scale methods with the ability to deal with spurious responses inherent to coarse-scale methods. In this work we introduce a multiscale method for edge detection based on increasing Gaussian smoothing, the Sobel operators and coarse-to-fine edge tracking. We include visual examples and quantitative evaluations illustrating the benefits of our proposal."
]
} |
1508.05879 | 1871338341 | We address the issue of adapting optical images-based edge detection techniques for use in Polarimetric Synthetic Aperture Radar (PolSAR) imagery. We modify the gravitational edge detection technique (inspired by the Law of Universal Gravity) proposed by Lopez-, using the non-standard neighbourhood configuration proposed by , to reduce the speckle noise in polarimetric SAR imagery. We compare the modified and unmodified versions of the gravitational edge detection technique with the well-established one proposed by Canny, as well as with a recent multiscale fuzzy-based technique proposed by Lopez- We also address the issues of aggregation of gray level images before and after edge detection and of filtering. All techniques addressed here are applied to a mosaic built using class distributions obtained from a real scene, as well as to the true PolSAR image; the mosaic results are assessed using Baddeley's Delta Metric. Our experiments show that modifying the gravitational edge detection technique with a non-standard neighbourhood configuration produces better results than the original technique, as well as the other techniques used for comparison. The experiments show that adapting edge detection methods from Computational Intelligence for use in PolSAR imagery is a new field worthy of exploration. | The so-called Lee (or sigma) filter, introduced in 1983 @cite_13 , is still in use today due to its simplicity, its effectiveness in speckle reduction, and its computational efficiency. It is based on the fact that, under the Gaussian distribution, approximately @math of the samples lie within two standard deviations of the mean. For each @math window in an image, the values considered for the surrounding pixels in the window are no longer the ones in the original image, but the mean values in this new configuration. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2052491606"
],
"abstract": [
"Speckles appearing in synthetic aperture radar (SAR) images are generated by the coherent processing of radar signals. Basically the speckles have the nature of a multiplicative noise. A simple and effective method of smoothing speckle-corrupted images by a digital computer is discussed, based on the recently developed sigma filter. The sigma filter is motivated by the sigma probability of a Gaussian distribution. The pixel to be processed is replaced by an advantage of those neighboring pixels having their gray level within two noise standard deviations from that of the concerned pixel. Consequently the speckles are suppressed without blurring edges and fine detail. Several Seasat SAR images are used for illustration, and comparisons are made with several noise-smoothing algorithms. Extensions of this algorithm to contrast enhancement and signal-dependent noise filtering are also presented. This algorithm is computationally efficient, and has the potential to achieve real or near real-time processing."
]
} |
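The two-sigma averaging rule of the Lee sigma filter can be sketched as follows (a simplified additive-noise version of my own; the original formulation handles multiplicative speckle):

```python
import numpy as np

def sigma_filter(img, radius=1, n_sigma=2.0, noise_sigma=0.1):
    """Each pixel is replaced by the average of the window pixels whose value
    lies within n_sigma * noise_sigma of the pixel's own value."""
    f = img.astype(float)
    out = np.empty_like(f)
    H, W = f.shape
    for i in range(H):
        for j in range(W):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, H)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, W)
            win = f[i0:i1, j0:j1]
            # The selection always contains the pixel itself (|x - x| = 0).
            sel = np.abs(win - f[i, j]) <= n_sigma * noise_sigma
            out[i, j] = win[sel].mean()
    return out
```

Because pixels on the far side of a sharp step fall outside the two-sigma band, the averaging never mixes across the step, which is why the filter smooths noise without blurring edges.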
1508.05786 | 2952361412 | Consider @math sensors placed randomly and independently with the uniform distribution in a @math dimensional unit cube ( @math ). The sensors have identical sensing range equal to @math , for some @math . We are interested in moving the sensors from their initial positions to new positions so as to ensure that the @math dimensional unit cube is completely covered, i.e., every point in the @math dimensional cube is within the range of a sensor. If the @math -th sensor is displaced a distance @math , what is a displacement of minimum cost? As cost measure for the displacement of the team of sensors we consider the @math -total movement defined as the sum @math , for some constant @math . We assume that @math and @math are chosen so as to allow full coverage of the @math dimensional unit cube and @math . The main contribution of the paper is to show the existence of a tradeoff between the @math dimensional cube, sensing radius and @math -total movement. The main results can be summarized as follows for the case of the @math dimensional cube. If the @math dimensional cube sensing radius is @math and @math , for some @math , then we present an algorithm that uses @math total expected movement (see Algorithm 2 and Theorem 5). If the @math dimensional cube sensing radius is greater than @math and @math is a natural number then the total expected movement is @math (see Algorithm 3 and Theorem 7). In addition, we simulate Algorithm 2 and discuss the results of our simulations. | Assume that @math sensors of identical range are all initially placed on a line. It was shown in @cite_1 that there is an @math algorithm for minimizing the max displacement of a sensor while the optimization problem becomes NP-complete if there are two separate (non-overlapping) barriers on the line (cf. also @cite_3 for arbitrary sensor ranges). 
If the optimization cost is the sum of displacements, then @cite_9 shows that the problem is NP-complete when arbitrary sensor ranges are allowed, while an @math algorithm is given when all sensing ranges are the same. Similarly, if one is interested in the number of sensors moved, then the coverage problem is NP-complete when arbitrary sensor ranges are allowed, and an @math algorithm is given when all sensing ranges are the same @cite_8 . Further, @cite_12 considers the algorithmic complexity of several natural generalizations of the barrier coverage problem with sensors of arbitrary ranges, including when the initial positions of sensors are arbitrary points in the two-dimensional plane, as well as multiple barriers that are parallel or perpendicular to each other. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_12"
],
"mid": [
"2125242971",
"1530465016",
"",
"1817887473",
"2217656054"
],
"abstract": [
"We study the problem of achieving maximum barrier coverage by sensors on a barrier modeled by a line segment, by moving the minimum possible number of sensors, initially placed at arbitrary positions on the line containing the barrier. We consider several cases based on whether or not complete coverage is possible, and whether non-contiguous coverage is allowed in the case when complete coverage is impossible. When the sensors have unequal transmission ranges, we show that the problem of finding a minimum-sized subset of sensors to move in order to achieve maximum contiguous or non-contiguous coverage on a finite line segment barrier is NP-complete. In contrast, if the sensors all have the same range, we give efficient algorithms to achieve maximum contiguous as well as non-contiguous coverage. For some cases, we reduce the problem to finding a maximum-hop path of a certain minimum (maximum) weight on a related graph, and solve it using dynamic programming.",
"A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of a sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I) and the sensing ranges of the sensors, we are interested in finding final positions of sensors which establish a barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP complete even to approximate up to constant factor when the sensors may have different sensing ranges. When the sensors have an identical sensing range we give several efficient algorithms to calculate the final destinations so that the sensors either establish a barrier coverage or maximize the coverage of the segment if complete coverage is not feasible while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.",
"",
"In this paper, we study the problem of moving n sensors on a line to form a barrier coverage of a specified segment of the line such that the maximum moving distance of the sensors is minimized. Previously, it was an open question whether this problem on sensors with arbitrary sensing ranges is solvable in polynomial time. We settle this open question positively by giving an O(n2lognloglogn) time algorithm. Further, if all sensors have the same-size sensing range, we give an O(nlogn) time algorithm, which improves the previous best O(n2) time solution.",
"We consider several variations of the problems of covering a set of barriers (modeled as line segments) using sensors that can detect any intruder crossing any of the barriers. Sensors are initially located in the plane and they can relocate to the barriers. We assume that each sensor can detect any intruder in a circular area centered at the sensor. Given a set of barriers and a set of sensors located in the plane, we study three problems: the feasibility of barrier coverage, the problem of minimizing the largest relocation distance of a sensor (MinMax), and the problem of minimizing the sum of relocation distances of sensors (MinSum). When sensors are permitted to move to arbitrary positions on the barrier, the problems are shown to be NP-complete. We also study the case when sensors use perpendicular movement to one of the barriers. We show that when the barriers are parallel, both the MinMax and MinSum problems can be solved in polynomial time. In contrast, we show that even the feasibility problem is NP-complete if two perpendicular barriers are to be covered, even if the sensors are located at integer positions, and have only two possible sensing ranges. On the other hand, we give an O(n 3 2) algorithm for a natural special case of this last problem."
]
} |
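For identical sensing ranges on a line barrier, an order-preserving assignment of the sorted sensors to evenly spaced covering positions is a natural sketch of sum-of-displacements barrier coverage (my own illustration under the assumption 2rn >= barrier length; not the exact algorithm of the cited works):

```python
def min_sum_relocation(positions, r, barrier_len):
    """Move n sensors of identical range r to cover [0, barrier_len].

    Targets are the order-preserving covering positions r, 3r, 5r, ...;
    pairing sorted sensors with sorted targets minimizes total displacement
    among order-preserving full covers. Returns (targets, total_movement).
    """
    n = len(positions)
    if 2 * r * n < barrier_len:
        raise ValueError("full coverage impossible")
    xs = sorted(positions)
    targets = [(2 * k + 1) * r for k in range(n)]
    total = sum(abs(x - t) for x, t in zip(xs, targets))
    return targets, total
```

Sensors already sitting at covering positions incur zero movement, while clustered initial placements pay the full spreading cost.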
1508.05786 | 2952361412 | Consider @math sensors placed randomly and independently with the uniform distribution in a @math dimensional unit cube ( @math ). The sensors have identical sensing range equal to @math , for some @math . We are interested in moving the sensors from their initial positions to new positions so as to ensure that the @math dimensional unit cube is completely covered, i.e., every point in the @math dimensional cube is within the range of a sensor. If the @math -th sensor is displaced a distance @math , what is a displacement of minimum cost? As cost measure for the displacement of the team of sensors we consider the @math -total movement defined as the sum @math , for some constant @math . We assume that @math and @math are chosen so as to allow full coverage of the @math dimensional unit cube and @math . The main contribution of the paper is to show the existence of a tradeoff between the @math dimensional cube, sensing radius and @math -total movement. The main results can be summarized as follows for the case of the @math dimensional cube. If the @math dimensional cube sensing radius is @math and @math , for some @math , then we present an algorithm that uses @math total expected movement (see Algorithm 2 and Theorem 5). If the @math dimensional cube sensing radius is greater than @math and @math is a natural number then the total expected movement is @math (see Algorithm 3 and Theorem 7). In addition, we simulate Algorithm 2 and discuss the results of our simulations. | An important setting in considerations for barrier coverage is when the sensors are placed at random on the barrier according to the uniform distribution. Clearly, when the sensor dispersal on the barrier is random then coverage depends on the sensor density and some authors have proposed using several rounds of random dispersal for complete barrier coverage @cite_13 @cite_11 . 
Another approach is to have the sensors relocate from their initial position to a new position on the barrier so as to achieve complete coverage @cite_1 @cite_9 @cite_10 @cite_8 . Further, this relocation may be done in a centralized (cf. @cite_1 @cite_9 ) or distributed manner (cf. @cite_10 ). | {
"cite_N": [
"@cite_13",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_10",
"@cite_11"
],
"mid": [
"2092860475",
"2125242971",
"1530465016",
"",
"",
"1965274367"
],
"abstract": [
"We consider the k-barrier coverage problem, that is, the problem of deploying sensors on a border or perimeter to ensure that any intruder would be detected by at least k sensors. With random deployment of sensors, there is always a chance of gaps in coverage, thereby necessitating multiple rounds of deployment. In this paper, we study multi-round wireless sensor deployment on a border modeled as a line segment. We present two different classes of deployment strategies: complete and partial. In complete strategies, in every round, sensors are deployed over the entire border segment, while in partial strategies, sensors are deployed over only some part(s) of the border. First, we analyze the probability of k-coverage for any complete strategy as a function of parameters such as length of barrier to be covered, the width of the intruder, the sensing range of sensors, as well as the density of deployed sensors. Second, we propose two specific deployment strategies - Fixed-Density Complete and Fixed-Density Partial - and analyze the expected number of deployment rounds and expected total number of deployed sensors for each strategy. Next, we present a model for cost analysis of multi-round sensor deployment and calculate, for each deployment strategy, the expected total cost as a function of problem parameters and density of sensor deployment. Finally we find the optimal density of sensors in each round that minimizes the total expected cost of deployment for each deployment strategy. We validate our analysis by extensive simulation results.",
"We study the problem of achieving maximum barrier coverage by sensors on a barrier modeled by a line segment, by moving the minimum possible number of sensors, initially placed at arbitrary positions on the line containing the barrier. We consider several cases based on whether or not complete coverage is possible, and whether non-contiguous coverage is allowed in the case when complete coverage is impossible. When the sensors have unequal transmission ranges, we show that the problem of finding a minimum-sized subset of sensors to move in order to achieve maximum contiguous or non-contiguous coverage on a finite line segment barrier is NP-complete. In contrast, if the sensors all have the same range, we give efficient algorithms to achieve maximum contiguous as well as non-contiguous coverage. For some cases, we reduce the problem to finding a maximum-hop path of a certain minimum (maximum) weight on a related graph, and solve it using dynamic programming.",
"A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of a sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I) and the sensing ranges of the sensors, we are interested in finding final positions of sensors which establish a barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP complete even to approximate up to constant factor when the sensors may have different sensing ranges. When the sensors have an identical sensing range we give several efficient algorithms to calculate the final destinations so that the sensors either establish a barrier coverage or maximize the coverage of the segment if complete coverage is not feasible while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.",
"",
"",
"Deploying wireless sensor networks to provide guaranteed barrier coverage is critical for many sensor networks applications such as intrusion detection and border surveillance. To reduce the number of sensors needed to provide guaranteed barrier coverage, we propose multi-round sensor deployment which splits sensor deployment into multiple rounds and can better deal with placement errors that often accompany sensor deployment. We conduct a comprehensive analytical study on multi-round sensor deployment and identify the tradeoff between the number of sensors deployed in each round of multi-round sensor deployment and the barrier coverage performance. Both numerical and simulation studies show that, by simply splitting sensor deployment into two rounds, guaranteed barrier coverage can be achieved with significantly less sensors comparing to single-round sensor deployment. Moreover, we propose two practical solutions for multi-round sensor deployment when the distribution of a sensor's residence point is not fully known. The effectiveness of the proposed multi-round sensor deployment strategies is demonstrated by numerical and simulation results."
]
} |
1508.05786 | 2952361412 | Consider @math sensors placed randomly and independently with the uniform distribution in a @math dimensional unit cube ( @math ). The sensors have identical sensing range equal to @math , for some @math . We are interested in moving the sensors from their initial positions to new positions so as to ensure that the @math dimensional unit cube is completely covered, i.e., every point in the @math dimensional cube is within the range of a sensor. If the @math -th sensor is displaced a distance @math , what is a displacement of minimum cost? As cost measure for the displacement of the team of sensors we consider the @math -total movement defined as the sum @math , for some constant @math . We assume that @math and @math are chosen so as to allow full coverage of the @math dimensional unit cube and @math . The main contribution of the paper is to show the existence of a tradeoff between the @math dimensional cube, sensing radius and @math -total movement. The main results can be summarized as follows for the case of the @math dimensional cube. If the @math dimensional cube sensing radius is @math and @math , for some @math , then we present an algorithm that uses @math total expected movement (see Algorithm 2 and Theorem 5). If the @math dimensional cube sensing radius is greater than @math and @math is a natural number then the total expected movement is @math (see Algorithm 3 and Theorem 7). In addition, we simulate Algorithm 2 and discuss the results of our simulations. | Closely related to our work is @cite_14 , where algorithm @math was analysed. In that paper, @math sensors were placed in the unit interval uniformly and independently at random, and the cost of displacement was measured by the sum of the respective displacements of the individual sensors in the unit line segment @math . Let's call the positions @math , for @math , anchor positions. Each sensor has a sensing radius of @math .
Notice that the only way to attain complete coverage is for the sensors to occupy the anchor positions. The following result was proved in @cite_14 . | {
"cite_N": [
"@cite_14"
],
"mid": [
"1526254959"
],
"abstract": [
"Assume that n sensors with identical range r = f(n)/(2n), for some f(n) > 1 for all n, are thrown randomly and independently with the uniform distribution in the unit interval [0,1]. They are required to move to new positions so as to cover the entire unit interval in the sense that every point in the interval is within the range of a sensor. We obtain tradeoffs between the expected sum and maximum of displacements of the sensors and their range required to accomplish this task. In particular, when f(n) = 1 the expected total displacement is shown to be Θ(√n). For sensors with larger ranges we present two algorithms that prove the upper bound for the sum drops sharply as f(n) increases. The first of these holds for f(n) ≥ 6 and shows the total movement of the sensors is O(√(ln n / f(n))), while the second holds for 12 ≤ f(n) ≤ ln n - 2 ln ln n and gives an upper bound of O(ln n / (f(n) e^(f(n)/2))). Note that the second algorithm improves upon the first for f(n) ≥ ln ln n - ln ln ln n. Further, we show a corresponding lower bound for any f(n) > 1. For the case of the expected maximum displacement of a sensor when f(n) = 1 our bounds are Ω(n^(-1/2)) and, for any ε > 0, O(n^(-1/2+ε)). For larger sensor ranges (up to (1 - ε) ln n / n, ε > 0) the expected maximum displacement is shown to be Θ(ln n / n). We also obtain similar sum and maximum displacement and range tradeoffs for area coverage for sensors thrown at random in a unit square. In this case, for the expected maximum displacement our bounds are tight and for the expected sum they are within a factor of √ln n. Finally, we investigate the related problem of the expected total and maximum displacement for perimeter coverage (whereby only the perimeter of the region need be covered) of a unit square. For example, when n sensors of radius > 2/n are thrown randomly and independently with the uniform distribution in the interior of a unit square, we can show the total expected displacement required to cover the perimeter is n/12 + o(n). © 2013 ACM."
]
} |
1508.05786 | 2952361412 | Consider @math sensors placed randomly and independently with the uniform distribution in a @math dimensional unit cube ( @math ). The sensors have identical sensing range equal to @math , for some @math . We are interested in moving the sensors from their initial positions to new positions so as to ensure that the @math dimensional unit cube is completely covered, i.e., every point in the @math dimensional cube is within the range of a sensor. If the @math -th sensor is displaced a distance @math , what is a displacement of minimum cost? As cost measure for the displacement of the team of sensors we consider the @math -total movement defined as the sum @math , for some constant @math . We assume that @math and @math are chosen so as to allow full coverage of the @math dimensional unit cube and @math . The main contribution of the paper is to show the existence of a tradeoff between the @math dimensional cube, sensing radius and @math -total movement. The main results can be summarized as follows for the case of the @math dimensional cube. If the @math dimensional cube sensing radius is @math and @math , for some @math , then we present an algorithm that uses @math total expected movement (see Algorithm 2 and Theorem 5). If the @math dimensional cube sensing radius is greater than @math and @math is a natural number then the total expected movement is @math (see Algorithm 3 and Theorem 7). In addition, we simulate Algorithm 2 and discuss the results of our simulations. | In @cite_5 , this theorem was extended to the case in which the cost of displacement is measured by the sum of the respective displacements, each raised to the power @math , of the individual sensors in the unit line segment @math . The following result was proved.
"cite_N": [
"@cite_5"
],
"mid": [
"2951523777"
],
"abstract": [
"Consider @math mobile sensors placed independently at random with the uniform distribution on a barrier represented as the unit line segment @math . The sensors have identical sensing radius, say @math . When a sensor is displaced on the line a distance equal to @math it consumes energy (in movement) which is proportional to some (fixed) power @math of the distance @math traveled. The energy consumption of a system of @math sensors thus displaced is defined as the sum of the energy consumptions for the displacement of the individual sensors. We focus on the problem of energy efficient displacement of the sensors so that in their final placement the sensor system ensures coverage of the barrier and the energy consumed for the displacement of the sensors to these final positions is minimized in expectation. In particular, we analyze the problem of displacing the sensors from their initial positions so as to attain coverage of the unit interval and derive trade-offs for this displacement as a function of the sensor range. We obtain several tight bounds in this setting thus generalizing several of the results of [10] to any power @math ."
]
} |
1508.05786 | 2952361412 | Consider @math sensors placed randomly and independently with the uniform distribution in a @math dimensional unit cube ( @math ). The sensors have identical sensing range equal to @math , for some @math . We are interested in moving the sensors from their initial positions to new positions so as to ensure that the @math dimensional unit cube is completely covered, i.e., every point in the @math dimensional cube is within the range of a sensor. If the @math -th sensor is displaced a distance @math , what is a displacement of minimum cost? As cost measure for the displacement of the team of sensors we consider the @math -total movement defined as the sum @math , for some constant @math . We assume that @math and @math are chosen so as to allow full coverage of the @math dimensional unit cube and @math . The main contribution of the paper is to show the existence of a tradeoff between the @math dimensional cube, sensing radius and @math -total movement. The main results can be summarized as follows for the case of the @math dimensional cube. If the @math dimensional cube sensing radius is @math and @math , for some @math , then we present an algorithm that uses @math total expected movement (see Algorithm 2 and Theorem 5). If the @math dimensional cube sensing radius is greater than @math and @math is a natural number then the total expected movement is @math (see Algorithm 3 and Theorem 7). In addition, we simulate Algorithm 2 and discuss the results of our simulations. | An analysis similar to the one for the line segment was provided for the unit square in @cite_14 . Our present paper focuses on the analysis of sensor displacement for a group of sensors placed uniformly at random in the @math dimensional unit cube, thus also generalizing the results of @cite_6 from @math to arbitrary dimension @math .
In particular, our approach is the first to generalize the results of @cite_14 to the @math dimensional unit cube using the @math -total movement as the cost metric, and also to obtain sharper bounds for the case of the unit square. | {
"cite_N": [
"@cite_14",
"@cite_6"
],
"mid": [
"1526254959",
"1163412417"
],
"abstract": [
"Assume that n sensors with identical range r = f(n)/(2n), for some f(n) > 1 for all n, are thrown randomly and independently with the uniform distribution in the unit interval [0,1]. They are required to move to new positions so as to cover the entire unit interval in the sense that every point in the interval is within the range of a sensor. We obtain tradeoffs between the expected sum and maximum of displacements of the sensors and their range required to accomplish this task. In particular, when f(n) = 1 the expected total displacement is shown to be Θ(√n). For sensors with larger ranges we present two algorithms that prove the upper bound for the sum drops sharply as f(n) increases. The first of these holds for f(n) ≥ 6 and shows the total movement of the sensors is O(√(ln n / f(n))), while the second holds for 12 ≤ f(n) ≤ ln n - 2 ln ln n and gives an upper bound of O(ln n / (f(n) e^(f(n)/2))). Note that the second algorithm improves upon the first for f(n) ≥ ln ln n - ln ln ln n. Further, we show a corresponding lower bound for any f(n) > 1. For the case of the expected maximum displacement of a sensor when f(n) = 1 our bounds are Ω(n^(-1/2)) and, for any ε > 0, O(n^(-1/2+ε)). For larger sensor ranges (up to (1 - ε) ln n / n, ε > 0) the expected maximum displacement is shown to be Θ(ln n / n). We also obtain similar sum and maximum displacement and range tradeoffs for area coverage for sensors thrown at random in a unit square. In this case, for the expected maximum displacement our bounds are tight and for the expected sum they are within a factor of √ln n. Finally, we investigate the related problem of the expected total and maximum displacement for perimeter coverage (whereby only the perimeter of the region need be covered) of a unit square. For example, when n sensors of radius > 2/n are thrown randomly and independently with the uniform distribution in the interior of a unit square, we can show the total expected displacement required to cover the perimeter is n/12 + o(n). © 2013 ACM.",
"Consider @math sensors placed randomly and independently with the uniform distribution in a unit square. The sensors have identical sensing range equal to @math , for some @math . We are interested in moving the sensors from their initial positions to new positions so as to ensure that the unit square is completely covered, i.e., every point in the squarei¾?is within the range of a sensor. If the @math -th sensor is displaced a distance @math , what is a displacement of minimum cost? As cost measure for the displacement of the team of sensors we consider thei¾? @math -total movement defined as the sum @math , for some constant @math . We assume that @math and @math are chosen so as to allow full coverage of the square and @math . The main contribution of the paper is to show the existence of a tradeoff between the square sensing radius and @math -total movement and can be summarized as follows:1.If the square sensing radius is equal to @math and @math is the square of a natural number we present an algorithm and show that in expectation the @math -total movement is in @math .2.If the square sensing radius is greater than @math and @math is natural number then we present an algorithm and show that in expectation the @math -total movement is in @math . Therefore this sharp decrease from @math to @math in the @math -total movement of the sensors to attain complete coverage of the square indicates the presence of an interesting threshold on the square sensing radius when it increases from @math to @math . In addition, we simulate our algorithms above and discuss the results of our simulations."
]
} |
1508.05896 | 1881130965 | In this paper, joint designs of data routes and resource allocations are developed for generic half-duplex multicarrier wireless networks in which each subcarrier can be reused by multiple links. Two instances are considered. The first instance pertains to the general case in which each subcarrier can be time-shared by multiple links, whereas the second instance pertains to a special case in which time-sharing is not allowed and a subcarrier, once assigned to a set of links, is used by those links throughout the signalling interval. Novel frameworks are developed to optimize the joint design of data routes, subcarrier schedules, and power allocations. These design problems are nonconvex and hence difficult to solve. To circumvent this difficulty, efficient techniques based on geometric programming are developed to obtain locally optimal solutions. Numerical results show that the designs developed in both instances yield performance that is superior to that of their counterparts in which frequency-reuse is not allowed. | Resource allocation in wireless networks constitutes the task of determining the power allocated for each transmission and the fraction of time over which a particular subcarrier is assigned to that transmission. Instances in which resource allocation techniques were developed are provided in @cite_7 @cite_13 @cite_19 @cite_22 @cite_3 @cite_20 for various network scenarios. For instance, power allocation techniques for single-carrier cellular systems and multicarrier systems were developed in @cite_7 and @cite_20 , respectively. To enable more effective utilization of resources, power allocations were optimized jointly with binary-constrained subcarrier schedules. For instance, the designs developed in @cite_13 and @cite_19 rely on the premise that each subcarrier is exclusively used by one node and the solutions obtained therein are potentially suboptimal. 
When the binary constraint on the subcarrier schedules is relaxed, allowing the subcarriers to be time-shared by multiple nodes, the optimal power allocations can be shown to be the water-filling ones @cite_22 ; a related problem was considered in @cite_3 for a case in which the nodes experience self-noise. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_3",
"@cite_19",
"@cite_13",
"@cite_20"
],
"mid": [
"2118019594",
"2068771370",
"1964773003",
"2105280612",
"2135946392",
"2112820013"
],
"abstract": [
"In this paper, we develop a transmit power adaptation method that maximizes the total data rate of multiuser orthogonal frequency division multiplexing (OFDM) systems in a downlink transmission. We generally formulate the data rate maximization problem by allowing that a subcarrier could be shared by multiple users. The transmit power adaptation scheme is derived by solving the maximization problem via two steps: subcarrier assignment for users and power allocation for subcarriers. We have found that the data rate of a multiuser OFDM system is maximized when each subcarrier is assigned to only one user with the best channel gain for that subcarrier and the transmit power is distributed over the subcarriers by the water-filling policy. In order to reduce the computational complexity in calculating water-filling level in the proposed transmit power adaptation method, we also propose a simple method where users with the best channel gain for each subcarrier are selected and then the transmit power is equally distributed among the subcarriers. Results show that the total data rate for the proposed transmit power adaptation methods significantly increases with the number of users owing to the multiuser diversity effects and is greater than that for the conventional frequency-division multiple access (FDMA)-like transmit power adaptation schemes. Furthermore, we have found that the total data rate of the multiuser OFDM system with the proposed transmit power adaptation methods becomes even higher than the capacity of the AWGN channel when the number of users is large enough.",
"This paper considers the optimum single cell power control maximizing the aggregate (uplink) communication rate of the cell when there are peak power constraints at mobile users, and a low-complexity data decoder (without successive decoding) at the base station. It is shown that the optimum power allocation is binary, which means that links are either “on” or “off.” By exploiting further structure of the optimum binary power allocation, a simple polynomial-time algorithm for finding the optimum transmission power allocation is proposed, together with a reduced complexity near-optimal heuristic algorithm. Sufficient conditions under which channel-state aware time division multiple access (TDMA) maximizes the aggregate communication rate are established. In a numerical study, we compare and contrast the performance achieved by the optimum binary power-control policy with other suboptimum policies and the throughput capacity achievable via successive decoding. It is observed that two dominant modes of communication arise, wideband or TDMA, and that successive decoding achieves better sum-rates only under near perfect interference cancellation efficiency. In this paper, we exploit the theory of majorization to obtain the aforementioned results. In the final part of this paper, we do so to solve power-control problems in the areas of femtocells and cognitive radio and find that, again, optimal solutions have a binary (or almost binary) character.",
"We consider scheduling and resource allocation for the downlink of a cellular OFDM system, with various practical considerations including integer tone allocations, different sub-channelization schemes, maximum SNR constraint per tone, and \"self-noise\" due to channel estimation errors and phase noise. During each time-slot a subset of users must be scheduled, and the available tones and transmission power must be allocated among them. Employing a gradient-based scheduling scheme presented in earlier papers reduces this to an optimization problem to be solved in each time-slot. Using a dual formulation, we give an optimal algorithm for this problem when multiple users can time-share each tone. We then give several low complexity heuristics that enforce integer tone allocations. Simulations are used to compare the performance of different algorithms.",
"We consider the joint subcarrier and power allocation problem with the objective of maximizing the total utility of users in the uplink of an OFDMA system. Our formulation includes the problems of sum rate maximization, proportional fairness and max-min fairness as special cases. Unlike some previous algorithms, which are iterative and time consuming, our proposed one is non-iterative and with time complexity of only O(KN log2 N), where K and N are the number of users and subcarriers respectively. We prove that it provides a solution that is Pareto optimal within a large neighborhood of itself. Besides, we derive an efficiently computable upper bound of the optimal solution. Simulation results show that our algorithm is nearly optimal.",
"In this letter, we focus on joint subcarrier and power allocation in the uplink of an OFDMA system. Our goal is to maximize the rate-sum capacity in the uplink. For the purpose, we formulate an optimization problem subject to subcarrier and power constraints and draw necessary conditions for optimality, from which we derive joint subcarrier and power allocation algorithms. Simulation results show that our proposed scheme enhances the system capacity, providing almost near optimal solutions with low computational burden.",
"In wireless cellular or ad hoc networks where Quality of Service (QoS) is interference-limited, a variety of power control problems can be formulated as nonlinear optimization with a system-wide objective, e.g., maximizing the total system throughput or the worst user throughput, subject to QoS constraints from individual users, e.g., on data rate, delay, and outage probability. We show that in the high Signal-to- interference Ratios (SIR) regime, these nonlinear and apparently difficult, nonconvex optimization problems can be transformed into convex optimization problems in the form of geometric programming; hence they can be very efficiently solved for global optimality even with a large number of users. In the medium to low SIR regime, some of these constrained nonlinear optimization of power control cannot be turned into tractable convex formulations, but a heuristic can be used to compute in most cases the optimal solution by solving a series of geometric programs through the approach of successive convex approximation. While efficient and robust algorithms have been extensively studied for centralized solutions of geometric programs, distributed algorithms have not been explored before. We present a systematic method of distributed algorithms for power control that is geometric-programming-based. These techniques for power control, together with their implications to admission control and pricing in wireless networks, are illustrated through several numerical examples."
]
} |
1508.05896 | 1881130965 | In this paper, joint designs of data routes and resource allocations are developed for generic half-duplex multicarrier wireless networks in which each subcarrier can be reused by multiple links. Two instances are considered. The first instance pertains to the general case in which each subcarrier can be time-shared by multiple links, whereas the second instance pertains to a special case in which time-sharing is not allowed and a subcarrier, once assigned to a set of links, is used by those links throughout the signalling interval. Novel frameworks are developed to optimize the joint design of data routes, subcarrier schedules, and power allocations. These design problems are nonconvex and hence difficult to solve. To circumvent this difficulty, efficient techniques based on geometric programming are developed to obtain locally optimal solutions. Numerical results show that the designs developed in both instances yield performance that is superior to that of their counterparts in which frequency-reuse is not allowed. | Further improvement can be achieved by joint optimization of resource allocations and routing @cite_16 @cite_10 @cite_17 @cite_2 @cite_12 . For instance, a method for obtaining jointly optimal routes and power allocations was developed in @cite_10 for the case in which the nodes were restricted to use orthogonal channels for their transmissions. In a complementary fashion, the case in which the power allocations are fixed was considered in @cite_17 . Therein, a heuristic was developed for optimizing the data routes and subcarrier schedules jointly. | {
"cite_N": [
"@cite_2",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2098779032",
"2035307429",
"2165275149",
"2109068433",
"2113952765"
],
"abstract": [
"We consider the problem of finding the jointly optimal end-to-end communication rates, routing, power allocation and transmission scheduling for wireless networks. In particular, we focus on finding the resource allocation that achieves fair end-to-end communication rates. Using realistic models of several rate and power adaption schemes, we show how this cross-layer optimization problem can be formulated as a nonlinear mathematical program. We develop a specialized solution method, based on a nonlinear column generation technique, and prove that it converges to the globally optimal solution. We present computational results from a large set of networks and discuss the insight that can be gained about the influence of power control, spatial reuse, routing strategies and variable transmission rates on network performance.",
"The goal of this paper is to determine the data routes, subchannel schedules, and power allocations that maximize a weighted-sum rate of the data communicated over a generic OFDMA wireless network in which the nodes are capable of simultaneously transmitting, receiving and relaying data. Two instances are considered. In the first instance, subchannels are allowed to be time-shared by multiple links, whereas in the second instance, each subchannel is exclusively used by one of the links. Using a change of variables, the first problem is transformed into a convex form. In contrast, the second problem is not amenable to such a transformation and results in a complex mixed integer optimization problem. To develop insight into this problem, we utilize the first instance to obtain efficiently computable lower and upper bounds on the weighted-sum rate that can be achieved in the absence of time-sharing. Another lower bound is obtained by enforcing the scheduling constraints through additional power constraints and a monomial approximation technique to formulate the design problem as a geometric program. Numerical investigations show that the obtained rates are higher when time-sharing is allowed, and that the lower bounds on rates in the absence of time-sharing are relatively tight.",
"In wireless data networks, the optimal routing of data depends on the link capacities which, in turn, are determined by the allocation of communications resources (such as transmit powers and bandwidths) to the links. The optimal performance of the network can only be achieved by simultaneous optimization of routing and resource allocation. In this paper, we formulate the simultaneous routing and resource allocation (SRRA) problem, and exploit problem structure to derive efficient solution methods. We use a capacitated multicommodity flow model to describe the data flows in the network. We assume that the capacity of a wireless link is a concave and increasing function of the communications resources allocated to the link, and the communications resources for groups of links are limited. These assumptions allow us to formulate the SRRA problem as a convex optimization problem over the network flow variables and the communications variables. These two sets of variables are coupled only through the link capacity constraints. We exploit this separable structure by dual decomposition. The resulting solution method attains the optimal coordination of data routing in the network layer and resource allocation in the radio control layer via pricing on the link capacities.",
"A wireless data network with K static nodes is considered. The nodes communicate simultaneously over the same narrowband channel and each node uses superposition coding to broadcast independent messages to individual nodes in the network. The goal herein is to find optimal data routes and power allocations to maximize a weighted sum of the data rates injected and reliably communicated over the network. Two instances of this problem are considered. In the first instance, each node uses a fixed power budget, whereas in the second instance the power used by each node is adjustable. For the latter case, two variants are considered: in the first there is a constraint on the power used by each node and in the second there is constraint on the total power used by all nodes. It will be shown that while the instance in which the power of each node is fixed can be cast in the form of an efficiently solvable geometric program (GP), the second instance in which the node powers are adjustable cannot be readily cast in this form. To circumvent this difficulty, an iterative technique is proposed for approximating the constraints of the original optimization problem by GP-compatible constraints. Numerical simulations suggest that this technique converges to a locally optimal solution within a few iterations.",
"In this paper, we propose a cross layer optimization framework for multi-hop routing and resource allocation design in an orthogonal frequency division multiple access (OFDMA) based wireless mesh network. The network under consideration is assumed to consist of fixed mesh routers (or base station routers) inter-connected using OFDMA wireless links with some of the mesh routers functioning as gateways to a wired network. The objective of our cross-layer formulation is to allow joint determination of power control, frequency-selective OFDMA scheduling and multi-hop routing in order to maximize the minimum throughput that can be supported to all mesh routers. Results of our investigations under typical cellular deployment, propagation and channel model assumptions show that this approach achieves significant mesh throughput improvements primarily due to the following: (a) frequency selective scheduling with OFDMA which provides improved tone diversity thus allowing more efficient bandwidth utilization relative to single carrier methods; and (b) multi-hop routing which provides improved path diversity relative to single hop transmissions."
]
} |
1508.05896 | 1881130965 | In this paper, joint designs of data routes and resource allocations are developed for generic half-duplex multicarrier wireless networks in which each subcarrier can be reused by multiple links. Two instances are considered. The first instance pertains to the general case in which each subcarrier can be time-shared by multiple links, whereas the second instance pertains to a special case in which time-sharing is not allowed and a subcarrier, once assigned to a set of links, is used by those links throughout the signalling interval. Novel frameworks are developed to optimize the joint design of data routes, subcarrier schedules, and power allocations. These design problems are nonconvex and hence difficult to solve. To circumvent this difficulty, efficient techniques based on geometric programming are developed to obtain locally optimal solutions. Numerical results show that the designs developed in both instances yield performance that is superior to that of their counterparts in which frequency-reuse is not allowed. | Capitalizing on the potential gains of incorporating power allocation jointly with data routing and subcarrier scheduling, the authors considered a generic network in which the nodes can assume multiple roles at the same time and each subcarrier could be either used exclusively by one link or time-shared by multiple links @cite_16 . Although the designs provided in @cite_16 offer an effective means for exploiting the resources available for the network, these designs restrict the subcarriers to be used exclusively by only one link at any given time instant. Such a restriction may not incur a significant performance loss in tightly coupled networks @cite_1 , but in networks with clustered structures, this restriction can be quite harmful. For unclustered networks, frequency-reuse may result in a substantial increase in the interference levels. 
However, if properly exploited, frequency-reuse can yield valuable performance gains. The effect of frequency-reuse was considered in single-channel networks in @cite_2 for the case in which the data rates are restricted to assume discrete values, and in @cite_12 for the case in which the nodes use superposition coding. | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_12",
"@cite_2"
],
"mid": [
"2035307429",
"2144145247",
"2109068433",
"2098779032"
],
"abstract": [
"The goal of this paper is to determine the data routes, subchannel schedules, and power allocations that maximize a weighted-sum rate of the data communicated over a generic OFDMA wireless network in which the nodes are capable of simultaneously transmitting, receiving and relaying data. Two instances are considered. In the first instance, subchannels are allowed to be time-shared by multiple links, whereas in the second instance, each subchannel is exclusively used by one of the links. Using a change of variables, the first problem is transformed into a convex form. In contrast, the second problem is not amenable to such a transformation and results in a complex mixed integer optimization problem. To develop insight into this problem, we utilize the first instance to obtain efficiently computable lower and upper bounds on the weighted-sum rate that can be achieved in the absence of time-sharing. Another lower bound is obtained by enforcing the scheduling constraints through additional power constraints and a monomial approximation technique to formulate the design problem as a geometric program. Numerical investigations show that the obtained rates are higher when time-sharing is allowed, and that the lower bounds on rates in the absence of time-sharing are relatively tight.",
"Consider a multiuser communication system in a frequency selective environment whereby users share a common spectrum and can interfere with each other. Assuming Gaussian signaling and no interference cancelation, we study optimal spectrum sharing strategies for the maximization of sum-rate under separate power constraints for individual users. Since the sum-rate function is nonconcave in terms of the users' power allocations, there can be multiple local maxima for the sum-rate maximization problem in general. In this paper, we show that, if the normalized crosstalk coefficients are larger than a given threshold (roughly equal to 1 2 ), then the optimal spectrum sharing strategy is frequency division multiple access (FDMA). In case of arbitrary positive crosstalk coefficients, if each user's power budget exceeds a given threshold, then FDMA is again sum-rate optimal, at least in a local sense. In addition, we show that the problem of finding the optimal FDMA spectrum allocation is NP-hard, implying that the general problem of maximizing sum-rate is also NP-hard, even in the case of two users. We also propose several simple distributed spectrum allocation algorithms that can approximately maximize sum-rates. Numerical results indicate that these algorithms are efficient and can achieve substantially larger sum-rates than the existing Iterative Waterfilling solutions, either in an interference-rich environment or when the users' power budgets are sufficiently high.",
"A wireless data network with K static nodes is considered. The nodes communicate simultaneously over the same narrowband channel and each node uses superposition coding to broadcast independent messages to individual nodes in the network. The goal herein is to find optimal data routes and power allocations to maximize a weighted sum of the data rates injected and reliably communicated over the network. Two instances of this problem are considered. In the first instance, each node uses a fixed power budget, whereas in the second instance the power used by each node is adjustable. For the latter case, two variants are considered: in the first there is a constraint on the power used by each node and in the second there is constraint on the total power used by all nodes. It will be shown that while the instance in which the power of each node is fixed can be cast in the form of an efficiently solvable geometric program (GP), the second instance in which the node powers are adjustable cannot be readily cast in this form. To circumvent this difficulty, an iterative technique is proposed for approximating the constraints of the original optimization problem by GP-compatible constraints. Numerical simulations suggest that this technique converges to a locally optimal solution within a few iterations.",
"We consider the problem of finding the jointly optimal end-to-end communication rates, routing, power allocation and transmission scheduling for wireless networks. In particular, we focus on finding the resource allocation that achieves fair end-to-end communication rates. Using realistic models of several rate and power adaption schemes, we show how this cross-layer optimization problem can be formulated as a nonlinear mathematical program. We develop a specialized solution method, based on a nonlinear column generation technique, and prove that it converges to the globally optimal solution. We present computational results from a large set of networks and discuss the insight that can be gained about the influence of power control, spatial reuse, routing strategies and variable transmission rates on network performance."
]
} |
1508.05789 | 1843654501 | Continuous-domain visual signals are usually captured as discrete (digital) images. This operation is not invertible in general, in the sense that the continuous-domain signal cannot be exactly reconstructed based on the discrete image, unless it satisfies certain constraints (e.g., bandlimitedness). In this paper, we study the problem of recovering shape images with smooth boundaries from a set of samples. Thus, the reconstructed image is constrained to regenerate the same samples (consistency), as well as forming a shape (bilevel) image. We initially formulate the reconstruction technique by minimizing the shape perimeter over the set of consistent binary shapes. Next, we relax the non-convex shape constraint to transform the problem into minimizing the total variation over consistent non-negative-valued images. We also introduce a requirement (called reducibility) that guarantees equivalence between the two problems. We illustrate that the reducibility property effectively sets a requirement on the minimum sampling density. We also evaluate the performance of the relaxed alternative in various numerical experiments. | Due to the diverse shape geometries and sharp intensity transitions on the boundaries, this class of visual signals, like many other real-world signals, is neither bandlimited nor contained in a shift-invariant subspace. Hence, the classical sampling results do not apply here. A similar scenario happens for the class of 1D signals studied in @cite_24 @cite_15 , known as signals with finite rate of innovation (FRI). It is shown that the discrete samples can lead to perfect signal recovery, although the signals are not necessarily bandlimited. A generalization to 2D FRI signals is presented in @cite_19 @cite_17 @cite_21 , with the goal of recovering convex polygonal shapes from the gray-scale pixels. 
A different approach is devised in @cite_2 by considering the boundary curves in a shape image as the zero-level-sets of specific 2D FRI signals. Due to the FRI requirements, exact recovery relies on the PSF satisfying the so-called Strang-Fix condition. Furthermore, the FRI model admits limited shape geometries. | {
"cite_N": [
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_17"
],
"mid": [
"2125782250",
"2158537680",
"2099267803",
"1971056102",
"",
"2105483458"
],
"abstract": [
"The finite rate of innovation (FRI) principle is developed for sampling a class of non-bandlimited signals that have a finite number of degrees of freedom per unit of time, i.e., signals with FRI. This sampling scheme is later extended to three classes of sampling kernels with compact support and applied to the step edge reconstruction problem by treating the image row by row. In this paper, we regard step edges as 2D FRI signals and reconstruct them block by block. The step edge parameters are obtained from the 2D moments of a given image block. Experimentally, our technique can reconstruct the edge more precisely and track the Cramer-Rao bounds (CRBs) closely with a signal-to-noise ratio (SNR) larger than 4 dB on synthetic step edge images. Experiments on real images show that our proposed method can reconstruct the step edges under practical conditions, i.e., in the presence of various types of noise and using a real sampling kernel. The results on locating the corners of data matrix barcodes using our method also outperform some state-of-the-art barcode decoders.",
"The authors consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic \"bandlimited and sinc kernel\" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels. The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results. Applications of these new sampling results can be found in signal processing, communications systems, and biological systems.",
"We present sampling results for certain classes of two-dimensional (2-D) signals that are not bandlimited but have a parametric representation with a finite number of degrees of freedom. While there are many such parametric signals, it is often difficult to propose practical sampling schemes; therefore, we will concentrate on those classes for which we are able to give exact sampling algorithms and reconstruction formulas. We analyze in detail a set of 2-D Diracs and extend the results to more complex objects such as lines and polygons. Unlike most multidimensional sampling schemes, the methods we propose perfectly reconstruct such signals from a finite number of samples in the noiseless case. Some of the techniques we use are already encountered in the context of harmonic retrieval and error correction coding. In particular, singular value decomposition (SVD)-based methods and the annihilating filter approach are both explored as inherent parts of the developed algorithms. Potentials and limitations of the algorithms in the noisy case are also pointed out. Applications of our results can be found in astronomical signal processing, image processing, and in some classes of identification problems.",
"In this paper, we extend the theory of sampling signals with finite rate of innovation (FRI) to a specific class of two-dimensional curves, which are defined implicitly as the zeros of a mask function. Here the mask function has a parametric representation as a weighted summation of a finite number of complex exponentials, and therefore, has finite rate of innovation . An associated edge image, which is discontinuous on the predefined parametric curve, is proved to satisfy a set of linear annihilation equations. We show that it is possible to reconstruct the parameters of the curve (i.e., to detect the exact edge positions in the continuous domain) based on the annihilation equations. Robust reconstruction algorithms are also developed to cope with scenarios with model mismatch. Moreover, the annihilation equations that characterize the curve are linear constraints that can be easily exploited in optimization problems for further image processing (e.g., image up-sampling). We demonstrate one potential application of the annihilation algorithm with examples in edge-preserving interpolation. Experimental results with both synthetic curves as well as edges of natural images clearly show the effectiveness of the annihilation constraint in preserving sharp edges, and improving SNRs.",
"",
"In this paper, we consider the problem of sampling signals that are nonband-limited but have finite number of degrees of freedom per unit of time and call this number the rate of innovation. Streams of Diracs and piecewise polynomials are the examples of such signals, and thus are known as signals with finite rate of innovation (FRI). We know that the classical (\"band-limited sine\") sampling theory does not enable perfect reconstruction of such signals from their samples since they are not band-limited. However, the recent results on FRI sampling suggest that it is possible to sample and perfectly reconstruct such nonband-limited signals using a rich class of kernels. In this paper, we extend those results in higher dimensions using compactly supported kernels that reproduce polynomials (satisfy Strang-Fix conditions). In fact, the polynomial reproduction property of the kernel makes it possible to obtain the continuous moments of the signal from its samples. Using these moments and the annihilating filter method (Prony's method), the innovative part of the signal, and therefore, the signal itself is perfectly reconstructed. In particular, we present local (directional-derivatives-based) and global (complex-moments-based, Radon-transform-based) sampling schemes for classes of FRI signals such as sets of Diracs, bilevel, and planar polygons, quadrature domains (e.g., circles, ellipses, and cardioids), 2D polynomials with polygonal boundaries, and n-dimensional Diracs and convex polytopes. This work has been explored in a promising way in super-resolution algorithms and distributed compression, and might find its applications in photogrammetry, computer graphics, and machine vision."
]
} |
1508.05789 | 1843654501 | Continuous-domain visual signals are usually captured as discrete (digital) images. This operation is not invertible in general, in the sense that the continuous-domain signal cannot be exactly reconstructed based on the discrete image, unless it satisfies certain constraints (e.g., bandlimitedness). In this paper, we study the problem of recovering shape images with smooth boundaries from a set of samples. Thus, the reconstructed image is constrained to regenerate the same samples (consistency), as well as forming a shape (bilevel) image. We initially formulate the reconstruction technique by minimizing the shape perimeter over the set of consistent binary shapes. Next, we relax the non-convex shape constraint to transform the problem into minimizing the total variation over consistent non-negative-valued images. We also introduce a requirement (called reducibility) that guarantees equivalence between the two problems. We illustrate that the reducibility property effectively sets a requirement on the minimum sampling density. We also evaluate the performance of the relaxed alternative in various numerical experiments. | The shape image recovery can also be viewed as fitting boundary curves to the interpolated gray-scale image (high-resolution version of the measurements). Such methods are widely known as segmentation techniques that fit deformable curves to gray-scale images, and include active contour algorithms also known as snakes. Based on the curve models, they are classified as point snakes @cite_1 , geodesic snakes @cite_22 @cite_26 and parametric snakes @cite_13 @cite_18 . In all cases, the segmentation algorithm is formulated by minimizing a snake energy functional that depends on the gray-scale image and the model of the boundary curves. However, it does not take the PSF into account @cite_6 . As a consequence, the resulting binary image is likely to fail the consistency requirements. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_1",
"@cite_6",
"@cite_13"
],
"mid": [
"2158642611",
"2134820502",
"2149184914",
"2104095591",
"1982662510",
"2157878716"
],
"abstract": [
"We present a novel formulation for B-spline snakes that can be used as a tool for fast and intuitive contour outlining. We start with a theoretical argument in favor of splines in the traditional formulation by showing that the optimal, curvature-constrained snake is a cubic spline, irrespective of the form of the external energy field. Unfortunately, such regularized snakes suffer from slow convergence speed because of a large number of control points, as well as from difficulties in determining the weight factors associated to the internal energies of the curve. We therefore propose an alternative formulation in which the intrinsic scale of the spline model is adjusted a priori; this leads to a reduction of the number of parameters to be optimized and eliminates the need for internal energies (i.e., the regularization term). In other words, we are now controlling the elasticity of the spline implicitly and rather intuitively by varying the spacing between the spline knots. The theory is embedded into a multiresolution formulation demonstrating improved stability in noisy image environments. Validation results are presented, comparing the traditional snake using internal energies and the proposed approach without internal energies, showing the similar performance of the latter. Several biomedical examples of applications are included to illustrate the versatility of the method.",
"A novel scheme for the detection of object boundaries is presented. The technique is based on active contours deforming according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics or minimal distance curves. The minimal distance curve lays in a Riemannian space whose metric as defined by the image content. This geodesic approach for object segmentation allows to connect classical \"snakes\" based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved as showed by a number of examples. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. >",
"Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This paper presents a new approach to shape modeling which retains some of the attractive features of existing methods and overcomes some of their limitations. The authors' techniques can be applied to model arbitrarily complex shapes, which include shapes with significant protrusions, and to situations where no a priori assumption about the object's topology is made. A single instance of the authors' model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. This method is based on the ideas developed by Osher and Sethian (1988) to model propagating solid liquid interfaces with curvature-dependent speeds. The interface (front) is a closed, nonintersecting, hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. It is moved by solving a \"Hamilton-Jacobi\" type equation written for a function in which the interface is a particular level set. A speed term synthesized from the image is used to stop the interface in the vicinity of object boundaries. The resulting equation of motion is solved by employing entropy-satisfying upwind finite difference schemes. The authors present a variety of ways of computing the evolving front, including narrow bands, reinitializations, and different stopping criteria. The efficacy of the scheme is demonstrated with numerical experiments on some synthesized images and some low contrast medical images. >",
"A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.",
"",
"This paper describes a new approach to adaptive estimation of parametric deformable contours based on B-spline representations. The problem is formulated in a statistical framework with the likelihood function being derived from a region-based image model. The parameters of the image model, the contour parameters, and the B-spline parameterization order (i.e., the number of control points) are all considered unknown. The parameterization order is estimated via a minimum description length (MDL) type criterion. A deterministic iterative algorithm is developed to implement the derived contour estimation criterion, the result is an unsupervised parametric deformable contour: it adapts its degree of smoothness complexity (number of control points) and it also estimates the observation (image) model parameters. The experiments reported in the paper, performed on synthetic and real (medical) images, confirm the adequate and good performance of the approach."
]
} |
1508.05565 | 2268066489 | We develop necessary and sufficient conditions and a novel provably consistent and efficient algorithm for discovering topics (latent factors) from observations (documents) that are realized from a probabilistic mixture of shared latent factors that have certain properties. Our focus is on the class of topic models in which each shared latent factor contains a novel word that is unique to that factor, a property that has come to be known as separability. Our algorithm is based on the key insight that the novel words correspond to the extreme points of the convex hull formed by the row-vectors of a suitably normalized word co-occurrence matrix. We leverage this geometric insight to establish polynomial computation and sample complexity bounds based on a few isotropic random projections of the rows of the normalized word co-occurrence matrix. Our proposed random-projections-based algorithm is naturally amenable to an efficient distributed implementation and is attractive for modern web-scale distributed data mining applications. | The idea of modeling text documents as mixtures of a few semantic topics was first proposed in @cite_11 where the mixing weights were assumed to be deterministic. Latent Dirichlet Allocation (LDA) in the seminal work of @cite_12 extended this to a probabilistic setting by modeling topic mixing weights using Dirichlet priors. This setting has been further extended to include other topic priors such as the log-normal prior in the Correlated Topic Model @cite_8 . LDA models and their derivatives have been successful on a wide range of problems in terms of achieving good empirical performance @cite_17 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_11",
"@cite_8",
"@cite_12",
"@cite_17"
],
"mid": [
"1161645253",
"2107743791",
"2128925311",
"",
"2138107145"
],
"abstract": [
"In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology. Through examples using real data sets, youll discover how to characterize complex multivariate data in: Studies involving genetic databases Patterns in the progression of diseases and disabilities Combinations of topics covered by text documents Political ideology or electorate voting patterns Heterogeneous relationships in networks, and much more The handbook spans more than 20 years of the editors and contributors statistical work in the field. Top researchers compare partial and mixed membership models, explain how to interpret mixed membership, delve into factor analysis, and describe nonparametric mixed membership models. They also present extensions of the mixed membership model for text analysis, sequence and rank data, and network data as well as semi-supervised mixed membership models.",
"Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain specific synonymy as well as with polysemous words. In contrast to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over LSI. In particular, the combination of models with different dimensionalities has proven to be advantageous.",
"Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than X-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [J. Roy. Statist. Soc. Ser. B 44 (1982) 139--177]. We derive a fast variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. We apply the CTM to the articles from Science published from 1990--1999, a data set that comprises 57M words. The CTM gives a better fit of the data than LDA, and we demonstrate its use as an exploratory tool of large document collections.",
"",
"Probabilistic topic modeling provides a suite of tools for the unsupervised analysis of large collections of documents. Topic modeling algorithms can uncover the underlying themes of a collection and decompose its documents according to those themes. This analysis can be used for corpus exploration, document search, and a variety of prediction problems. In this tutorial, I will review the state-of-the-art in probabilistic topic models. I will describe the three components of topic modeling: (1) Topic modeling assumptions (2) Algorithms for computing with topic models (3) Applications of topic models In (1), I will describe latent Dirichlet allocation (LDA), which is one of the simplest topic models, and then describe a variety of ways that we can build on it. These include dynamic topic models, correlated topic models, supervised topic models, author-topic models, bursty topic models, Bayesian nonparametric topic models, and others. I will also discuss some of the fundamental statistical ideas that are used in building topic models, such as distributions on the simplex, hierarchical Bayesian modeling, and models of mixed-membership. In (2), I will review how we compute with topic models. I will describe approximate posterior inference for directed graphical models using both sampling and variational inference, and I will discuss the practical issues and pitfalls in developing these algorithms for topic models. Finally, I will describe some of our most recent work on building algorithms that can scale to millions of documents and documents arriving in a stream. In (3), I will discuss applications of topic models. These include applications to images, music, social networks, and other data in which we hope to uncover hidden patterns. I will describe some of our recent work on adapting topic modeling algorithms to collaborative filtering, legislative modeling, and bibliometrics without citations. 
Finally, I will discuss some future directions and open research problems in topic models."
]
} |
1508.05565 | 2268066489 | We develop necessary and sufficient conditions and a novel provably consistent and efficient algorithm for discovering topics (latent factors) from observations (documents) that are realized from a probabilistic mixture of shared latent factors that have certain properties. Our focus is on the class of topic models in which each shared latent factor contains a novel word that is unique to that factor, a property that has come to be known as separability. Our algorithm is based on the key insight that the novel words correspond to the extreme points of the convex hull formed by the row-vectors of a suitably normalized word co-occurrence matrix. We leverage this geometric insight to establish polynomial computation and sample complexity bounds based on a few isotropic random projections of the rows of the normalized word co-occurrence matrix. Our proposed random-projections-based algorithm is naturally amenable to an efficient distributed implementation and is attractive for modern web-scale distributed data mining applications. | The prevailing approaches for estimation and inference problems in topic modeling are based on MAP or ML estimation @cite_17 . However, the computation of posterior distributions conditioned on observations @math is intractable @cite_12 . Moreover, the MAP estimation objective is non-convex and has been shown to be @math -hard @cite_47 @cite_21 . Therefore various approximation and heuristic strategies have been employed. These approaches fall into two major categories -- sampling approaches and optimization approaches. Most sampling approaches are based on Markov Chain Monte Carlo (MCMC) algorithms that seek to generate (approximately) independent samples from a Markov Chain that is carefully designed to ensure that the sample distribution converges to the true posterior @cite_27 @cite_43 . Optimization approaches are typically based on the so-called Variational-Bayes methods. 
These methods optimize the parameters of a simpler parametric distribution so that it is close to the true posterior in terms of KL divergence @cite_12 @cite_2 . Expectation-Maximization-type algorithms are typically used in these methods. In practice, while both Variational-Bayes and MCMC algorithms have similar performance, Variational-Bayes is typically faster than MCMC @cite_15 @cite_17 . | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_43",
"@cite_27",
"@cite_2",
"@cite_47",
"@cite_12",
"@cite_17"
],
"mid": [
"1880262756",
"",
"2144100511",
"",
"2120340025",
"2110558001",
"",
"2138107145"
],
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.",
"",
"A natural evaluation metric for statistical topic models is the probability of held-out documents given a trained model. While exact computation of this probability is intractable, several estimators for this probability have been used in the topic modeling literature, including the harmonic mean method and empirical likelihood method. In this paper, we demonstrate experimentally that commonly-used methods are unlikely to accurately estimate the probability of held-out documents, and propose two alternative methods that are both accurate and efficient.",
"",
"The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.",
"We consider the computational complexity of probabilistic inference in Latent Dirichlet Allocation (LDA). First, we study the problem of finding the maximum a posteriori (MAP) assignment of topics to words, where the document's topic distribution is integrated out. We show that, when the effective number of topics per document is small, exact inference takes polynomial time. In contrast, we show that, when a document has a large number of topics, finding the MAP assignment of topics to words in LDA is NP-hard. Next, we consider the problem of finding the MAP topic distribution for a document, where the topic-word assignments are integrated out. We show that this problem is also NP-hard. Finally, we briefly discuss the problem of sampling from the posterior, showing that this is NP-hard in one restricted setting, but leaving open the general question.",
"",
"Probabilistic topic modeling provides a suite of tools for the unsupervised analysis of large collections of documents. Topic modeling algorithms can uncover the underlying themes of a collection and decompose its documents according to those themes. This analysis can be used for corpus exploration, document search, and a variety of prediction problems. In this tutorial, I will review the state-of-the-art in probabilistic topic models. I will describe the three components of topic modeling: (1) Topic modeling assumptions (2) Algorithms for computing with topic models (3) Applications of topic models In (1), I will describe latent Dirichlet allocation (LDA), which is one of the simplest topic models, and then describe a variety of ways that we can build on it. These include dynamic topic models, correlated topic models, supervised topic models, author-topic models, bursty topic models, Bayesian nonparametric topic models, and others. I will also discuss some of the fundamental statistical ideas that are used in building topic models, such as distributions on the simplex, hierarchical Bayesian modeling, and models of mixed-membership. In (2), I will review how we compute with topic models. I will describe approximate posterior inference for directed graphical models using both sampling and variational inference, and I will discuss the practical issues and pitfalls in developing these algorithms for topic models. Finally, I will describe some of our most recent work on building algorithms that can scale to millions of documents and documents arriving in a stream. In (3), I will discuss applications of topic models. These include applications to images, music, social networks, and other data in which we hope to uncover hidden patterns. I will describe some of our recent work on adapting topic modeling algorithms to collaborative filtering, legislative modeling, and bibliometrics without citations. 
Finally, I will discuss some future directions and open research problems in topic models."
]
} |
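The related-work text in the row above contrasts MCMC sampling with Variational-Bayes methods, which fit a simpler parametric distribution by minimizing its KL divergence to the intractable true posterior. As a hedged illustration of the objective involved (a generic sketch, not code from this dataset or from any of the cited works):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions with full support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Variational-Bayes searches a simple parametric family for the member q
# closest to the intractable posterior p under this (asymmetric) divergence.
p = [0.7, 0.2, 0.1]
assert kl_divergence(p, p) == 0.0                 # zero iff q matches p
assert kl_divergence(p, [1/3, 1/3, 1/3]) > 0.0    # positive otherwise
```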
1508.05565 | 2268066489 | We develop necessary and sufficient conditions and a novel provably consistent and efficient algorithm for discovering topics (latent factors) from observations (documents) that are realized from a probabilistic mixture of shared latent factors that have certain properties. Our focus is on the class of topic models in which each shared latent factor contains a novel word that is unique to that factor, a property that has come to be known as separability. Our algorithm is based on the key insight that the novel words correspond to the extreme points of the convex hull formed by the row-vectors of a suitably normalized word co-occurrence matrix. We leverage this geometric insight to establish polynomial computation and sample complexity bounds based on a few isotropic random projections of the rows of the normalized word co-occurrence matrix. Our proposed random-projections-based algorithm is naturally amenable to an efficient distributed implementation and is attractive for modern web-scale distributed data mining applications. | Nonnegative Matrix Factorization (NMF) is an alternative approach for topic estimation. NMF-based methods exploit the fact that both the topic matrix @math and the mixing weights are nonnegative and attempt to decompose the empirical observation matrix @math into a product of a nonnegative topic matrix @math and the matrix of mixing weights by minimizing a cost function of the form @cite_36 @cite_9 @cite_30 @cite_15 where @math is some measure of closeness and @math is a regularization term which enforces desirable properties, e.g., sparsity, on @math and the mixing weights. The NMF problem, however, is also known to be non-convex and @math -hard @cite_6 in general. Sub-optimal strategies such as alternating minimization, greedy gradient descent, and heuristics are used in practice @cite_9 . | {
"cite_N": [
"@cite_30",
"@cite_36",
"@cite_9",
"@cite_6",
"@cite_15"
],
"mid": [
"2951734015",
"1902027874",
"1246381107",
"2124172487",
"1880262756"
],
"abstract": [
"This paper describes a new approach, based on linear programming, for computing nonnegative matrix factorizations (NMFs). The key idea is a data-driven model for the factorization where the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C such that X approximately equals CX and some linear constraints. The constraints are chosen to ensure that the matrix C selects features; these features can then be used to find a low-rank NMF of X. A theoretical analysis demonstrates that this approach has guarantees similar to those of the recent NMF algorithm of (2012). In contrast with this earlier work, the proposed method extends to more general noise models and leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation can factor a multigigabyte matrix in a matter of minutes.",
"Is perception of the whole based on perception of its parts? There is psychological1 and physiological2,3 evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations4,5. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign.",
"This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF). This includes NMFs various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD). NMF NTF and their extensions are increasingly used as tools in signal and image processing, and data analysis, having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets. It is suggested that NMF can provide meaningful components with physical interpretations; for example, in bioinformatics, NMF and its extensions have been successfully applied to gene expression, sequence analysis, the functional characterization of genes, clustering and text mining. As such, the authors focus on the algorithms that are most useful in practice, looking at the fastest, most robust, and suitable for large-scale models. Key features: Acts as a single source reference guide to NMF, collating information that is widely dispersed in current literature, including the authors own recently developed techniques in the subject area. Uses generalized cost functions such as Bregman, Alpha and Beta divergences, to present practical implementations of several types of robust algorithms, in particular Multiplicative, Alternating Least Squares, Projected Gradient and Quasi Newton algorithms. Provides a comparative analysis of the different methods in order to identify approximation error and complexity. Includes pseudo codes and optimized MATLAB source codes for almost all algorithms presented in the book. 
The increasing interest in nonnegative matrix and tensor factorizations, as well as decompositions and sparse representation of data, will ensure that this book is essential reading for engineers, scientists, researchers, industry practitioners and graduate students across signal and image processing; neuroscience; data mining and data analysis; computer science; bioinformatics; speech processing; biomedical engineering; and multimedia.",
"Nonnegative matrix factorization (NMF) has become a prominent technique for the analysis of image databases, text databases, and other information retrieval and clustering applications. The problem is most naturally posed as continuous optimization. In this report, we define an exact version of NMF. Then we establish several results about exact NMF: (i) that it is equivalent to a problem in polyhedral combinatorics; (ii) that it is NP-hard; and (iii) that a polynomial-time local search heuristic exists.",
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model."
]
} |
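The row above notes that NMF decomposes the observation matrix by minimizing a cost function, that the problem is NP-hard, and that sub-optimal strategies such as alternating minimization are used in practice. A minimal sketch of one such classic heuristic, the Lee-Seung multiplicative updates for the Frobenius-norm objective (illustrative only; matrix sizes and iteration counts are assumptions, not taken from the cited works):

```python
import numpy as np

def nmf_multiplicative(X, K, iters=300, eps=1e-9, seed=0):
    """Alternating multiplicative updates minimizing ||X - W H||_F^2.

    A standard heuristic for the NP-hard NMF problem: each update keeps
    the factors nonnegative by construction and never increases the cost.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, K)) + eps
    H = rng.random((K, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Recover a factorization of a synthetic nonnegative matrix of exact rank 2.
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf_multiplicative(X, K=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
assert err < 0.1  # close reconstruction on exact low-rank data
```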
1508.05565 | 2268066489 | We develop necessary and sufficient conditions and a novel provably consistent and efficient algorithm for discovering topics (latent factors) from observations (documents) that are realized from a probabilistic mixture of shared latent factors that have certain properties. Our focus is on the class of topic models in which each shared latent factor contains a novel word that is unique to that factor, a property that has come to be known as separability. Our algorithm is based on the key insight that the novel words correspond to the extreme points of the convex hull formed by the row-vectors of a suitably normalized word co-occurrence matrix. We leverage this geometric insight to establish polynomial computation and sample complexity bounds based on a few isotropic random projections of the rows of the normalized word co-occurrence matrix. Our proposed random-projections-based algorithm is naturally amenable to an efficient distributed implementation and is attractive for modern web-scale distributed data mining applications. | In contrast to the above approaches, a new approach has recently emerged which is based on imposing additional structure on the model parameters @cite_21 @cite_22 @cite_23 @cite_29 @cite_37 @cite_3 . These approaches show that the topic discovery problem lends itself to provably consistent and polynomial-time solutions by making assumptions about the structure of the topic matrix @math and the distribution of the mixing weights. In this category of approaches are methods based on a tensor decomposition of the moments of @math @cite_10 @cite_40 . The algorithm in @cite_10 uses second order empirical moments and is shown to be asymptotically consistent when the topic matrix @math has a special sparsity structure. The algorithm in @cite_37 uses the third order tensor of observations. 
It is, however, strongly tied to the specific structure of the Dirichlet prior on the mixing weights and requires knowledge of the concentration parameters of the Dirichlet distribution @cite_37 . Furthermore, in practice these approaches are computationally intensive and require some initial coarse dimensionality reduction, gradient descent speedups, and GPU acceleration to process large-scale text corpora like the NYT dataset @cite_37 . | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_40",
"@cite_23",
"@cite_10"
],
"mid": [
"2963300816",
"2105617746",
"168764236",
"",
"2952779762",
"",
"2951030277",
"1505105018"
],
"abstract": [
"Community detection is the task of detecting hidden communities from observed interactions. Guaranteed community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we remove this restriction, and provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced by (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning these models via a tensor spectral decomposition method. Our estimator is based on low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is fast and is based on simple linear algebraic operations, e.g., singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters and present a careful finite sample analysis of our learning method. As an important special case, our results match the best known scaling requirements for the (homogeneous) stochastic block model.",
"Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster.",
"Topic modeling for large-scale distributed web-collections requires distributed techniques that account for both computational and communication costs. We consider topic modeling under the separability assumption and develop novel computationally efficient methods that provably achieve the statistical performance of the state-of-the-art centralized approaches while requiring insignificant communication between the distributed document collections. We achieve tradeoffs between communication and computation without actually transmitting the documents. Our scheme is based on exploiting the geometry of normalized word-word cooccurrence matrix and viewing each row of this matrix as a vector in a high-dimensional space. We relate the solid angle subtended by extreme points of the convex hull of these vectors to topic identities and construct distributed schemes to identify topics.",
"",
"The separability assumption (Donoho & Stodden, 2003; , 2012) turns non-negative matrix factorization (NMF) into a tractable problem. Recently, a new class of provably-correct NMF algorithms have emerged under this assumption. In this paper, we reformulate the separable NMF problem as that of finding the extreme rays of the conical hull of a finite set of vectors. From this geometric perspective, we derive new separable NMF algorithms that are highly scalable and empirically noise robust, and have several other favorable properties in relation to existing methods. A parallel implementation of our algorithm demonstrates high scalability on shared- and distributed-memory machines.",
"",
"We present algorithms for topic modeling based on the geometry of cross-document word-frequency patterns. This perspective gains significance under the so called separability condition. This is a condition on existence of novel-words that are unique to each topic. We present a suite of highly efficient algorithms based on data-dependent and random projections of word-frequency patterns to identify novel words and associated topics. We will also discuss the statistical guarantees of the data-dependent projections method based on two mild assumptions on the prior density of topic document matrix. Our key insight here is that the maximum and minimum values of cross-document frequency patterns projected along any direction are associated with novel words. While our sample complexity bounds for topic recovery are similar to the state-of-art, the computational complexity of our random projection scheme scales linearly with the number of documents and the number of words per document. We present several experiments on synthetic and real-world datasets to demonstrate qualitative and quantitative merits of our scheme.",
"This work considers the problem of learning linear Bayesian networks when some of the variables are unobserved. Identifiability and efficient recovery from low-order observable moments are established under a novel graphical constraint. The constraint concerns the expansion properties of the underlying directed acyclic graph (DAG) between observed and unobserved variables in the network, and it is satisfied by many natural families of DAGs that include multi-level DAGs, DAGs with effective depth one, as well as certain families of polytrees."
]
} |
1508.05565 | 2268066489 | We develop necessary and sufficient conditions and a novel provably consistent and efficient algorithm for discovering topics (latent factors) from observations (documents) that are realized from a probabilistic mixture of shared latent factors that have certain properties. Our focus is on the class of topic models in which each shared latent factor contains a novel word that is unique to that factor, a property that has come to be known as separability. Our algorithm is based on the key insight that the novel words correspond to the extreme points of the convex hull formed by the row-vectors of a suitably normalized word co-occurrence matrix. We leverage this geometric insight to establish polynomial computation and sample complexity bounds based on a few isotropic random projections of the rows of the normalized word co-occurrence matrix. Our proposed random-projections-based algorithm is naturally amenable to an efficient distributed implementation and is attractive for modern web-scale distributed data mining applications. | We note that the separability property has been exploited in other recent work as well @cite_0 @cite_13 . In @cite_0 , a singular value decomposition based approach is proposed for topic estimation. In @cite_13 , it is shown that the standard Variational-Bayes approximation can be asymptotically consistent if @math is separable. However, the additional constraints proposed essentially boil down to the requirement that each document contain predominantly only one topic. In addition to assuming the existence of such pure'' documents, @cite_13 also requires a strict initialization. It is thus unclear how this can be achieved using only the observations @math . | {
"cite_N": [
"@cite_0",
"@cite_13"
],
"mid": [
"2950700385",
"1938536253"
],
"abstract": [
"Topic models, such as Latent Dirichlet Allocation (LDA), posit that documents are drawn from admixtures of distributions over words, known as topics. The inference problem of recovering topics from admixtures, is NP-hard. Assuming separability, a strong assumption, [4] gave the first provable algorithm for inference. For LDA model, [6] gave a provable algorithm using tensor-methods. But [4,6] do not learn topic vectors with bounded @math error (a natural measure for probability vectors). Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded @math error. A topic in LDA and other models is essentially characterized by a group of co-occurring words. Motivated by this, we introduce topic specific Catchwords, group of words which occur with strictly greater frequency in a topic than any other topic individually and are required to have high frequency together rather than individually. A major contribution of the paper is to show that under this more realistic assumption, which is empirically verified on real corpora, a singular value decomposition (SVD) based algorithm with a crucial pre-processing step of thresholding, can provably recover the topics from a collection of documents drawn from Dominant admixtures. Dominant admixtures are convex combination of distributions in which one distribution has a significantly higher contribution than others. Apart from the simplicity of the algorithm, the sample complexity has near optimal dependence on @math , the lowest probability that a topic is dominant, and is better than [4]. Empirical evidence shows that on several real world corpora, both Catchwords and Dominant admixture assumptions hold and the proposed algorithm substantially outperforms the state of the art [5].",
"Variational inference is a very efficient and popular heuristic used in various forms in the context of latent variable models. It's closely related to Expectation Maximization (EM), and is applied when exact EM is computationally infeasible. Despite being immensely popular, current theoretical understanding of the effectiveness of variaitonal inference based algorithms is very limited. In this work we provide the first analysis of instances where variational inference algorithms converge to the global optimum, in the setting of topic models. More specifically, we show that variational inference provably learns the optimal parameters of a topic model under natural assumptions on the topic-word matrix and the topic priors. The properties that the topic word matrix must satisfy in our setting are related to the topic expansion assumption introduced in (, 2013), as well as the anchor words assumption in (, 2012c). The assumptions on the topic priors are related to the well known Dirichlet prior, introduced to the area of topic modeling by (, 2003). It is well known that initialization plays a crucial role in how well variational based algorithms perform in practice. The initializations that we use are fairly natural. One of them is similar to what is currently used in LDA-c, the most popular implementation of variational inference for topic models. The other one is an overlapping clustering algorithm, inspired by a work by (, 2014) on dictionary learning, which is very simple and efficient. While our primary goal is to provide insights into when variational inference might work in practice, the multiplicative, rather than the additive nature of the variational inference updates forces us to use fairly non-standard proof arguments, which we believe will be of general interest."
]
} |
1508.05565 | 2268066489 | We develop necessary and sufficient conditions and a novel provably consistent and efficient algorithm for discovering topics (latent factors) from observations (documents) that are realized from a probabilistic mixture of shared latent factors that have certain properties. Our focus is on the class of topic models in which each shared latent factor contains a novel word that is unique to that factor, a property that has come to be known as separability. Our algorithm is based on the key insight that the novel words correspond to the extreme points of the convex hull formed by the row-vectors of a suitably normalized word co-occurrence matrix. We leverage this geometric insight to establish polynomial computation and sample complexity bounds based on a few isotropic random projections of the rows of the normalized word co-occurrence matrix. Our proposed random-projections-based algorithm is naturally amenable to an efficient distributed implementation and is attractive for modern web-scale distributed data mining applications. | The separability property has been re-discovered and exploited in the literature across a number of different fields and has found application in several problems. To the best of our knowledge, this concept was first introduced as the Pure Pixel Index assumption in the Hyperspectral Image unmixing problem @cite_41 . This work assumes the existence of pixels in a hyper-spectral image containing predominantly one species. Separability has also been studied in the NMF literature in the context of ensuring the uniqueness of NMF @cite_33 . Subsequent work has led to the development of NMF algorithms that exploit separability @cite_30 @cite_7 . The uniqueness and correctness results in this line of work has primarily focused on the noiseless case. 
We finally note that separability has also been recently exploited in the problem of learning multiple ranking preferences from pairwise comparisons for personal recommendation systems and information retrieval @cite_26 @cite_32 and has led to provably consistent and efficient estimation algorithms. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_41",
"@cite_32"
],
"mid": [
"2951734015",
"2148039157",
"2140318696",
"2125118959",
"1555549210",
"1752254861"
],
"abstract": [
"This paper describes a new approach, based on linear programming, for computing nonnegative matrix factorizations (NMFs). The key idea is a data-driven model for the factorization where the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C such that X approximately equals CX and some linear constraints. The constraints are chosen to ensure that the matrix C selects features; these features can then be used to find a low-rank NMF of X. A theoretical analysis demonstrates that this approach has guarantees similar to those of the recent NMF algorithm of (2012). In contrast with this earlier work, the proposed method extends to more general noise models and leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation can factor a multigigabyte matrix in a matter of minutes.",
"We visit the following fundamental problem: For a 'generic' model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal preference information), how may one predict revenues from offering a particular assortment of choices? This problem is central to areas within operations research, marketing and econometrics. We present a framework to answer such questions and design a number of tractable algorithms (from a data and computational standpoint) for the same.",
"We interpret non-negative matrix factorization geometrically, as the problem of finding a simplicial cone which contains a cloud of data points and which is contained in the positive orthant. We show that under certain conditions, basically requiring that some of the data are spread across the faces of the positive orthant, there is a unique such simplicial cone. We give examples of synthetic image articulation databases which obey these conditions; these require separated support and factorial sampling. For such databases there is a generative model in terms of 'parts' and NMF correctly identifies the 'parts'. We show that our theoretical results are predictive of the performance of published NMF code, by running the published algorithms on one of our synthetic image articulation databases.",
"In this paper, we study the nonnegative matrix factorization problem under the separability assumption (that is, there exists a cone spanned by a small subset of the columns of the input nonnegative data matrix containing all columns), which is equivalent to the hyperspectral unmixing problem under the linear mixing model and the pure-pixel assumption. We present a family of fast recursive algorithms and prove they are robust under any small perturbations of the input data matrix. This family generalizes several existing hyperspectral unmixing algorithms and hence provides for the first time a theoretical justification of their better practical performance.",
"",
"We propose a topic modeling approach to the prediction of preferences in pairwise comparisons. We develop a new generative model for pairwise comparisons that accounts for multiple shared latent rankings that are prevalent in a population of users. This new model also captures inconsistent user behavior in a natural way. We show how the estimation of latent rankings in the new generative model can be formally reduced to the estimation of topics in a statistically equivalent topic modeling problem. We leverage recent advances in the topic modeling literature to develop an algorithm that can learn shared latent rankings with provable consistency as well as sample and computational complexity guarantees. We demonstrate that the new approach is empirically competitive with the current state-of-the-art approaches in predicting preferences on some semi-synthetic and real world datasets."
]
} |
1508.05400 | 2951205761 | Datacenters (DCs) are deployed on a large scale to support the ever-increasing demand for data processing for various applications. The energy consumption of DCs has become a critical issue. Powering DCs with renewable energy can effectively reduce brown energy consumption and thus alleviate the energy consumption problem. Owing to the geographical deployment of DCs, renewable energy generation and data processing demands usually vary across DCs. Migrating virtual machines (VMs) among DCs according to the availability of renewable energy helps match the energy demands with the renewable energy generation in DCs, and thus maximizes the utilization of renewable energy. Since migrating VMs incurs additional traffic in the network, VM migration is constrained by the network capacity. The inter-datacenter (inter-DC) VM migration problem with network capacity constraints is NP-hard. In this paper, we propose two heuristic algorithms that approximate the optimal VM migration solution. Through extensive simulations, we show that the proposed algorithms, by migrating VMs among DCs, can reduce brown energy consumption by up to 31%. | Owing to the energy demands of DCs, many techniques and algorithms have been proposed to minimize the energy consumption of DCs @cite_11 . | {
"cite_N": [
"@cite_11"
],
"mid": [
"2021689699"
],
"abstract": [
"While a large body of work has recently focused on reducing data center's energy expenses, there exists no prior work on investigating the trade-off between minimizing data center's energy expenditure and maximizing their revenue for various Internet and cloud computing services that they may offer. In this paper, we seek to tackle this shortcoming by proposing a systematic approach to maximize green data center's profit, i.e., revenue minus cost. In this regard, we explicitly take into account practical service-level agreements (SLAs) that currently exist between data centers and their customers. Our model also incorporates various other factors such as availability of local renewable power generation at data centers and the stochastic nature of data centers' workload. Furthermore, we propose a novel optimization-based profit maximization strategy for data centers for two different cases, without and with behind-the-meter renewable generators. We show that the formulated optimization problems in both cases are convex programs; therefore, they are tractable and appropriate for practical implementation. Using various experimental data and via computer simulations, we assess the performance of the proposed optimization-based profit maximization strategy and show that it significantly outperforms two comparable energy and performance management algorithms that are recently proposed in the literature."
]
} |
1508.05400 | 2951205761 | Datacenters (DCs) are deployed on a large scale to support the ever-increasing demand for data processing for various applications. The energy consumption of DCs has become a critical issue. Powering DCs with renewable energy can effectively reduce brown energy consumption and thus alleviate the energy consumption problem. Owing to the geographical deployment of DCs, renewable energy generation and data processing demands usually vary across DCs. Migrating virtual machines (VMs) among DCs according to the availability of renewable energy helps match the energy demands with the renewable energy generation in DCs, and thus maximizes the utilization of renewable energy. Since migrating VMs incurs additional traffic in the network, VM migration is constrained by the network capacity. The inter-datacenter (inter-DC) VM migration problem with network capacity constraints is NP-hard. In this paper, we propose two heuristic algorithms that approximate the optimal VM migration solution. Through extensive simulations, we show that the proposed algorithms, by migrating VMs among DCs, can reduce brown energy consumption by up to 31%. | Fang et al. @cite_12 presented a novel power management strategy for DCs whose target is to minimize the energy consumption of switches in a DC. Cavdar and Alagoz @cite_4 surveyed the energy consumption of server and network devices in intra-DC networks, and showed that both computing resources and network elements should be designed with energy proportionality; in other words, it is better if computing and networking devices are designed with multiple sleep states. The survey also provides several green metrics, such as Power Usage Effectiveness (PUE) and Carbon Usage Effectiveness (CUE). | {
"cite_N": [
"@cite_4",
"@cite_12"
],
"mid": [
"2025501499",
"2167017317"
],
"abstract": [
"The growing economic and environmental cost of data centers due to high energy consumption is becoming a major issue. “Green Data Centers” refers to energy-aware, energy-efficient and CO2-emission-minimizing designs, protocols, devices, infrastructures and algorithms for data centers. Today's data centers are provisioned for peak load. However, it is shown that servers are idle most of the time. Idle servers and the connected network elements consume a considerable amount of energy. In this survey, we identify the key enablers of green data center research. First, we give an overview of the green metrics that are applicable to data centers. Then we describe the most recent stage of research and give a taxonomy of the related work. We focus on computing and networking proposals for green data centers, and also briefly describe other green research related to data centers, such as cloud computing and cooling.",
"Data center consumes increasing amount of power nowadays, together with expanding number of data centers and upgrading data center scale, its power consumption becomes a knotty issue. While main efforts of this research focus on server and storage power reduction, network devices as part of the key components of data centers, also contribute to the overall power consumption as data centers expand. In this paper, we address this problem with two perspectives. First, in a macro level, we attempt to reduce redundant energy usage incurred by network redundancies for load balancing. Second, in the micro level, we design algorithm to limit port rate in order to reduce unnecessary power consumption. Given the guidelines we obtained from problem formulation, we propose a solution based on greedy approach with integration of network traffic and minimization of switch link rate. We also present results from a simulation-based performance evaluation which shows that expected power saving is achieved with tolerable delay."
]
} |
1508.05400 | 2951205761 | Datacenters (DCs) are deployed on a large scale to support the ever-increasing demand for data processing for various applications. The energy consumption of DCs has become a critical issue. Powering DCs with renewable energy can effectively reduce brown energy consumption and thus alleviate the energy consumption problem. Owing to the geographical deployment of DCs, renewable energy generation and data processing demands usually vary across DCs. Migrating virtual machines (VMs) among DCs according to the availability of renewable energy helps match the energy demands with the renewable energy generation in DCs, and thus maximizes the utilization of renewable energy. Since migrating VMs incurs additional traffic in the network, VM migration is constrained by the network capacity. The inter-datacenter (inter-DC) VM migration problem with network capacity constraints is NP-hard. In this paper, we propose two heuristic algorithms that approximate the optimal VM migration solution. Through extensive simulations, we show that the proposed algorithms, by migrating VMs among DCs, can reduce brown energy consumption by up to 31%. | Deng et al. @cite_8 presented five aspects of applying renewable energy in DCs: the renewable energy generation model, the renewable energy prediction model, the planning of green DCs (i.e., various renewable options, availability of energy sources, and different energy storage devices), intra-DC workload scheduling, and inter-DC load balancing. They also discussed the research challenges of powering DCs with renewable energy. Ghamkhari and Mohsenian-Rad @cite_11 developed a mathematical model to capture the trade-off between the energy consumption of a data center and its revenue from offering Internet services. They proposed an algorithm to maximize the revenue of a DC by adapting the number of active servers according to the traffic profile. Gattulli et al. @cite_13 proposed algorithms to reduce @math emissions in DCs by balancing the loads according to the renewable energy generation. These algorithms optimize renewable energy utilization while maintaining a relatively low blocking probability. | {
"cite_N": [
"@cite_11",
"@cite_13",
"@cite_8"
],
"mid": [
"2021689699",
"",
"1991971657"
],
"abstract": [
"While a large body of work has recently focused on reducing data center's energy expenses, there exists no prior work on investigating the trade-off between minimizing data center's energy expenditure and maximizing their revenue for various Internet and cloud computing services that they may offer. In this paper, we seek to tackle this shortcoming by proposing a systematic approach to maximize green data center's profit, i.e., revenue minus cost. In this regard, we explicitly take into account practical service-level agreements (SLAs) that currently exist between data centers and their customers. Our model also incorporates various other factors such as availability of local renewable power generation at data centers and the stochastic nature of data centers' workload. Furthermore, we propose a novel optimization-based profit maximization strategy for data centers for two different cases, without and with behind-the-meter renewable generators. We show that the formulated optimization problems in both cases are convex programs; therefore, they are tractable and appropriate for practical implementation. Using various experimental data and via computer simulations, we assess the performance of the proposed optimization-based profit maximization strategy and show that it significantly outperforms two comparable energy and performance management algorithms that are recently proposed in the literature.",
"",
"The proliferation of cloud computing has promoted the wide deployment of largescale datacenters with tremendous power consumption and high carbon emission. To reduce power cost and carbon footprint, an increasing number of cloud service providers have considered green datacenters with renewable energy sources, such as solar or wind. However, unlike the stable supply of grid energy, it is challenging to utilize and realize renewable energy due to the uncertain, intermittent and variable nature. In this article, we provide a taxonomy of the state-of-the-art research in applying renewable energy in cloud computing datacenters from five key aspects, including generation models and prediction methods of renewable energy, capacity planning of green datacenters, intra-datacenter workload scheduling and load balancing across geographically distributed datacenters. By exploring new research challenges involved in managing the use of renewable energy in datacenters, this article attempts to address why, when, where and how to leverage renewable energy in datacenters, also with a focus on future research avenues."
]
} |
1508.05400 | 2951205761 | Datacenters (DCs) are deployed on a large scale to support the ever-increasing demand for data processing for various applications. The energy consumption of DCs has become a critical issue. Powering DCs with renewable energy can effectively reduce brown energy consumption and thus alleviate the energy consumption problem. Owing to the geographical deployment of DCs, renewable energy generation and data processing demands usually vary across DCs. Migrating virtual machines (VMs) among DCs according to the availability of renewable energy helps match the energy demands with the renewable energy generation in DCs, and thus maximizes the utilization of renewable energy. Since migrating VMs incurs additional traffic in the network, VM migration is constrained by the network capacity. The inter-datacenter (inter-DC) VM migration problem with network capacity constraints is NP-hard. In this paper, we propose two heuristic algorithms that approximate the optimal VM migration solution. Through extensive simulations, we show that the proposed algorithms, by migrating VMs among DCs, can reduce brown energy consumption by up to 31%. | Mandal et al. @cite_14 studied green-energy-aware VM migration techniques to reduce the energy consumption of DCs. They proposed an algorithm that enhances green energy utilization by migrating VMs according to the available green energy in DCs. However, they did not consider the network constraints while migrating VMs among DCs. In optical networks, the available spectrum is limited, and the large amount of traffic generated by VM migration may congest the optical network and increase its blocking rate. Therefore, it is important to consider the network constraints in migrating VMs. In this paper, we propose algorithms to solve the green-energy-aware inter-DC VM migration problem with network constraints. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1983667143"
],
"abstract": [
"Cloud computing is the new paradigm of operation in information technology. While cloud computing infrastructures have benefits, their energy consumption is becoming a growing concern. Data centers, which are used to provide the infrastructure and resource pool for cloud computing, consume a large amount of energy. Future energy consumption predictions of these data centers are even bigger concerns. To reduce this energy consumption, and hence the carbon footprint and greenhouse gas emission of cloud computing, and information technology in general, energy-efficient methods of operation have to be investigated and adopted. In addition, renewable energy usage in place of non-renewable can also reduce carbon emission. However, due to its intermittency and volatility, renewable energy cannot be used to its full potential. In this study, we introduce the renewable-energy- aware cloud service and virtual machine migration to relocate energy demand using dynamic and flexible cloud resource allocation techniques, and help overcome the challenges of renewable energy. Results from a U.S.-wide cloud network infrastructure show that, using simple migration techniques, up to 30 percent nonrenewable energy can be replaced by renewable energy, while consuming only a small amount of extra resources and energy to perform demand relocation."
]
} |
1508.04675 | 2199658097 | We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of the results of Kahn, Galvin and Tetali, and Zhao showing that a union of copies of @math maximizes the number of independent sets and the independence polynomial of a d-regular graph. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of @math . Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markström. In probabilistic language, our main theorems state that for all d-regular graphs and all @math , the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity @math are maximized by @math . Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. | The results of Kahn @cite_27 @cite_1 , Galvin and Tetali @cite_19 , and Zhao @cite_7 culminating in the fact that @math is maximized over @math -regular graphs by @math are based on the entropy method, a powerful tool for the type of problems we address here. Apart from the results mentioned above, see @cite_10 and @cite_15 for surveys of the method. A direct application of the method requires the graph @math to be bipartite. Zhao @cite_25 showed that in some, but not all, cases this restriction can be removed by using a 'bipartite swapping trick'. An entropy-free proof of Galvin and Tetali's general theorem on counting homomorphisms was recently given by Lubetzky and Zhao @cite_26 . Our method also does not use entropy, but in contrast to the other proofs it works directly for all @math -regular graphs, without a reduction to the bipartite case. The method deals directly with the hard-core model instead of counting homomorphisms and seems to require more problem-specific information than the entropy method; a question for future work is to extend the method to a more general class of homomorphisms. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_1",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_10",
"@cite_25"
],
"mid": [
"1825819013",
"2164181738",
"1834605046",
"2963574247",
"1986646471",
"1593988169",
"",
"2106299455"
],
"abstract": [
"The following question is due to Chatterjee and Varadhan 2011. Fix 0<p<r<1 and take G ~ G(n,p), the Erdős–Rényi random graph with edge density p, conditioned to have at least as many triangles as the typical G(n,r). Is G close in cut-distance to a typical G(n,r)? Via a beautiful new framework for large deviation principles in G(n,p), Chatterjee and Varadhan gave bounds on the replica symmetric phase, the region of (p,r) where the answer is positive. They further showed that for any small enough p there are at least two phase transitions as r varies. We settle this question by identifying the replica symmetric phase for triangles and more generally for any fixed d-regular graph. By analyzing the variational problem arising from the framework of Chatterjee and Varadhan we show that the replica symmetry phase consists of all (p,r) such that (r^d, h_p(r)) lies on the convex minorant of x ↦ h_p(x^{1/d}), where h_p is the rate function of a binomial with parameter p. In particular, the answer for triangles involves h_p(x) rather than the natural guess of h_p(x^{1/3}) where symmetry was previously known. Analogous results are obtained for linear hypergraphs as well as the setting where the largest eigenvalue of G ~ G(n,p) is conditioned to exceed the typical value of the largest eigenvalue of G(n,r). Building on the work of Chatterjee and Diaconis 2012 we obtain additional results on a class of exponential random graphs including a new range of parameters where symmetry breaking occurs. En route we give a short alternative proof of a graph homomorphism inequality due to Kahn 2001 and Galvin and Tetali 2004. © 2014 Wiley Periodicals, Inc. Random Struct. Alg., 47, 109-146, 2015",
"We show that the number of independent sets in an N-vertex, d-regular graph is at most (2^{d+1} − 1)^{N/(2d)}, where the bound is sharp for a disjoint union of complete d-regular bipartite graphs. This settles a conjecture of Alon in 1991 and Kahn in 2001. Kahn proved the bound when the graph is assumed to be bipartite. We give a short proof that reduces the general case to the bipartite case. Our method also works for a weighted generalization, i.e., an upper bound for the independence polynomial of a regular graph.",
"For n-regular, N-vertex bipartite graphs with bipartition A ∪ B, a precise bound is given for the sum over independent sets I of the quantity μ^{|I ∩ A|} λ^{|I ∩ B|}. (In other language, this is bounding the partition function for certain instances of the hard-core model.) This result is then extended to graded partially ordered sets, which in particular provides a simple proof of a well-known bound for Dedekind's Problem given by Kleitman and Markowsky in 1975.",
"For given graphs G and H, let |Hom(G,H)| denote the set of graph homomorphisms from G to H. We show that for any finite, n-regular, bipartite graph G and any finite graph H (perhaps with loops), |Hom(G,H)| is maximum when G is a disjoint union of Kn,n’s. This generalizes a result of J. Kahn on the number of independent sets in a regular bipartite graph. We also give the asymptotics of the logarithm of |Hom(G,H)| in terms of a simply expressed parameter of H. We also consider weighted versions of these results which may be viewed as statements about the partition functions of certain models of physical systems with hard constraints.",
"We use entropy ideas to study hard-core distributions on the independent sets of a finite, regular bipartite graph, specifically distributions according to which each independent set I is chosen with probability proportional to λ∣I∣ for some fixed λ > 0. Among the results obtained are rather precise bounds on occupation probabilities; a ‘phase transition’ statement for Hamming cubes; and an exact upper bound on the number of independent sets in an n-regular bipartite graph on a given number of vertices.",
"We explain the notion of the entropy of a discrete random variable, and derive some of its basic properties. We then show through examples how entropy can be useful as a combinatorial enumeration tool. We end with a few open questions.",
"",
"We provide an upper bound to the number of graph homomorphisms from G to H, where H is a fixed graph with certain properties, and G varies over all N-vertex, d-regular graphs. This result generalizes a recently resolved conjecture of Alon and Kahn on the number of independent sets. We build on the work of Galvin and Tet ali, who studied the number of graph homomorphisms from G to H when G is bipartite. We also apply our techniques to graph colorings and stable set polytopes."
]
} |
1508.04675 | 2199658097 | We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of the results of Kahn, Galvin and Tetali, and Zhao showing that a union of copies of @math maximizes the number of independent sets and the independence polynomial of a d-regular graph. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of @math . Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markström. In probabilistic language, our main theorems state that for all d-regular graphs and all @math , the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity @math are maximized by @math . Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. | The technique of writing the expected size of an independent set in two ways (as we do here) was used by Shearer @cite_9 in proving lower bounds on the average size of an independent set in @math -free graphs, and then by Alon @cite_29 for graphs in which all vertex neighborhoods are @math -colorable. The idea of bounding the occupancy fraction instead of the partition function comes in part from work of the third author @cite_24 on improving, at low densities, the bounds on matchings of a given size in Ilinca and Kahn @cite_5 and on independent sets of a given size in Carroll, Galvin, and Tetali @cite_12 . The use of linear programming for counting graph homomorphisms appears in Kopparty and Rossman @cite_11 , where they use a combination of entropy and linear programming to compute a related quantity, the homomorphism domination exponent, for chordal and series-parallel graphs. | {
"cite_N": [
"@cite_9",
"@cite_29",
"@cite_24",
"@cite_5",
"@cite_12",
"@cite_11"
],
"mid": [
"2162218992",
"2078604563",
"2963560773",
"2032529651",
"2083618772",
"2062088877"
],
"abstract": [
"Let G be a regular graph of degree d on n points which contains no K_r (r ≥ 4). Let α be the independence number of G. Then we show for large d that α ≥ c(r)·n·[formula omitted in source]. © 1995 John Wiley & Sons, Inc.",
"Let G = (V,E) be a graph on n vertices with average degree t ≥ 1 in which for every vertex v ∈ V the induced subgraph on the set of all neighbors of v is r-colorable. We show that the independence number of G is at least c·n·log t / (t·log(r+1)), for some absolute positive constant c. This strengthens a well known result of Ajtai, Komlós and Szemerédi. Combining their result with some probabilistic arguments, we prove the following Ramsey type theorem, conjectured by Erdős in 1979. There exists an absolute constant c′ > 0 so that in every graph on n vertices in which any set of ⌊√n⌋ vertices contains at least one edge, there is some set of ⌊√n⌋ vertices that contains at least c′ √n log n edges.",
"",
"We give upper bounds for the number Φ_ℓ(G) of matchings of size ℓ in (i) bipartite graphs G=(X ∪ Y, E) with specified degrees d_x (x ∈ X), and (ii) general graphs G=(V,E) with all degrees specified. In particular, for d-regular, N-vertex graphs, our bound is best possible up to an error factor of the form exp[o_d(1)·N], where o_d(1) → 0 as d → ∞. This represents the best progress to date on the 'Upper Matching Conjecture' of Friedland, Krop, Lundow and Markström. Some further possibilities are also suggested.",
"We use an entropy based method to study two graph maximization problems. We upper bound the number of matchings of fixed size ℓ in a d-regular graph on N vertices. For ℓ/N bounded away from 0 and 1, the logarithm of the bound we obtain agrees in its leading term with the logarithm of the number of matchings of size ℓ in the graph consisting of N/(2d) disjoint copies of K_{d,d}. This provides asymptotic evidence for a conjecture of S. We also obtain an analogous result for independent sets of a fixed size in regular graphs, giving asymptotic evidence for a conjecture of J. Kahn. Our bounds on the number of matchings and independent sets of a fixed size are derived from bounds on the partition function (or generating polynomial) for matchings and independent sets.",
"We initiate a study of the homomorphism domination exponent of a pair of graphs F and G, defined as the maximum real number c such that |Hom(F,T)|>=|Hom(G,T)|^c for every graph T. The problem of determining whether HDE(F,G)>=1 is known as the homomorphism domination problem, and its decidability is an important open question arising in the theory of relational databases. We investigate the combinatorial and computational properties of the homomorphism domination exponent, proving upper and lower bounds and isolating classes of graphs F and G for which HDE(F,G) is computable. In particular, we present a linear program computing HDE(F,G) in the special case, where F is chordal and G is series-parallel."
]
} |
1508.04675 | 2199658097 | We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of the results of Kahn, Galvin and Tetali, and Zhao showing that a union of copies of @math maximizes the number of independent sets and the independence polynomial of a d-regular graph. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of @math . Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markström. In probabilistic language, our main theorems state that for all d-regular graphs and all @math , the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity @math are maximized by @math . Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. | For matchings, Carroll, Galvin, and Tetali @cite_12 used the entropy method to give an upper bound of @math on @math over @math -regular graphs. It was previously conjectured (e.g., @cite_16 @cite_15 ) that @math maximizes @math over all @math -regular graphs. This is an implication of our Theorem . | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_12"
],
"mid": [
"1593988169",
"2155237361",
"2083618772"
],
"abstract": [
"We explain the notion of the entropy of a discrete random variable, and derive some of its basic properties. We then show through examples how entropy can be useful as a combinatorial enumeration tool. We end with a few open questions.",
"For the set of graphs with a given degree sequence, consisting of any number of @math and @math , and its subset of bipartite graphs, we characterize the optimal graphs who maximize and minimize the number of @math -matchings. We find the expected value of the number of @math -matchings of @math -regular bipartite graphs on @math vertices with respect to the two standard measures. We state and discuss the conjectured upper and lower bounds for @math -matchings in @math -regular bipartite graphs on @math vertices, and their asymptotic versions for infinite @math -regular bipartite graphs. We prove these conjectures for @math -regular bipartite graphs and for @math -matchings with @math .",
"We use an entropy based method to study two graph maximization problems. We upper bound the number of matchings of fixed size ℓ in a d-regular graph on N vertices. For ℓ/N bounded away from 0 and 1, the logarithm of the bound we obtain agrees in its leading term with the logarithm of the number of matchings of size ℓ in the graph consisting of N/(2d) disjoint copies of K_{d,d}. This provides asymptotic evidence for a conjecture of S. We also obtain an analogous result for independent sets of a fixed size in regular graphs, giving asymptotic evidence for a conjecture of J. Kahn. Our bounds on the number of matchings and independent sets of a fixed size are derived from bounds on the partition function (or generating polynomial) for matchings and independent sets."
]
} |
1508.04675 | 2199658097 | We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of the results of Kahn, Galvin and Tet ali, and Zhao showing that a union of copies of @math maximizes the number of independent sets and the independence polynomial of a d-regular graph. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of @math . Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markstr "om. In probabilistic language, our main theorems state that for all d-regular graphs and all @math , the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity @math are maximized by @math . Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. | In @cite_31 , Csikv 'a ri proved the lower matching conjecture' of @cite_16 and in @cite_22 gave a new lower bound on the number of perfect matchings of @math -regular, vertex-transitive, bipartite graphs, in both comparing an arbitrary graph with the infinite @math -regular tree (see also the recent extension by Lelarge @cite_13 to irregular graphs). Proposition 2.10 in @cite_22 states that the edge occupancy fraction of any @math -regular, vertex-transitive, bipartite graph is at least that of the infinite @math -regular tree; in Theorem we prove an analogous result for independent sets. Csikv 'a ri's techniques in the two papers are different than the methods of this paper, but similar in that he bounds the occupancy fraction instead of directly working with the partition function. 
His results rely on an elegant interplay between the Heilmann-Lieb theorem @cite_4 and Benjamini-Schramm convergence of bounded-degree graphs. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"2037389370",
"1485502831",
"2128943288",
"2155237361",
"2144865572"
],
"abstract": [
"We investigate the general monomer-dimer partition function,P(x), which is a polynomial in the monomer activity,x, with coefficients depending on the dimer activities. Our main result is thatP(x) has its zeros on the imaginary axis when the dimer activities are nonnegative. Therefore, no monomer-dimer system can have a phase transition as a function of monomer density except, possibly, when the monomer density is minimal (i.e.x=0). Elaborating on this theme we prove the existence and analyticity of correlation functions (away fromx=0) in the thermodynamic limit. Among other things we obtain bounds on the compressibility and derive a new variable in which to make an expansion of the free energy that converges down to the minimal monomer density. We also relate the monomer-dimer problem to the Heisenberg and Ising models of a magnet and derive Christoffell-Darboux formulas for the monomer-dimer and Ising model partition functions. This casts the Ising model in a new light and provides an alternative proof of the Lee-Yang circle theorem. We also derive joint complex analyticity domains in the monomer and dimer activities. Our considerations are independent of geometry and hence are valid for any dimensionality.",
"A theorem of A. Schrijver asserts that a d-regular bipartite graph on 2n vertices has at least @math perfect matchings. L. Gurvits gave an extension of Schrijver’s theorem for matchings of density p. In this paper we give a stronger version of Gurvits’s theorem in the case of vertex-transitive bipartite graphs. This stronger version in particular implies that for every positive integer k, there exists a positive constant c(k) such that if a d-regular vertex-transitive bipartite graph on 2n vertices contains a cycle of length at most k, then it has at least @math perfect matchings.",
"Friedland's Lower Matching Conjecture asserts that if @math is a @math --regular bipartite graph on @math vertices, and @math denotes the number of matchings of size @math , then @math where @math . When @math , this conjecture reduces to a theorem of Schrijver which says that a @math --regular bipartite graph on @math vertices has at least @math perfect matchings. L. Gurvits proved an asymptotic version of the Lower Matching Conjecture, namely he proved that @math In this paper, we prove the Lower Matching Conjecture. In fact, we will prove a slightly stronger statement which gives an extra @math factor compared to the conjecture if @math is separated away from @math and @math , and is tight up to a constant factor if @math is separated away from @math . We will also give a new proof of Gurvits's and Schrijver's theorems, and we extend these theorems to @math --biregular bipartite graphs.",
"For the set of graphs with a given degree sequence, consisting of any number of @math and @math , and its subset of bipartite graphs, we characterize the optimal graphs who maximize and minimize the number of @math -matchings. We find the expected value of the number of @math -matchings of @math -regular bipartite graphs on @math vertices with respect to the two standard measures. We state and discuss the conjectured upper and lower bounds for @math -matchings in @math -regular bipartite graphs on @math vertices, and their asymptotic versions for infinite @math -regular bipartite graphs. We prove these conjectures for @math -regular bipartite graphs and for @math -matchings with @math .",
"We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees."
]
} |
1508.04675 | 2199658097 | We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. For independent sets, this theorem is a strengthening of the results of Kahn, Galvin and Tetali, and Zhao showing that a union of copies of @math maximizes the number of independent sets and the independence polynomial of a d-regular graph. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of @math . Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markström. In probabilistic language, our main theorems state that for all d-regular graphs and all @math , the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity @math are maximized by @math . Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. | In statistical physics, the analogue of the occupancy fraction in a general spin system is called the magnetization; on general graphs it is @math -hard to compute the magnetization in the ferromagnetic Ising model, the monomer-dimer model, and the hard-core model @cite_21 @cite_3 . | {
"cite_N": [
"@cite_21",
"@cite_3"
],
"mid": [
"2570168223",
"2215999167"
],
"abstract": [
"We study the complexity of computing average quantities related to spin systems, such as the mean magnetization and susceptibility in the ferromagnetic Ising model, and the average dimer count (or average size of a matching) in the monomer-dimer model. By establishing connections between the complexity of computing these averages and the location of the complex zeros of the partition function, we show that these averages are #P-hard to compute, and hence, under standard assumptions, computationally intractable. In the case of the Ising model, our approach requires us to prove an extension of the famous Lee–Yang Theorem from the 1950s.",
"We study the computational complexity of several natural problems arising in statistical physics and combinatorics. In particular, we consider the following problems: the mean magnetization and mean energy of the Ising model (both the ferromagnetic and the anti-ferromagnetic settings), the average size of an independent set in the hard core model, and the average size of a matching in the monomer-dimer model. We prove that for all non-trivial values of the underlying model parameters, exactly computing these averages is #P-hard. In contrast to previous results of Sinclair and Srivastava (2013) for the mean magnetization of the ferromagnetic Ising model, our approach does not use any Lee-Yang type theorems about the complex zeros of partition functions. Indeed, it was due to the lack of suitable Lee-Yang theorems for models such as the anti-ferromagnetic Ising model that some of the problems we study here were left open by Sinclair and Srivastava. In this paper, we instead use some relatively simple and well-known ideas from the theory of automatic symbolic integration to complete our hardness reductions."
]
} |
1508.04999 | 2286647287 | Feature learning and deep learning have drawn great attention in recent years as a way of transforming input data into more effective representations using learning algorithms. Such interest has grown in the area of music information retrieval (MIR) as well, particularly in music audio classification tasks such as auto-tagging. In this paper, we present a two-stage learning model to effectively predict multiple labels from music audio. The first stage learns to project local spectral patterns of an audio track onto a high-dimensional sparse space in an unsupervised manner and summarizes the audio track as a bag-of-features. The second stage successively performs the unsupervised learning on the bag-of-features in a layer-by-layer manner to initialize a deep neural network and finally fine-tunes it with the tag labels. Through the experiment, we rigorously examine training choices and tuning parameters, and show that the model achieves high performance on Magnatagatune, a popularly used dataset in music auto-tagging. | One group investigated unsupervised feature learning based on sparse representations, for example, using K-means @cite_46 @cite_15 @cite_24 @cite_4 , sparse coding @cite_2 @cite_54 @cite_43 @cite_15 @cite_25 and restricted Boltzmann machine (RBM) @cite_23 @cite_15 . The majority of them focused on capturing local structures of music data over one or multiple audio frames to learn high-dimensional single-layer features. They summarized the locally learned features as a bag-of-features (also called bag-of-frames, for example, in @cite_42 ) and fed them into a separate classifier. The advantage of this single-layer feature learning is that it is quite simple to learn a large set of feature bases, and these generally provide good performance @cite_44 . In addition, it is easy to handle the variable length of audio tracks as they usually represent song-level features with summary statistics of the locally learned features (i.e. temporal pooling).
However, this single-layer approach is limited to learning local features only. Some works used two or more layers to capture segment-level features @cite_34 @cite_19 . Although they showed a slight improvement by combining local and segment-level features, learning hierarchical structures of music in an unsupervised way remains highly challenging. | {
"cite_N": [
"@cite_4",
"@cite_15",
"@cite_54",
"@cite_42",
"@cite_24",
"@cite_43",
"@cite_44",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_46",
"@cite_34",
"@cite_25"
],
"mid": [
"",
"",
"",
"2018083238",
"",
"",
"2118858186",
"",
"2542729039",
"",
"2188492526",
"2107789863",
""
],
"abstract": [
"",
"",
"",
"There has been an increasing attention on learning feature representations from the complex, high-dimensional audio data applied in various music information retrieval (MIR) problems. Unsupervised feature learning techniques, such as sparse coding and deep belief networks have been utilized to represent music information as a term-document structure comprising of elementary audio codewords. Despite the widespread use of such bag-of-frames (BoF) model, few attempts have been made to systematically compare different component settings. Moreover, whether techniques developed in the text retrieval community are applicable to audio codewords is poorly understood. To further our understanding of the BoF model, we present in this paper a comprehensive evaluation that compares a large number of BoF variants on three different MIR tasks, by considering different ways of low-level feature representation, codebook construction, codeword assignment, segment-level and song-level feature pooling, tf-idf term weighting, power normalization, and dimension reduction. Our evaluations lead to the following findings: 1) modeling music information by two levels of abstraction improves the result for difficult tasks such as predominant instrument recognition, 2) tf-idf weighting and power normalization improve system performance in general, 3) topic modeling methods such as latent Dirichlet allocation does not work for audio codewords.",
"",
"",
"A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several othe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the eect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the eect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6 and 97.2 respectively).",
"",
"Existing content-based music similarity estimation methods largely build on complex hand-crafted feature extractors, which are difficult to engineer. As an alternative, unsupervised machine learning allows to learn features empirically from data. We train a recently proposed model, the mean-covariance Restricted Boltzmann Machine, on music spectrogram excerpts and employ it for music similarity estimation. In k-NN based genre retrieval experiments on three datasets, it clearly outperforms MFCC-based methods, beats simple unsupervised feature extraction using k-Means and comes close to the state-of-the-art. This shows that unsupervised feature extraction poses a viable alternative to engineered features.",
"",
"In this work we present a system to automatically learn features from audio in an unsupervised manner. Our method first learns an overcomplete dictionary which can be used to sparsely decompose log-scaled spectrograms. It then trains an efficient encoder which quickly maps new inputs to approximations of their sparse representations using the learned dictionary. This avoids expensive iterative procedures usually required to infer sparse codes. We then use these sparse codes as inputs for a linear Support Vector Machine (SVM). Our system achieves 83.4 accuracy in predicting genres on the GTZAN dataset, which is competitive with current state-of-the-art approaches. Furthermore, the use of a simple linear classifier combined with a fast feature extraction system allows our approach to scale well to large datasets.",
"In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks. In the case of speech data, we show that the learned features correspond to phones phonemes. In addition, our feature representations learned from unlabeled audio data show very good performance for multiple audio classification tasks. We hope that this paper will inspire more research on deep learning approaches applied to a wide range of audio recognition tasks.",
""
]
} |
1508.04999 | 2286647287 | Feature learning and deep learning have drawn great attention in recent years as a way of transforming input data into more effective representations using learning algorithms. Such interest has grown in the area of music information retrieval (MIR) as well, particularly in music audio classification tasks such as auto-tagging. In this paper, we present a two-stage learning model to effectively predict multiple labels from music audio. The first stage learns to project local spectral patterns of an audio track onto a high-dimensional sparse space in an unsupervised manner and summarizes the audio track as a bag-of-features. The second stage successively performs the unsupervised learning on the bag-of-features in a layer-by-layer manner to initialize a deep neural network and finally fine-tunes it with the tag labels. Through the experiment, we rigorously examine training choices and tuning parameters, and show that the model achieves high performance on Magnatagatune, a popularly used dataset in music auto-tagging. | The second group used supervised learning that directly maps audio and labels via multi-layered neural networks. One approach was mapping single frames of spectrogram @cite_6 @cite_8 @cite_16 or summarized spectrogram @cite_37 to labels via DNNs, where some of them pretrain the networks with deep belief networks @cite_6 @cite_8 @cite_20 . They used the hidden-unit activations of DNNs as local audio features. While this frame-level audio-to-label mapping is somewhat counter-intuitive, the supervised approach makes the learned features more discriminative for the given task, being directly comparable to hand-engineered features such as MFCC. The other approach in this group used CNNs where the convolution setting can take longer audio frames and the networks directly predict labels @cite_1 @cite_9 @cite_53 @cite_47 . 
CNNs have become the de facto standard in image classification since the breakthrough in the ImageNet challenge @cite_12 . As such, the CNN-based approach has shown great performance in music auto-tagging @cite_27 @cite_47 . However, in order to achieve high performance, CNNs need to be trained with a large dataset and a huge number of parameters. Otherwise, CNNs are not necessarily better than the bag-of-features approach @cite_38 @cite_47 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_8",
"@cite_9",
"@cite_53",
"@cite_1",
"@cite_6",
"@cite_27",
"@cite_47",
"@cite_16",
"@cite_12",
"@cite_20"
],
"mid": [
"2242773987",
"953419521",
"",
"",
"",
"2406196141",
"2295991281",
"2295870502",
"",
"",
"2042390666",
""
],
"abstract": [
"Content-based music information retrieval tasks are typically solved with a two-stage approach: features are extracted from music audio signals, and are then used as input to a regressor or classifier. These features can be engineered or learned from data. Although the former approach was dominant in the past, feature learning has started to receive more attention from the MIR community in recent years. Recent results in feature learning indicate that simple algorithms such as K-means can be very effective, sometimes surpassing more complicated approaches based on restricted Boltzmann machines, autoencoders or sparse coding. Furthermore, there has been increased interest in multiscale representations of music audio recently. Such representations are more versatile because music audio exhibits structure on multiple timescales, which are relevant for different MIR tasks to varying degrees. We develop and compare three approaches to multiscale audio feature learning using the spherical K-means algorithm. We evaluate them in an automatic tagging task and a similarity metric learning task on the Magnatagatune dataset.",
"While emotion-based music organization is a natural process for humans, quantifying it empirically proves to be a very difficult task, and as such no dominant feature representation for music emotion recognition has yet emerged. Much of the difficulty in developing emotion-based features is the ambiguity of the ground-truth. Even using the smallest time window, opinions about emotion are bound to vary and reflect some disagreement between listeners. In previous work, we have modeled human response labels to music in the arousal-valence (A-V) emotion space with time-varying stochastic distributions. Current methods for automatic detection of emotion in music seek performance increases by combining several feature domains (e.g. loudness, timbre, harmony, rhythm). Such work has focused largely in dimensionality reduction for minor classification performance gains, but has provided little insight into the relationship between audio and emotional associations. In this work, we seek to employ regression-based deep belief networks to learn features directly from magnitude spectra. Taking into account the dynamic nature of music, we investigate combining multiple timescales of aggregated magnitude spectra as a basis for feature learning.",
"",
"",
"",
"This paper analyzes some of the challenges in performing automatic annotation and ranking of music audio, and proposes a few improvements. First, we motivate the use of principal component analysis on the mel-scaled spectrum. Secondly, we present an analysis of the impact of the selection of pooling functions for summarization of the features over time. We show that combining several pooling functions improves the performance of the system. Finally, we introduce the idea of multiscale learning. By incorporating these ideas in our model, we obtained state-of-the-art performance on the Magnatagatune dataset.",
"Feature extraction is a crucial part of many MIR tasks. In this work, we present a system that can automatically extract relevant features from audio for a given task. The feature extraction system consists of a Deep Belief Network (DBN) on Discrete Fourier Transforms (DFTs) of the audio. We then use the activations of the trained network as inputs for a non-linear Support Vector Machine (SVM) classifier. In particular, we learned the features to solve the task of genre recognition. The learned features perform significantly better than MFCCs. Moreover, we obtain a classification accuracy of 84.3 on the Tzanetakis dataset, which compares favorably against state-of-the-art genre classifiers using frame-based features. We also applied these same features to the task of auto-tagging. The autotaggers trained with our features performed better than those that were trained with timbral and temporal features.",
"Low-level aspects of music audio such as timbre, loudness and pitch, can be relatively well modelled by features extracted from short-time windows. Higher-level aspects such as melody, harmony, phrasing and rhythm, on the other hand, are salient only at larger timescales and require a better representation of time dynamics. For various music information retrieval tasks, one would benefit from modelling both low and high level aspects in a unified feature extraction framework. By combining adaptive features computed at different timescales, short-timescale events are put in context by detecting longer timescale features. In this paper, we describe a method to obtain such multi-scale features and evaluate its effectiveness for automatic tag annotation.",
"",
"",
"Musical onset detection is one of the most elementary tasks in music analysis, but still only solved imperfectly for polyphonic music signals. Interpreted as a computer vision problem in spectrograms, Convolutional Neural Networks (CNNs) seem to be an ideal fit. On a dataset of about 100 minutes of music with 26k annotated onsets, we show that CNNs outperform the previous state-of-the-art while requiring less manual preprocessing. Investigating their inner workings, we find two key advantages over hand-designed methods: Using separate detectors for percussive and harmonic onsets, and combining results from many minor variations of the same scheme. The results suggest that even for well-understood signal processing tasks, machine learning can be superior to knowledge engineering.",
""
]
} |
1508.04531 | 2221117007 | Norms are known to be a major factor determining human behavior. It has also been shown that norms can be a quite effective tool for building agent-based societies. Various normative architectures have been proposed for designing normative multi-agent systems (NorMAS). Due to the human nature of the concept of norms, many of these architectures are built on theories from the social sciences. Tipping point theory, as briefly discussed in this paper, seems to have great potential for designing normative architectures. This theory deals with the factors that affect social epidemics that arise in human societies. In this paper, we apply the main concepts of this theory to agent-based normative architectures. We show several ways to implement these concepts and study their effects in an agent-based normative scenario. | Hollander and Wu @cite_8 refer to three categories of normative studies in the social sciences: 1) the social function of norms @cite_31 , 2) the impact of social norms @cite_7 , and 3) the mechanisms leading to the emergence and creation of norms @cite_35 . In the context of social function, norms are often concerned with the oughtness and expectation of agent behavior, where oughtness refers to the condition where an agent should or should not perform an action, and expectation refers to the behavior that other agents expect to observe from that agent @cite_37 @cite_9 . An example of work belonging to this category is Boella and van der Torre's architecture containing separate subsystems for counts-as conditionals, conditional obligations, and conditional permissions @cite_15 .
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_7",
"@cite_8",
"@cite_15",
"@cite_9",
"@cite_31"
],
"mid": [
"2017492123",
"1827387789",
"1567484702",
"2133901799",
"2067743023",
"",
"2237315243"
],
"abstract": [
"Urban simulations are an important tool for analyzing many policy questions relating to the usage of public space, roads, and communal transportation; they can be used to predict the long-term impact of new construction projects, traffic restrictions, and zoning laws. However, it is unwise to rely upon predictions from a single model since each technique possesses different strengths and weaknesses and can be highly sensitive to the choice of parameters and initial conditions. In this article, we describe a hybrid approach for combining agent-based and stochastic simulations (Markov chain Monte Carlo, MCMC) to improve the accuracy and reduce the variance of long-term predictions. In our proposed approach, the agent-based model is used to bootstrap the proposal distribution for the MCMC estimator. To demonstrate the applicability of our modeling technique, this article presents a case study describing the usage of our hybrid simulation method for forecasting transportation patterns and parking lot utilization on a large university campus. A comparison of our simulation results against an independently collected dataset reveals that our hybrid approach accurately predicts parking lot usage and performs significantly better than other comparable modeling techniques. Developing novel architectures for combining the predictions of agent-based models can produce insights that are different than simply selecting the best model.",
"The Markov Chain Monte Carlo (MCMC) family of methods form a valuable part of the toolbox of social modeling and prediction techniques, enabling modelers to generate samples and summary statistics of a population of interest with minimal information. It has been used successfully to model changes over time in many types of social systems, including patterns of disease spread, adolescent smoking, and geopolitical conflicts. In MCMC an initial proposal distribution is iteratively refined until it approximates the posterior distribution. However, the selection of the proposal distribution can have a significant impact on model convergence. In this paper, we propose a new hybrid modeling technique in which an agent-based model is used to initialize the proposal distribution of the MCMC simulation. We demonstrate the use of our modeling technique in an urban transportation prediction scenario and show that the hybrid combined model produces more accurate predictions than either of the parent models.",
"Holonic multi-agent systems are a special category of multi-agent systems that best fit to environments with numerous agents and high complexity. Like in general multi-agent systems, the agents in the holonic system may negotiate with each other. These systems have their own characteristics and structure, for which a specific negotiation mechanism is required. This mechanism should be simple, fast and operable in real world applications. It would be better to equip negotiators with a learning method which can efficiently use the available information. The learning method should itself be fast, too. Additionally, this mechanism should match the special characteristics of the holonic multi-agent systems. In this paper, we introduce such a negotiation method. Experimental results demonstrate the efficiency of this new approach.",
"Recent years have seen an increase in the application of ideas from the social sciences to computational systems. Nowhere has this been more pronounced than in the domain of multiagent systems. Because multiagent systems are composed of multiple individual agents interacting with each other many parallels can be drawn to human and animal societies. One of the main challenges currently faced in multiagent systems research is that of social control. In particular, how can open multiagent systems be configured and organized given their constantly changing structure? One leading solution is to employ the use of social norms. In human societies, social norms are essential to regulation, coordination, and cooperation. The current trend of thinking is that these same principles can be applied to agent societies, of which multiagent systems are one type. In this article, we provide an introduction to and present a holistic viewpoint of the state of normative computing (computational solutions that employ ideas based on social norms.) To accomplish this, we (1) introduce social norms and their application to agent-based systems; (2) identify and describe a normative process abstracted from the existing research; and (3) discuss future directions for research in normative multiagent computing. The intent of this paper is to introduce new researchers to the ideas that underlie normative computing and survey the existing state of the art, as well as provide direction for future research.",
"Normative systems are traditionally described and analyzed using deontic logic, describing the logical relations among obligations and permissions. However, there is still a gap between deontic logic and normative multi-agent systems such as electronic institutions, which may be seen as an instance of the gap between on the one hand logical agent specification languages and on the other hand agent architectures and programming languages. To bridge the gap, in this paper we propose an architecture containing separate subsystems or components for counts-as conditionals, conditional obligations and conditional permissions. We add a norm database component in which the three kinds of rules are stored, and we use a channel based coordination model to describe the relations among the four normative components.",
"",
"Norms are an important part of human social systems, governing many aspects of group decision-making. Yet many popularly used social models neglect to model normative effects on human behavior, relying on simple probabilistic and majority voting models of influence diffusion. Within the multi-agent research community, the study of norm emergence, compliance, and adoption has resulted in new architectures and standards for normative agents; however few of these models have been successfully applied to real-world public policy problems. In this paper, we propose a new lightweight architecture for constructing normative agents to model human social systems; the aim of our research is to be able to study the effects of different public policy decisions on a community. Here we present a case study showing the usage of our architecture for predicting trends in smoking cessation resulting from a smoke-free campus initiative. Our agent-based model combines social, environmental, and personal factors to accurately predict smoking trends and attitudes. The performance of both the whole and ablated model is evaluated against statistics from an independent source."
]
} |
1508.04531 | 2221117007 | Norms are known to be a major factor determining human behavior. It has also been shown that norms can be a quite effective tool for building agent-based societies. Various normative architectures have been proposed for designing normative multi-agent systems (NorMAS). Due to the human nature of the concept of norms, many of these architectures are built based on theories in the social sciences. Tipping point theory, as is briefly discussed in this paper, seems to have great potential to be used for designing normative architectures. This theory deals with the factors that affect social epidemics that arise in human societies. In this paper, we try to apply the main concepts of this theory to agent-based normative architectures. We show several ways to implement these concepts, and study their effects in an agent-based normative scenario. | Within the second category, social impact, norms are considered in terms of cost provided to or imposed on the parties involved in a social interaction @cite_23 . For instance, punishment and sanctions are introduced as two enforcement mechanisms used to achieve the necessary social control required to impose social norms @cite_0 . Here they demonstrate a normative agent that can punish and sanction defectors and also dynamically choose the right amount of punishment and sanction to impose @cite_27 . | {
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_23"
],
"mid": [
"199315196",
"2110098884",
"2463645168"
],
"abstract": [
"As explained by Axelrod in his seminal work An Evolutionary Approach to Norms, punishment is a key mechanism to achieve the necessary social control and to impose social norms in a self-regulated society. In this paper, we distinguish between two enforcing mechanisms, i.e., punishment and sanction, focusing on the specific ways in which they favor the emergence and maintenance of cooperation. The key research question is to find more stable and cheaper mechanisms for norm compliance in hybrid social environments (populated by humans and computational agents). To achieve this task, we have developed a normative agent able to punish and sanction defectors and to dynamically choose the right amount of punishment and sanction to impose on them (Dynamic Adaptation Heuristic). The results obtained through agent-based simulation show us that sanction is more effective and less costly than punishment in the achievement and maintenance of cooperation and it makes the population more resilient to sudden changes than if it were enforced only by mere punishment.",
"Agent-based models are a useful technique for rapidly prototyping complex social systems; they are widely used in a number of disciplines and can yield theoretical insights that are different from those produced by a variable-based analysis. However, it remains difficult to compare the results of two models and to validate the performance of an agent-based simulation. In this paper, we present a case study on how to analyze the relationship between agent-based models using category theory. Category theory is a powerful mathematical methodology that was originally introduced to organize mathematical ideas according to their shared structure. It has been successfully employed in abstract mathematical domains, but has also enjoyed some success as a formalism for software engineering. Here we present a procedure for analyzing agent-based models using category theory and a case study of its usage in analyzing two different types of simulations.",
"Holonic multi-agent systems (HOMAS) have their own properties that make them distinct from general multi-agent systems (MAS). They are neither like competitive multi-agent systems nor cooperative, and they have features from both of these categories. There are many circumstances that holonic agents need to negotiate. Agents involved in negotiations try to maximize their utility as well as their holon’s utility. In addition, holon’s Head can overrule the negotiation whenever it wants. These differences make defining a specific negotiation mechanism for holonic multi-agent systems more significant. In this work, holonic systems are introduced at the beginning; and then different aspects of negotiation in these systems are studied. We especially try to introduce the idea of holonic negotiations. A specific negotiation mechanism for holonic multi-agent systems is proposed which is consistent with the challenges of HOMAS."
]
} |
1508.03716 | 1760596101 | Network utility maximization is often applied for the cross-layer design of wireless networks considering known wireless channels. However, realistic wireless channel capacities are stochastic bearing time-varying statistics, necessitating the redesign and solution of NUM problems to capture such effects. Based on NUM theory we develop a framework for scheduling, routing, congestion control and power control in wireless multihop networks that considers stochastic long- or short-term fading wireless channels. Specifically, the wireless channel is modeled via stochastic differential equations alleviating several assumptions that exist in state-of-the-art channel modeling within the NUM framework such as the finite number of states or the stationarity. Our consideration of wireless channel modeling leads to a NUM problem formulation that accommodates non-convex and time-varying utilities. We consider both cases of non-orthogonal and orthogonal access of users to the medium. In the first case, scheduling is performed via power control, while the latter separates scheduling and power control and the role of power control is to further increase users' optimal utility by exploiting random reductions of the stochastic channel power loss while also considering energy efficiency. Finally, numerical results evaluate the performance and operation of the proposed approach and study the impact of several involved parameters on convergence. | In the sequel, in @cite_6 , NUM is employed to perform joint congestion control, power control, routing and scheduling, assuming that the channel fading process is stationary and ergodic. Similarly, in @cite_15 @cite_8 , joint congestion control, routing and scheduling is performed in the framework of NUM while assuming that there is a finite number of channel states.
In these works, the network functions in time slots, where during each time slot the channel state remains stable and changes randomly and independently on the boundary of time slots. Finally, in @cite_22 the convergence of primal-dual algorithms for solving NUM is studied under wireless fading channels with time-varying parameters (and thus statistics). Time-varying statistics of wireless channels lead to time-varying optimal solutions of the NUM problem necessitating the study of how well the solution algorithms track the changes in the optimal values. However, it is assumed that the channel fading parameters vary following a Finite State Markov Chain. | {
"cite_N": [
"@cite_15",
"@cite_22",
"@cite_6",
"@cite_8"
],
"mid": [
"2170178059",
"2120025165",
"",
"2146120924"
],
"abstract": [
"This paper considers jointly optimal design of cross-layer congestion control, routing and scheduling for ad hoc wireless networks. We first formulate the rate constraint and scheduling constraint using multicommodity flow variables, and formulate resource allocation in networks with fixed wireless channels (or single-rate wireless devices that can mask channel variations) as a utility maximization problem with these constraints. By dual decomposition, the resource allocation problem naturally decomposes into three subproblems: congestion control, routing and scheduling that interact through congestion price. The global convergence property of this algorithm is proved. We next extend the dual algorithm to handle networks with time-varying channels and adaptive multi-rate devices. The stability of the resulting system is established, and its performance is characterized with respect to an ideal reference system which has the best feasible rate region at link layer. We then generalize the aforementioned results to a general model of queueing network served by a set of interdependent parallel servers with time-varying service capabilities, which models many design problems in communication networks. We show that for a general convex optimization problem where a subset of variables lie in a polytope and the rest in a convex set, the dual-based algorithm remains stable and optimal when the constraint set is modulated by an irreducible finite-state Markov chain. This paper thus presents a step toward a systematic way to carry out cross-layer design in the framework of “layering as optimization decomposition” for time-varying channel models.",
"Distributed network utility maximization (NUM) has received an increasing intensity of interest over the past few years. Distributed solutions (e.g., the primal-dual gradient method) have been intensively investigated under fading channels. As such distributed solutions involve iterative updating and explicit message passing, it is unrealistic to assume that the wireless channel remains unchanged during the iterations. Unfortunately, the behavior of those distributed solutions under time-varying channels is in general unknown. In this paper, we shall investigate the convergence behavior and tracking errors of the iterative primal-dual scaled gradient algorithm (PDSGA) with dynamic scaling matrices (DSC) for solving distributive NUM problems under time-varying fading channels. We shall also study a specific application example, namely the multicommodity flow control and multicarrier power allocation problem in multihop ad hoc networks. Our analysis shows that the PDSGA converges to a limit region rather than a single point under the finite state Markov chain (FSMC) fading channels. We also show that the order of growth of the tracking errors is given by O(T/N), where T and N are the update interval and the average sojourn time of the FSMC, respectively. Based on this analysis, we derive a low complexity distributive adaptation algorithm for determining the adaptive scaling matrices, which can be implemented distributively at each transmitter. The numerical results show the superior performance of the proposed dynamic scaling matrix algorithm over several baseline schemes, such as the regular primal-dual gradient algorithm.",
"",
"There has been considerable work developing a stochastic network utility maximization framework using Backpressure algorithms, also known as MaxWeight. A key open problem has been the development of utility-optimal algorithms that are also delay-efficient. In this paper, we show that the Backpressure algorithm, when combined with the last-in-first-out (LIFO) queueing discipline (called LIFO-Backpressure), is able to achieve a utility that is within O(1/V) of the optimal value, for any scalar V ≥ 1, while maintaining an average delay of O([log(V)]^2) for all but a tiny fraction of the network traffic. This result holds for a general class of problems with Markovian dynamics. Remarkably, the performance of LIFO-Backpressure can be achieved by simply changing the queueing discipline; it requires no other modifications of the original Backpressure algorithm. We validate the results through empirical measurements from a sensor network testbed, which show a good match between theory and practice. Because some packets may stay in the queues for a very long time under LIFO-Backpressure, we further develop the LIFOp-Backpressure algorithm, which generalizes LIFO-Backpressure by allowing interleaving between first-in-first-out (FIFO) and LIFO. We show that LIFOp-Backpressure also achieves the same O(1/V) close-to-optimal utility performance and guarantees an average delay of O([log(V)]^2) for the packets that are served during the LIFO period."
]
} |
1508.03601 | 2951959002 | Programming question and answer (Q&A) websites, such as Quora, Stack Overflow, and Yahoo! Answers, help us to understand programming concepts easily and quickly in a way that has been tested and applied by many software developers. Stack Overflow is one of the most frequently used programming Q&A websites, where the questions and answers posted are presently analyzed manually, which requires a huge amount of time and resources. To save the effort, we present a topic-modeling-based technique to analyze the words of the original texts to discover the themes that run through them. We also propose a method to automate the process of reviewing the quality of questions on the Stack Overflow dataset in order to avoid ballooning Stack Overflow with insignificant questions. The proposed method also recommends the appropriate tags for the new post, which averts the creation of unnecessary tags on Stack Overflow. | Previous work has focused on analyzing general Q&A websites based on users' social interactions. @cite_0 have analyzed several aspects of user behavior in Yahoo! Answers, a Q&A website for the general public. The authors use the number of questions and answers in each predefined top-level category to determine the popularity of each category. @cite_6 have also analyzed Yahoo! Answers to cluster the top-level categories into three broader categories using both content and user interactions. In contrast to these efforts, instead of using existing tags, we use a statistical topic model, LDA, to automatically discover topics from the textual content of the posts and employ temporal measures to identify a topic's popularity over time. | {
"cite_N": [
"@cite_0",
"@cite_6"
],
"mid": [
"2121861737",
"2129251351"
],
"abstract": [
"Yahoo! Answers represents a new type of community portal that allows users to post questions and/or answer questions asked by other members of the community, already featuring a very large number of questions and several million users. Other recently launched services, like Microsoft’s Live QnA and Amazon’s Askville, follow the same basic interaction model. The popularity and the particular characteristics of this model call for a closer study that can help a deeper understanding of the entities involved, their interactions, and the implications of the model. Such understanding is a crucial step in social and algorithmic research that could yield improvements to various components of the service, for instance, personalizing the interaction with the system based on user interest. In this paper, we perform an analysis of 10 months' worth of Yahoo! Answers data that provides insights into user behavior and impact as well as into various aspects of the service and its possible evolution.",
"Yahoo Answers (YA) is a large and diverse question-answer forum, acting not only as a medium for sharing technical knowledge, but as a place where one can seek advice, gather opinions, and satisfy one's curiosity about a countless number of things. In this paper, we seek to understand YA's knowledge sharing and activity. We analyze the forum categories and cluster them according to content characteristics and patterns of interaction among the users. While interactions in some categories resemble expertise sharing forums, others incorporate discussion, everyday advice, and support. With such a diversity of categories in which one can participate, we find that some users focus narrowly on specific topics, while others participate across categories. This not only allows us to map related categories, but to characterize the entropy of the users' interests. We find that lower entropy correlates with receiving higher answer ratings, but only for categories where factual expertise is primarily sought after. We combine both user attributes and answer characteristics to predict, within a given category, whether a particular answer will be chosen as the best answer by the asker."
]
} |
1508.03846 | 2213810917 | Learning novel concepts and relations from relational databases is an important problem with many applications in database systems and machine learning. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. Nevertheless, the same data set may be represented under different schemas for various reasons, such as efficiency, data quality, and usability. Unfortunately, the output of current relational learning algorithms tends to vary quite substantially over the choice of schema, both in terms of learning accuracy and efficiency. This variation complicates their off-the-shelf application. In this paper, we introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de)composition schema transformations. We study both sample-based learning algorithms, which learn from sets of labeled examples, and query-based algorithms, which learn by asking queries to an oracle. We prove that current relational learning algorithms are generally not schema independent. For query-based learning algorithms we show that the (de)composition transformations influence their query complexity. We propose Castor, a sample-based relational learning algorithm that achieves schema independence by leveraging data dependencies. We support the theoretical results with an empirical study that demonstrates the schema dependence/independence of several algorithms on existing benchmark and real-world datasets under (de)compositions. | The architects of the relational model have argued for logical data independence, which, oversimplifying a bit, means that an exact query should return the same answers no matter which logical schema is chosen for the data @cite_69 . In this paper, we extend the principle of logical data independence to relational learning algorithms.
The property of schema independence also differs from the idea of logical data independence in a subtle but important way. One may achieve logical data independence by an affordable amount of experts' intervention, such as defining a set of views over the database @cite_19 . However, it generally takes more time and much deeper expertise to find the proper schema for a relational learning algorithm, particularly for database applications that contain more than a single learning algorithm @cite_61 . Hence, it is less likely to achieve schema independence via experts' intervention. | {
"cite_N": [
"@cite_61",
"@cite_19",
"@cite_69"
],
"mid": [
"2044849727",
"1595443289",
"93742489"
],
"abstract": [
"MADlib is a free, open-source library of in-database analytic methods. It provides an evolving suite of SQL-based algorithms for machine learning, data mining and statistics that run at scale within a database engine, with no need for data import/export to other tools. The goal is for MADlib to eventually serve a role for scalable database systems that is similar to the CRAN library for R: a community repository of statistical methods, this time written with scale and parallelism in mind. In this paper we introduce the MADlib project, including the background that led to its beginnings, and the motivation for its open-source nature. We provide an overview of the library's architecture and design patterns, and provide a description of various statistical methods in that context. We include performance and speedup results of a core design pattern from one of those methods over the Greenplum parallel DBMS on a modest-sized test cluster. We then report on two initial efforts at incorporating academic research into MADlib, which is one of the project's goals. MADlib is freely available at http://madlib.net, and the project is open for contributions of both new methods, and ports to additional database platforms.",
"From the Publisher: Over the past two decades, the theory concerning the logical level of database management systems has matured and become an elegant and robust piece of science. Foundations of Databases presents in-depth coverage of this theory and surveys several emerging topics. Written by three leading researchers, this advanced text presents a unifying and contemporary perspective on the field. A major effort in writing the book has been to highlight the intuitions behind the theoretical development.",
""
]
} |
1508.03846 | 2213810917 | Learning novel concepts and relations from relational databases is an important problem with many applications in database systems and machine learning. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. Nevertheless, the same data set may be represented under different schemas for various reasons, such as efficiency, data quality, and usability. Unfortunately, the output of current relational learning algorithms tends to vary quite substantially over the choice of schema, both in terms of learning accuracy and efficiency. This variation complicates their off-the-shelf application. In this paper, we introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de)composition schema transformations. We study both sample-based learning algorithms, which learn from sets of labeled examples, and query-based algorithms, which learn by asking queries to an oracle. We prove that current relational learning algorithms are generally not schema independent. For query-based learning algorithms we show that the (de)composition transformations influence their query complexity. We propose Castor, a sample-based relational learning algorithm that achieves schema independence by leveraging data dependencies. We support the theoretical results with an empirical study that demonstrates the schema dependence/independence of several algorithms on existing benchmark and real-world datasets under (de)compositions. | There is a large body of work on converting a database represented under one schema to another one without modifying its information content @cite_36 @cite_24 @cite_62 @cite_68 . We build on this work by exploring the sensitivity of relational learning algorithms to such transformations.
Researchers have defined other types of schema transformations @cite_68 . A notable group is schema mappings in the context of data exchange, which are defined using tuple-generating dependencies between source and target schemas @cite_2 . This group of transformations may modify the information content of and/or introduce incomplete information to a database. Nevertheless, for the property of schema independence, the original and transformed databases should contain essentially the same information. | {
"cite_N": [
"@cite_62",
"@cite_36",
"@cite_24",
"@cite_2",
"@cite_68"
],
"mid": [
"1503955515",
"",
"2019540568",
"2102729564",
"2024796520"
],
"abstract": [
"In this paper, we carefully explore the assumptions behind using information capacity equivalence as a measure of correctness for judging transformed schemas in schema integration and translation methodologies. We present a classification of common integration and translation tasks based on their operational goals and derive from them the relative information capacity requirements of the original and transformed schemas. We show that for many tasks, information capacity equivalence of the schemas is not strictly required. Based on this, we present a new definition of correctness that reflects each undertaken task. We then examine existing methodologies and show how anomalies can arise when using those that do not meet the proposed correctness criteria.",
"",
"A fundamental concern of data integration in an XML context is the ability to embed one or more source documents in a target document so that (a) the target document conforms to a target schema and (b) the information in the source documents is preserved. In this paper, information preservation for XML is formally studied, and the results of this study guide the definition of a novel notion of schema embedding between two XML DTD schemas represented as graphs. Schema embedding generalizes the conventional notion of graph similarity by allowing an edge in a source DTD schema to be mapped to a path in the target DTD. Instance-level embeddings can be derived from the schema embedding in a straightforward manner, such that conformance to a target schema and information preservation are guaranteed. We show that it is NP-complete to find an embedding between two DTD schemas. We also outline efficient heuristic algorithms to find candidate embeddings, which have proved effective by our experimental study. These yield the first systematic and effective approach to finding information preserving XML mappings.",
"Data exchange is the problem of taking data structured under a source schema and creating an instance of a target schema that reflects the source data as accurately as possible. In this paper, we address foundational and algorithmic issues related to the semantics of data exchange and to the query answering problem in the context of data exchange. These issues arise because, given a source instance, there may be many target instances that satisfy the constraints of the data exchange problem. We give an algebraic specification that selects, among all solutions to the data exchange problem, a special class of solutions that we call universal. We show that a universal solution has no more and no less data than required for data exchange and that it represents the entire space of possible solutions. We then identify fairly general, yet practical, conditions that guarantee the existence of a universal solution and yield algorithms to compute a canonical universal solution efficiently. We adopt the notion of the \"certain answers\" in indefinite databases as the semantics for query answering in data exchange. We investigate the computational complexity of computing the certain answers in this context and also address other algorithmic issues that arise in data exchange. In particular, we study the problem of computing the certain answers of target queries by simply evaluating them on a canonical universal solution, and we explore the boundary of what queries can and cannot be answered this way, in a data exchange setting.",
"Model management is a generic approach to solving problems of data programmability where precisely engineered mappings are required. Applications include data warehousing, e-commerce, object-to-relational wrappers, enterprise information integration, database portals, and report generators. The goal is to develop a model management engine that can support tools for all of these applications. The engine supports operations to match schemas, compose mappings, diff schemas, merge schemas, translate schemas into different data models, and generate data transformations from mappings. Much has been learned about model management since it was proposed seven years ago. This leads us to a revised vision that differs from the original in two main respects: the operations must handle more expressive mappings, and the runtime that executes mappings should be added as an important model management component. We review what has been learned from recent experience, explain the revised model management vision based on that experience, and identify the research problems that the revised vision opens up."
]
} |
1508.03846 | 2213810917 | Learning novel concepts and relations from relational databases is an important problem with many applications in database systems and machine learning. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. Nevertheless, the same data set may be represented under different schemas for various reasons, such as efficiency, data quality, and usability. Unfortunately, the output of current relational learning algorithms tends to vary quite substantially over the choice of schema, both in terms of learning accuracy and efficiency. This variation complicates their off-the-shelf application. In this paper, we introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de)composition schema transformations. We study both sample-based learning algorithms, which learn from sets of labeled examples, and query-based algorithms, which learn by asking queries to an oracle. We prove that current relational learning algorithms are generally not schema independent. For query-based learning algorithms we show that the (de)composition transformations influence their query complexity. We propose Castor, a sample-based relational learning algorithm that achieves schema independence by leveraging data dependencies. We support the theoretical results with an empirical study that demonstrates the schema dependence/independence of several algorithms on existing benchmark and real-world datasets under (de)compositions. | Researchers have defined the property of design independence for keyword query processing over XML documents @cite_31 . We extend this line of work by introducing and formally exploring the property of schema independence for relational learning algorithms.
We focus on supervised learning algorithms and their schema independence properties over the relational data model. | {
"cite_N": [
"@cite_31"
],
"mid": [
"2112543636"
],
"abstract": [
"Real-world databases often have extremely complex schemas. With thousands of entity types and relationships, each with a hundred or so attributes, it is extremely difficult for new users to explore the data and formulate queries. Schema free query interfaces (SFQIs) address this problem by allowing users with no knowledge of the schema to submit queries. We postulate that SFQIs should deliver the same answers when given alternative but equivalent schemas for the same underlying information. In this paper, we introduce and formally define design independence, which captures this property for SFQIs. We establish a theoretical framework to measure the amount of design independence provided by an SFQI. We show that most current SFQIs provide a very limited degree of design independence. We also show that SFQIs based on the statistical properties of data can provide design independence when the changes in the schema do not introduce or remove redundancy in the data. We propose a novel XML SFQI called Duplication Aware Coherency Ranking (DA-CR) based on information-theoretic relationships among the data items in the database, and prove that DA-CR is design independent. Our extensive empirical study using three real-world data sets shows that the average case design independence of current SFQIs is considerably lower than that of DA-CR. We also show that the ranking quality of DA-CR is better than or equal to that of current SFQI methods."
]
} |
1508.03720 | 1750263989 | Relation classification is an important research arena in the field of natural language processing (NLP). In this paper, we present SDP-LSTM, a novel neural network to classify the relation of two entities in a sentence. Our neural architecture leverages the shortest dependency path (SDP) between two entities; multichannel recurrent neural networks, with long short-term memory (LSTM) units, pick up heterogeneous information along the SDP. Our proposed model has several distinct features: (1) The shortest dependency paths retain most relevant information (to relation classification), while eliminating irrelevant words in the sentence. (2) The multichannel LSTM networks allow effective information integration from heterogeneous sources over the dependency paths. (3) A customized dropout strategy regularizes the neural network to alleviate overfitting. We test our model on the SemEval 2010 relation classification task, and achieve an @math -score of 83.7 , higher than competing methods in the literature. | In feature-based approaches, different sets of features are extracted and fed to a chosen classifier (e.g., logistic regression). Generally, three types of features are often used. Lexical features concentrate on the entities of interest, e.g., entities , entity POS, entity neighboring information. Syntactic features include chunking, parse trees, etc. Semantic features are exemplified by the concept hierarchy, entity class, entity mention. uses a maximum entropy model to combine these features for relation classification. However, different sets of handcrafted features are largely complementary to each other (e.g., hypernyms versus named-entity tags), and thus it is hard to improve performance in this way @cite_15 . | {
"cite_N": [
"@cite_15"
],
"mid": [
"2053238041"
],
"abstract": [
"Extracting semantic relationships between entities is challenging. This paper investigates the incorporation of diverse lexical, syntactic and semantic knowledge in feature-based relation extraction using SVM. Our study illustrates that the base phrase chunking information is very effective for relation extraction and contributes to most of the performance improvement from syntactic aspect while additional information from full parsing gives limited further enhancement. This suggests that most of useful information in full parse trees for relation extraction is shallow and can be captured by chunking. We also demonstrate how semantic information such as WordNet and Name List, can be used in feature-based relation extraction to further improve the performance. Evaluation on the ACE corpus shows that effective incorporation of diverse features enables our system outperform previously best-reported systems on the 24 ACE relation subtypes and significantly outperforms tree kernel-based systems by over 20 in F-measure on the 5 ACE relation types."
]
} |
1508.03720 | 1750263989 | Relation classification is an important research arena in the field of natural language processing (NLP). In this paper, we present SDP-LSTM, a novel neural network to classify the relation of two entities in a sentence. Our neural architecture leverages the shortest dependency path (SDP) between two entities; multichannel recurrent neural networks, with long short term memory (LSTM) units, pick up heterogeneous information along the SDP. Our proposed model has several distinct features: (1) The shortest dependency paths retain most relevant information (to relation classification), while eliminating irrelevant words in the sentence. (2) The multichannel LSTM networks allow effective information integration from heterogeneous sources over the dependency paths. (3) A customized dropout strategy regularizes the neural network to alleviate overfitting. We test our model on the SemEval 2010 relation classification task, and achieve an @math -score of 83.7 , higher than competing methods in the literature. | Deep neural networks, emerging recently, can learn underlying features automatically, and have attracted growing interest in the literature. propose a recursive neural network (RNN) along sentences' parse trees for sentiment analysis; such model can also be used to classify relations @cite_11 . explicitly weight phrases' importance in RNNs to improve performance. rebuild an RNN on the dependency path between two marked entities. explore convolutional neural networks, by which they utilize sequential information of sentences. also use the convolutional network; besides, they propose a ranking loss function with data cleaning, and achieve the state-of-the-art result in SemEval-2010 Task 8. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1889268436"
],
"abstract": [
"Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them."
]
} |
1508.03720 | 1750263989 | Relation classification is an important research arena in the field of natural language processing (NLP). In this paper, we present SDP-LSTM, a novel neural network to classify the relation of two entities in a sentence. Our neural architecture leverages the shortest dependency path (SDP) between two entities; multichannel recurrent neural networks, with long short term memory (LSTM) units, pick up heterogeneous information along the SDP. Our proposed model has several distinct features: (1) The shortest dependency paths retain most relevant information (to relation classification), while eliminating irrelevant words in the sentence. (2) The multichannel LSTM networks allow effective information integration from heterogeneous sources over the dependency paths. (3) A customized dropout strategy regularizes the neural network to alleviate overfitting. We test our model on the SemEval 2010 relation classification task, and achieve an @math -score of 83.7 , higher than competing methods in the literature. | In addition to the above studies, which mainly focus on relation classification approaches and models, other related research trends include information extraction from Web documents in a semi-supervised manner @cite_14 @cite_18 , dealing with small datasets without enough labels by distant supervision techniques @cite_16 , etc. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_16"
],
"mid": [
"2247412337",
"2150588363",
"2107598941"
],
"abstract": [
"The World Wide Web contains a significant amount of information expressed using natural language. While unstructured text is often difficult for machines to understand, the field of Information Extraction (IE) offers a way to map textual content into a structured knowledge base. The ability to amass vast quantities of information from Web pages has the potential to increase the power with which a modern search engine can answer complex queries. IE has traditionally focused on acquiring knowledge about particular relationships within a small collection of domain-specific text. Typically, a target relation is provided to the system as input along with extraction patterns or examples that have been specified by hand. Shifting to a new relation requires a person to create new patterns or examples. This manual labor scales linearly with the number of relations of interest. The task of extracting information from the Web presents several challenges for existing IE systems. The Web is large and heterogeneous; the number of potentially interesting relations is massive and their identity often unknown. To enable large-scale knowledge acquisition from the Web, this thesis presents Open Information Extraction, a novel extraction paradigm that automatically discovers thousands of relations from unstructured text and readily scales to the size and diversity of the Web.",
"We present a new approach to relation extraction that requires only a handful of training examples. Given a few pairs of named entities known to exhibit or not exhibit a particular relation, bags of sentences containing the pairs are extracted from the web. We extend an existing relation extraction method to handle this weaker form of supervision, and present experimental results demonstrating that our approach can reliably extract relations from web documents.",
"Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6 . We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | On input @math , @math can be defined by @math polynomial inequalities in @math of degree @math (see e.g. @cite_34 ). As far as we know, the state-of-the-art for designing algorithms deciding the emptiness of @math consists only of algorithms for deciding the emptiness of general semi-algebraic sets; our contribution is the first attempt to exploit structural properties of the problem, e.g. through the smallest rank property (Theorem ). | {
"cite_N": [
"@cite_34"
],
"mid": [
"2003232014"
],
"abstract": [
"Abstract We present an algorithm to determine if a real polynomial is a sum of squares (of polynomials), and to find an explicit representation if it is a sum of squares. This algorithm uses the fact that a sum of squares representation of a real polynomial corresponds to a real, symmetric, positive semi-definite matrix whose entries satisfy certain linear equations."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | A first algorithmic solution to deciding the emptiness of general semi-algebraic sets is given by the Cylindrical Algebraic Decomposition algorithm @cite_45 ; however, its runtime is doubly exponential in the number @math of variables. The first singly exponential algorithm is given in @cite_17 , and has led to a series of works (see e.g. @cite_5 @cite_33 ) culminating with the algorithms designed in @cite_35 based on the so-called critical points method. This method is based on the general idea of computing minimizers or maximizers of a well-chosen function reaching its extrema in each connected component of the set under study.
Applying @cite_35 to this problem requires @math bit operations. Note that our technique for dealing with sets @math is based on the idea underlying the critical point method. Also, in the arithmetic complexity model, our complexity estimates are more precise (the complexity constant in the exponent is known) and better. This technique is also related to algorithms based on polar varieties for grabbing sample points in semi-algebraic sets; see for example @cite_44 @cite_48 @cite_52 @cite_47 and their application to polynomial optimization @cite_54 . | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_48",
"@cite_54",
"@cite_52",
"@cite_44",
"@cite_45",
"@cite_5",
"@cite_47",
"@cite_17"
],
"mid": [
"2151549243",
"",
"1973993014",
"2063194632",
"2104223485",
"2164838307",
"1813686459",
"1601908671",
"2018701055",
"2082387287"
],
"abstract": [
"In this paper, a new algorithm for performing quantifier elimination from first order formulas over real closed fields is given. This algorithm improves the complexity of the asymptotically fastest algorithm for this problem known to this date. A new feature of this algorithm is that the role of the algebraic part (the dependence on the degrees of the input polynomials) and the combinatorial part (the dependence on the number of polynomials) are separated. Another new feature is that the degrees of the polynomials in the equivalent quantifier-free formula that is output are independent of the number of input polynomials. As special cases of this algorithm new and improved algorithms for deciding a sentence in the first order theory over real closed fields, and also for solving the existential problem in the first order theory over real closed fields, are obtained.",
"",
"Let S_0 be a smooth and compact real variety given by a reduced regular sequence of polynomials f_1, ..., f_p. This paper is devoted to the algorithmic problem of finding efficiently a representative point for each connected component of S_0. For this purpose we exhibit explicit polynomial equations that describe the generic polar varieties of S_0. This leads to a procedure which solves our algorithmic problem in time that is polynomial in the (extrinsic) description length of the input equations f_1, ..., f_p and in a suitably introduced, intrinsic geometric parameter, called the degree of the real interpretation of the given equation system f_1, ..., f_p.",
"Let @math be @math -variate polynomials with rational coefficients of maximum degree @math and let @math be the set of common complex solutions of @math . We give an algorithm which, up to some regularity assumptions on @math , computes an exact representation of the global infimum @math of the restriction of the map @math to @math , i.e., a univariate polynomial vanishing at @math and an isolating interval for @math . Furthermore, it decides whether @math is reached, and if so, it returns @math such that @math . This algorithm is probabilistic. It makes use of the notion of polar varieties. Its complexity is essentially cubic in @math and linear in the complexity of evaluating the input. This fits within the best known deterministic complexity class @math . We report on some practical experiments of a first implementation that is available as a Maple package. It appears that it ca...",
"Let f_1, ..., f_s be polynomials in Q[X_1, ..., X_n] that generate a radical ideal and let V be their complex zero-set. Suppose that V is smooth and equidimensional; then we show that computing suitable sections of the polar varieties associated to generic projections of V gives at least one point in each connected component of V ∩ R^n. We deduce an algorithm that extends that of Bank, Giusti, Heintz and Mbakop to non-compact situations. Its arithmetic complexity is polynomial in the complexity of evaluation of the input system, an intrinsic algebraic quantity and a combinatorial quantity.",
"We have developed in the past several algorithms with intrinsic complexity bounds for the problem of point finding in real algebraic varieties. Our aim here is to give a comprehensive presentation of the geometrical tools which are necessary to prove the correctness and complexity estimates of these algorithms. Our results form also the geometrical main ingredients for the computational treatment of singular hypersurfaces. In particular, we show the non–emptiness of suitable generic dual polar varieties of (possibly singular) real varieties, show that generic polar varieties may become singular at smooth points of the original variety and exhibit a sufficient criterion when this is not the case. Further, we introduce the new concept of meagerly generic polar varieties and give a degree estimate for them in terms of the degrees of generic polar varieties. The statements are illustrated by examples and a computer experiment.",
"Tarski in 1948, (Tarski 1951) published a quantifier elimination method for the elementary theory of real closed fields (which he had discovered in 1930). As noted by Tarski, any quantifier elimination method for this theory also provides a decision method, which enables one to decide whether any sentence of the theory is true or false. Since many important and difficult mathematical problems can be expressed in this theory, any computationally feasible quantifier elimination algorithm would be of utmost significance.",
"PLEASE NOTE: The original Technical Report TR00853 is missing. A copy can be found at http: www.sciencedirect.com science article pii S0747717110800033",
"Computing at least one point in each connected component of a real algebraic set is a basic subroutine to decide emptiness of semi-algebraic sets, which is a fundamental algorithmic problem in effective real algebraic geometry. In this article we propose a new algorithm for the former task, which avoids a hypothesis of properness required in many of the previous methods. We show how studying the set of non-properness of a linear projection π enables us to detect the connected components of a real algebraic set without critical points for π. Our algorithm is based on this observation and its practical counterpart, using the triangular representation of algebraic varieties. Our experiments show its efficiency on a family of examples.",
"Let the polynomials f_1, ..., f_k in Z[X_1, ..., X_n] have degrees deg(f_i) < d. An algorithm is described which recognizes the existence of a real solution of the system of inequalities f_1 >= 0, ..., f_k >= 0. In the case of a positive answer the algorithm constructs a certain finite set of solutions (which is, in fact, a representative set for the family of components of connectivity of the set of all real solutions of the system). The algorithm runs in time polynomial in M(kd)^(n^2), where M bounds the bit size of the coefficients. The previously known upper time bound for this problem was doubly exponential in n."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | To get a purely algebraic certificate of emptiness for @math , one could use the classical approach by Positivstellensatz @cite_14 @cite_1 @cite_8 . As a snake biting its tail, this would lead to a family, or hierarchy , of semidefinite programs @cite_11 . Bounds for the degree of Positivstellensatz certificates are exponential in the number of variables and have been computed in @cite_7 for Schmüdgen's, and in @cite_2 for Putinar's formulation. In the recent remarkable result @cite_0 , a uniform @math -fold exponential bound for the degree in Hilbert's 17th problem is provided.
In @cite_6 , an emptiness certificate dedicated to the spectrahedral case is obtained, by means of special quadratic modules associated to these sets. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_11"
],
"mid": [
"2165281529",
"",
"2077998936",
"2062162367",
"1964100506",
"2268784238",
"2083449425",
"1967344706"
],
"abstract": [
"We consider the problem of minimizing a polynomial over a semialgebraic set defined by polynomial equations and inequalities, which is NP-hard in general. Hierarchies of semidefinite relaxations have been proposed in the literature, involving positive semidefinite moment matrices and the dual theory of sums of squares of polynomials. We present these hierarchies of approximations and their main properties: asymptotic finite convergence, optimality certificate, and extraction of global optimum solutions. We review the mathematical tools underlying these properties, in particular, some sums of squares representation results for positive polynomials, some results about moment matrices (in particular, of Curto and Fialkow), and the algebraic eigenvalue method for solving zero-dimensional systems of polynomial equations. We try, whenever possible, to provide detailed proofs and background.",
"",
"",
"The aim of this note is to give short algebraic proofs of theorems of Handelman, Pólya and Schmüdgen concerning the algebraic structure of polynomials being positive on certain subsets of R^n. The main ingredient of the proofs is the representation theorem of Kadison–Dubois. The proof of the latter is elementary and algebraic but tricky.",
"Farkas' lemma is a fundamental result from linear programming providing linear certificates for infeasibility of systems of linear inequalities. In semidefinite programming, such linear certificates only exist for strongly infeasible linear matrix inequalities. We provide nonlinear algebraic certificates for all infeasible linear matrix inequalities in the spirit of real algebraic geometry: A linear matrix inequality @math is infeasible if and only if -1 lies in the quadratic module associated to A. We also present a new exact duality theory for semidefinite programming, motivated by the real radical and sums of squares certificates from real algebraic geometry.",
"We prove elementary recursive bounds in the degrees for Positivstellensatz and Hilbert 17-th problem, which is the expression of a nonnegative polynomial as a sum of squares of rational functions. We obtain a tower of five exponentials. A precise bound in terms of the number and degree of the polynomials and their number of variables is provided in the paper.",
"Let S = {x in R^n : g_1(x) >= 0, ..., g_m(x) >= 0} be a basic closed semialgebraic set defined by real polynomials g_i. Putinar's Positivstellensatz says that, under a certain condition stronger than compactness of S, every real polynomial f positive on S possesses a representation f = sum_{i=0}^m sigma_i g_i where g_0 = 1 and each sigma_i is a sum of squares of polynomials. Such a representation is a certificate for the nonnegativity of f on S. We give a bound on the degrees of the terms sigma_i g_i in this representation which depends on the description of S, the degree of f and a measure of how close f is to having a zero on S. As a consequence, we get information about the convergence rate of Lasserre's procedure for optimization of a polynomial subject to polynomial constraints.",
"We consider the problem of finding the unconstrained global minimum of a real-valued polynomial p(x): R^n → R, as well as the global minimum of p(x), in a compact set K defined by polynomial inequalities. It is shown that this problem reduces to solving an (often finite) sequence of convex linear matrix inequality (LMI) problems. A notion of Karush-Kuhn-Tucker polynomials is introduced in a global optimality condition. Some illustrative examples are provided."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | None of the above algorithms exploits the particular structure of spectrahedra understood as determinantal semi-algebraic sets. In @cite_30 , the authors showed that deciding emptiness of @math can be done in time @math , that is in polynomial time in @math (resp. linear time in @math ) if @math (resp. @math ) is fixed. The main drawback of this algorithm is that it is based on general procedures for quantifier elimination, and hence it does not lead to efficient practical implementations. Note also that the complexity constant in the exponent is still unknown. | {
"cite_N": [
"@cite_30"
],
"mid": [
"1590002045"
],
"abstract": [
"We show that the feasibility of a system of m linear inequalities over the cone of symmetric positive semidefinite matrices of order n can be tested in mn^O(min{m,n^2}) arithmetic operations with ln^O(min{m,n^2})-bit numbers, where l is the maximum binary size of the input coefficients. We also show that any feasible system of dimension (m,n) has a solution X such that log||X|| ≤ ln^O(min{m,n^2})."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | Also, in @cite_49 , a version of @cite_40 dedicated to spectrahedra, exploiting some of their structural properties, decides whether a linear matrix inequality @math has a rational solution, that is, whether @math contains a point with coordinates in @math . Remark that such an algorithm is not sufficient to solve our problem, since, in some degenerate but interesting cases, @math is not empty but does not contain rational points: in Section we will illustrate the application of our algorithm to one of these examples. | {
"cite_N": [
"@cite_40",
"@cite_49"
],
"mid": [
"2064131066",
"2011936157"
],
"abstract": [
"Let @math , @math for @math , @math bounding the bit length of the coefficients of the @math 's, and let @math be a quantifier-free @math -formula defining a convex semialgebraic set. We design an algorithm returning a rational point in @math if and only if @math . It requires @math bit operations. If a rational point is outputted, its coordinates have bit length dominated by @math . Using this result, we obtain a procedure for deciding whether a polynomial @math is a sum of squares of polynomials in @math . Denote by @math the degree of @math , @math the maximum bit length of the coefficients in @math , @math , and @math . This procedure requires @math bit operations, and the coefficients of the outputted polynomials have bit length dominated by @math .",
"Consider a (D x D) symmetric matrix A whose entries are linear forms in Q[X_1, ..., X_k] with coefficients of bit size ≤ τ. We provide an algorithm which decides the existence of rational solutions to the linear matrix inequality A ≥ 0 and outputs such a rational solution if it exists. This problem is of first importance: it can be used to compute algebraic certificates of positivity for multivariate polynomials. Our algorithm runs within (kτ)^O(1) 2^O(min(k, D)D^2) D^O(D^2) bit operations; the bit size of the output solution is dominated by τ^O(1) 2^O(min(k, D)D^2). These results are obtained by designing algorithmic variants of constructions introduced by Klep and Schweighofer. This leads to the best complexity bounds for deciding the existence of sums of squares with rational coefficients of a given polynomial. We have implemented the algorithm; it has been able to tackle Scheiderer's example of a multivariate polynomial that is a sum of squares over the reals but not over the rationals; providing the first computer validation of this counter-example to Sturmfels' conjecture."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | As suggested by the smallest rank property, determinantal structures play an important role in our algorithm. This structure has been recently exploited in @cite_27 and @cite_22 for the fast computation of Gröbner bases of zero-dimensional determinantal ideals and for computing zero-dimensional critical loci of maps restricted to varieties in the generic case. | {
"cite_N": [
"@cite_27",
"@cite_22"
],
"mid": [
"1985924702",
"2949730746"
],
"abstract": [
"We study the complexity of solving the generalized MinRank problem, i.e. computing the set of points where the evaluation of a polynomial matrix has rank at most r. A natural algebraic representation of this problem gives rise to a determinantal ideal: the ideal generated by all minors of size r+1 of the matrix. We give new complexity bounds for solving this problem using Gröbner basis algorithms under genericity assumptions on the input matrix. In particular, these complexity bounds allow us to identify families of generalized MinRank problems for which the arithmetic complexity of the solving process is polynomial in the number of solutions. We also provide an algorithm to compute a rational parametrization of the variety of a 0-dimensional and radical system of bi-degree (D,1). We show that its complexity can be bounded by using the complexity bounds for the generalized MinRank problem.",
"We consider the problem of computing critical points of the restriction of a polynomial map to an algebraic variety. This is of first importance since the global minimum of such a map is reached at a critical point. Thus, these points appear naturally in non-convex polynomial optimization which occurs in a wide range of scientific applications (control theory, chemistry, economics,...). Critical points also play a central role in recent algorithms of effective real algebraic geometry. Experimentally, it has been observed that Gröbner basis algorithms are efficient to compute such points. Therefore, recent software based on the so-called Critical Point Method are built on Gröbner bases engines. Let @math be polynomials in @math of degree @math , @math be their complex variety and @math be the projection map @math . The critical points of the restriction of @math to @math are defined by the vanishing of @math and some maximal minors of the Jacobian matrix associated to @math . Such a system is algebraically structured: the ideal it generates is the sum of a determinantal ideal and the ideal generated by @math . We provide the first complexity estimates on the computation of Gröbner bases of such systems defining critical points. We prove that under genericity assumptions on @math , the complexity is polynomial in the generic number of critical points, i.e. @math . More particularly, in the quadratic case D=2, the complexity of such a Gröbner basis computation is polynomial in the number of variables @math and exponential in @math . We also give experimental evidence supporting these theoretical results."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | Exploiting determinantal structures for determinantal situations remained challenging for a long time. In @cite_21 we designed a dedicated algorithm for computing sample points in the real solution set of the determinant of a square linear matrix. This has been extended in @cite_28 to real algebraic sets defined by rank constraints on a linear matrix. Observe that this problem looks similar to the ones we consider thanks to the smallest rank property. As in this paper, the traditional strategy consists in studying incidence varieties for which smoothness and regularity properties are proved under some genericity assumptions on the input linear matrix. | {
"cite_N": [
"@cite_28",
"@cite_21"
],
"mid": [
"2212277022",
"1559760555"
],
"abstract": [
"The problem of finding @math matrices (with @math ) of rank @math in a real affine subspace of dimension n has many applications in information and systems theory, where low rank is synonymous with structure and parsimony. We design computer algebra algorithms to solve this problem efficiently and exactly: the input are the rational coefficients of the matrices spanning the affine subspace as well as the expected maximum rank, and the output is a rational parametrization encoding a finite set of points that intersects each connected component of the low rank real algebraic set. The complexity of our algorithm is studied thoroughly. It is essentially polynomial in @math ; it improves on the state-of-the-art in the field. Moreover, computer experiments show the practical efficiency of our approach.",
"Let A_0, A_1, …, A_n be given square matrices of size m with rational coefficients. The paper focuses on the exact computation of one point in each connected component of the real determinantal variety { x ∈ R^n : det(A_0 + x_1 A_1 + ⋯ + x_n A_n) = 0 }. Such a problem finds applications in many areas such as control theory, computational geometry, optimization, etc. Under some genericity assumptions on the coefficients of the matrices, we provide an algorithm solving this problem whose runtime is essentially polynomial in the binomial coefficient \binom{n+m}{n}. We also report on experiments with a computer implementation of this algorithm. Its practical performance illustrates the complexity estimates. In particular, we emphasize that for subfamilies of this problem where m is fixed, the complexity is polynomial in n."
]
} |
1508.03715 | 2098958718 | Let @math be a linear matrix, or pencil, generated by given symmetric matrices @math of size @math with rational entries. The set of real vectors x such that the pencil is positive semidefinite is a convex semi-algebraic set called spectrahedron, described by a linear matrix inequality (LMI). We design an exact algorithm that, up to genericity assumptions on the input matrices, computes an exact algebraic representation of at least one point in the spectrahedron, or decides that it is empty. The algorithm does not assume the existence of an interior point, and the computed point minimizes the rank of the pencil on the spectrahedron. The degree @math of the algebraic representation of the point coincides experimentally with the algebraic degree of a generic semidefinite program associated to the pencil. We provide explicit bounds for the complexity of our algorithm, proving that the maximum number of arithmetic operations that are performed is essentially quadratic in a multilinear Bézout bound of @math . When @math (resp. @math ) is fixed, such a bound, and hence the complexity, is polynomial in @math (resp. @math ). We conclude by providing results of experiments showing practical improvements with respect to state-of-the-art computer algebra algorithms. | Hence, in the case of symmetric matrices, these results cannot be used anymore. Because of the structure of the matrix, the system defining the incidence variety involves too many equations, some of which are redundant. These redundancies need to be eliminated to characterize critical points on incidence varieties in a convenient way. In the case of Hankel matrices, the special structure of their kernel provides an efficient way to do that. This case study is done in @cite_36 . Yet, the problem of eliminating these redundancies remained unsolved in the general symmetric case; solving it is the starting point of the design of our dedicated algorithm. | {
"cite_N": [
"@cite_36"
],
"mid": [
"2170042437"
],
"abstract": [
"Let H_0, …, H_n be m × m matrices with entries in Q and Hankel structure, i.e. constant skew diagonals. We consider the linear Hankel matrix H(x) = H_0 + x_1 H_1 + … + x_n H_n and the problem of computing sample points in each connected component of the real algebraic set defined by the rank constraint rank(H(x)) ≤ r, for a given integer r ≤ m-1. Computing sample points in real algebraic sets defined by rank defects in linear matrices is a general problem that finds applications in many areas such as control theory, computational geometry, optimization, etc. Moreover, Hankel matrices appear in many areas of engineering sciences. Also, since Hankel matrices are symmetric, any algorithmic development for this problem can be seen as a first step towards a dedicated exact algorithm for solving semi-definite programming problems, i.e. linear matrix inequalities. Under some genericity assumptions on the input (such as smoothness of an incidence variety), we design a probabilistic algorithm for tackling this problem. It is an adaptation of the so-called critical point method that takes advantage of the special structure of the problem. Its complexity reflects this: it is essentially quadratic in specific degree bounds on an incidence variety. We report on practical experiments and analyze how the algorithm takes advantage of this special structure. A first implementation outperforms existing implementations for computing sample points in general real algebraic sets: it tackles examples that are out of reach of the state-of-the-art."
]
} |
1508.04028 | 1750883736 | Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and "lizard" which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions. | The algorithm in @cite_13 uses an ensemble of regression trees for super-real-time face alignment. Our face feature extraction algorithm draws upon this method as it is built on a decade of progress on the face alignment problem (see @cite_13 for a detailed review of prior work). The key contribution of the algorithm is an iterative transform of the image to a normalized coordinate system based on the current estimate of the face shape. Also, to avoid the non-convex problem of initially matching a model of the shape to the image data, the assumption is made that the initial estimate of the shape can be found in a linear subspace. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2087681821"
],
"abstract": [
"This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum of square error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and its importance to combat overfitting are also investigated. In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data."
]
} |
1508.04028 | 1750883736 | Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and "lizard" which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions. | Head pose estimation has a long history in computer vision. Murphy-Chutorian and Trivedi @cite_3 describe 74 published and tested systems from the last two decades. Generally, each approach makes one of several assumptions that limit the general applicability of the system in driver gaze detection. These assumptions include: (1) the video is continuous, (2) initial pose of the subject is known, (3) there is a stereo vision system available, (4) the camera has frontal view of the face, (5) the head can only rotate on one axis, (6) the system only has to work for one person. 
While the development of a set of assumptions is often necessary for the classification of a large number of possible poses, our approach skips the head pose estimation step (i.e. the computation of a vector in 3D space modeling the orientation of the head) and goes straight from the detection of facial features to a classification of gaze into one of six glance regions. Prior work has shown that such a classification set is sufficient for the in-vehicle environment, even under rapidly shifting lighting conditions @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2398402543",
"2149382413"
],
"abstract": [
"Automated estimation of the allocation of a driver's visual attention may be a critical component of future Advanced Driver Assistance Systems. In theory, vision-based tracking of the eye can provide a good estimate of gaze location. In practice, eye tracking from video is challenging because of sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Estimation of head pose, on the other hand, is robust to many of these effects, but cannot provide as fine-grained of a resolution in localizing the gaze. However, for the purpose of keeping the driver safe, it is sufficient to partition gaze into regions. In this effort, we propose a system that extracts facial features and classifies their spatial configuration into six regions in real-time. Our proposed method achieves an average accuracy of 91.4% at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study. Index Terms—Head pose estimation, gaze tracking, driver distraction, driver assistance systems, on-road study.",
"The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments."
]
} |
1508.04028 | 1750883736 | Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and "lizard" which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions. | Pupil detection approaches have been extensively studied. Methods usually track corneal reflection, distinct pupil shape in combination with edge-detection, characteristic light intensity of the pupil, or a 3D model of the eye to derive an estimate of an individual's pupil, iris, or eye position @cite_8 . Our approach uses an adaptive CDF-based method @cite_14 in conjunction with face alignment that significantly narrows the search space. | {
"cite_N": [
"@cite_14",
"@cite_8"
],
"mid": [
"2268346118",
"2108045700"
],
"abstract": [
"This paper presents a novel adaptive algorithm to detect the center of the pupil in frontal-view faces. The algorithm first employs the Viola–Jones face detector to find the approximate location of the face in an image. The knowledge of the face structure is exploited to detect the eye region. The histogram of the detected region is calculated and its CDF is employed to extract the eyelids and iris region in an adaptive way. The center of this region is considered as the pupil center. The experimental results show ninety-one percent accuracy in detecting the pupil center.",
"Eye-gaze detection and tracking have been an active research field in the past years as it adds convenience to a variety of applications. It is considered a significant untraditional method of human computer interaction. Head movement detection has also received researchers' attention and interest as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-art survey for eye tracking and head movement detection methods proposed in the literature. Examples of different fields of applications for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies are also investigated."
]
} |
1508.03455 | 2949302828 | We suggest a new algorithm for two-person zero-sum undiscounted stochastic games focusing on stationary strategies. Given a positive real @math , let us call a stochastic game @math -ergodic, if its values from any two initial positions differ by at most @math . The proposed new algorithm outputs for every @math in finite time either a pair of stationary strategies for the two players guaranteeing that the values from any initial positions are within an @math -range, or identifies two initial positions @math and @math and corresponding stationary strategies for the players proving that the game values starting from @math and @math are at least @math apart. In particular, the above result shows that if a stochastic game is @math -ergodic, then there are stationary strategies for the players proving @math -ergodicity. This result strengthens and provides a constructive version of an existential result by Vrieze (1980) claiming that if a stochastic game is @math -ergodic, then there are @math -optimal stationary strategies for every @math . The suggested algorithm is based on a potential transformation technique that changes the range of local values at all positions without changing the normal form of the game. 
| The following four algorithms for undiscounted stochastic games are based on stronger "ergodicity type" conditions: the strategy iteration algorithm by Hoffman and Karp @cite_17 requires that, for any pair of stationary strategies of the two players, the resulting Markov chain be irreducible; two value iteration algorithms by Federgruen are based on similar but slightly weaker requirements; see @cite_16 for the definitions and more details; the recent algorithm of Chatterjee and Ibsen-Jensen @cite_20 assumes a weaker requirement than the strong ergodicity required by Hoffman and Karp @cite_17 : they call a stochastic game almost surely ergodic if for any pair of (not necessarily stationary) strategies of the two players, and any starting position, some strongly ergodic class (in the sense of @cite_17 ) is reached with probability 1. | {
"cite_N": [
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2039151826",
"",
"2156268532"
],
"abstract": [
"This paper considers undiscounted two-person, zero-sum sequential games with finite state and action spaces. Under conditions that guarantee the existence of stationary optimal strategies, we present two successive approximation methods for finding the optimal gain rate, a solution to the optimality equation, and for any ϵ > 0, ϵ-optimal policies for both players.",
"",
"A stochastic game is played in a sequence of steps; at each step the play is said to be in some state i, chosen from a finite collection of states. If the play is in state i, the first player chooses move k and the second player chooses move l, then the first player receives a reward a^i_{kl}, and, with probability p^{ij}_{kl}, the next state is j. The concept of stochastic games was introduced by Shapley with the proviso that, with probability 1, play terminates. The authors consider the case when play never terminates, and show properties of such games and offer a convergent algorithm for their solution. In the special case when one of the players is a dummy, the nonterminating stochastic game reduces to a Markovian decision process, and the present work can be regarded as the extension to a game theoretic context of known results on Markovian decision processes."
]
} |
1508.03455 | 2949302828 | We suggest a new algorithm for two-person zero-sum undiscounted stochastic games focusing on stationary strategies. Given a positive real @math , let us call a stochastic game @math -ergodic, if its values from any two initial positions differ by at most @math . The proposed new algorithm outputs for every @math in finite time either a pair of stationary strategies for the two players guaranteeing that the values from any initial positions are within an @math -range, or identifies two initial positions @math and @math and corresponding stationary strategies for the players proving that the game values starting from @math and @math are at least @math apart. In particular, the above result shows that if a stochastic game is @math -ergodic, then there are stationary strategies for the players proving @math -ergodicity. This result strengthens and provides a constructive version of an existential result by Vrieze (1980) claiming that if a stochastic game is @math -ergodic, then there are @math -optimal stationary strategies for every @math . The suggested algorithm is based on a potential transformation technique that changes the range of local values at all positions without changing the normal form of the game. | Interestingly, potentials appear in @cite_16 implicitly, as the differences of local values of positions, as well as in @cite_17 , as the dual variables to linear programs corresponding to the controlled Markov processes, which appear when a player optimizes his strategy against a given strategy of the opponent. Yet, the potential transformation is not considered explicitly in these papers. | {
"cite_N": [
"@cite_16",
"@cite_17"
],
"mid": [
"2039151826",
"2156268532"
],
"abstract": [
"This paper considers undiscounted two-person, zero-sum sequential games with finite state and action spaces. Under conditions that guarantee the existence of stationary optimal strategies, we present two successive approximation methods for finding the optimal gain rate, a solution to the optimality equation, and for any ϵ > 0, ϵ-optimal policies for both players.",
"A stochastic game is played in a sequence of steps; at each step the play is said to be in some state i, chosen from a finite collection of states. If the play is in state i, the first player chooses move k and the second player chooses move l, then the first player receives a reward a^i_{kl}, and, with probability p^{ij}_{kl}, the next state is j. The concept of stochastic games was introduced by Shapley with the proviso that, with probability 1, play terminates. The authors consider the case when play never terminates, and show properties of such games and offer a convergent algorithm for their solution. In the special case when one of the players is a dummy, the nonterminating stochastic game reduces to a Markovian decision process, and the present work can be regarded as the extension to a game theoretic context of known results on Markovian decision processes."
]
} |
1508.02757 | 2346798022 | Performing statistical inference in high-dimension is an outstanding challenge. A major source of difficulty is the absence of precise information on the distribution of high-dimensional estimators. Here, we consider linear regression in the high-dimensional regime @math . In this context, we would like to perform inference on a high-dimensional parameters vector @math . Important progress has been achieved in computing confidence intervals for single coordinates @math . A key role in these new methods is played by a certain debiased estimator @math that is constructed from the Lasso. Earlier work establishes that, under suitable assumptions on the design matrix, the coordinates of @math are asymptotically Gaussian provided @math is @math -sparse with @math . The condition @math is stronger than the one for consistent estimation, namely @math . We study Gaussian designs with known or unknown population covariance. When the covariance is known, we prove that the debiased estimator is asymptotically Gaussian under the nearly optimal condition @math . Note that earlier work was limited to @math even for perfectly known covariance. The same conclusion holds if the population covariance is unknown but can be estimated sufficiently well, e.g. under the same sparsity conditions on the inverse covariance as assumed by earlier work. For intermediate regimes, we describe the trade-off between sparsity in the coefficients and in the inverse covariance of the design. We further discuss several applications of our results to high-dimensional inference. In particular, we propose a new estimator that is minimax optimal up to a factor @math for i.i.d. Gaussian designs. | A parallel line of research develops methods for performing valid inference after a low-dimensional model is selected for fitting high-dimensional data @cite_34 @cite_2 @cite_50 @cite_60 . The resulting significance statements are typically conditional on the selected model. 
In contrast, here we are interested in classical (unconditional) significance statements: the two approaches are broadly complementary. | {
"cite_N": [
"@cite_60",
"@cite_34",
"@cite_50",
"@cite_2"
],
"mid": [
"2096179290",
"",
"172338866",
"2099932489"
],
"abstract": [
"We present an expository, general analysis of valid post-selection or post-regularization inference about a low-dimensional target parameter in the presence of a very high-dimensional nuisance parameter that is estimated using selection or regularization methods. Our analysis provides a set of high-level conditions under which inference for the low-dimensional parameter based on testing or point estimation methods will be regular despite selection or regularization biases occurring in the estimation of the high-dimensional nuisance parameter. A key element is the use of so-called immunized or orthogonal estimating equations that are locally insensitive to small mistakes in the estimation of the high-dimensional nuisance parameter. As an illustration, we analyze affine-quadratic models and specialize these results to a linear instrumental variables model with many regressors and many instruments. We conclude with a review of other developments in post-selection inference and note that many can be viewed as special cases of the general encompassing framework of orthogonal estimating equations provided in this article.",
"",
"In this paper we propose new inference tools for forward stepwise and least angle regression. We first present a general scheme to perform valid inference after any selection event that can be characterized as the observation vector y falling into some polyhedral set. This framework then allows us to derive conditional (post-selection) hypothesis tests at any step of the forward stepwise and least angle regression procedures. We derive an exact null distribution for our proposed test statistics in finite samples, yielding p-values with exact type I error control. The tests can also be inverted to produce confidence intervals for appropriate underlying regression parameters. Application of this framework to general likelihood-based regression models (e.g., generalized linear models and the Cox model) is also discussed.",
"To perform inference after model selection, we propose controlling the selective type I error; i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting and has a similar intuitive justification, but is more powerful. Exploiting the classical theory of Lehmann"
]
} |
1508.02757 | 2346798022 | Performing statistical inference in high-dimension is an outstanding challenge. A major source of difficulty is the absence of precise information on the distribution of high-dimensional estimators. Here, we consider linear regression in the high-dimensional regime @math . In this context, we would like to perform inference on a high-dimensional parameters vector @math . Important progress has been achieved in computing confidence intervals for single coordinates @math . A key role in these new methods is played by a certain debiased estimator @math that is constructed from the Lasso. Earlier work establishes that, under suitable assumptions on the design matrix, the coordinates of @math are asymptotically Gaussian provided @math is @math -sparse with @math . The condition @math is stronger than the one for consistent estimation, namely @math . We study Gaussian designs with known or unknown population covariance. When the covariance is known, we prove that the debiased estimator is asymptotically Gaussian under the nearly optimal condition @math . Note that earlier work was limited to @math even for perfectly known covariance. The same conclusion holds if the population covariance is unknown but can be estimated sufficiently well, e.g. under the same sparsity conditions on the inverse covariance as assumed by earlier work. For intermediate regimes, we describe the trade-off between sparsity in the coefficients and in the inverse covariance of the design. We further discuss several applications of our results to high-dimensional inference. In particular, we propose a new estimator that is minimax optimal up to a factor @math for i.i.d. Gaussian designs. | The focus of the present paper is assessing statistical significance, such as confidence intervals, for single coordinates in the parameters vector @math and more generally for small groups of coordinates. 
Other inference tasks are also interesting and challenging in high-dimension, and were the object of recent investigations @cite_56 @cite_58 @cite_14 @cite_27 . | {
"cite_N": [
"@cite_14",
"@cite_27",
"@cite_58",
"@cite_56"
],
"mid": [
"2164492469",
"2245286853",
"2109177042",
"2271626458"
],
"abstract": [
"Summary Consider the following three important problems in statistical inference: constructing confidence intervals for the error of a high dimensional (p>n) regression estimator, the linear regression noise level and the genetic signal-to-noise ratio of a continuous-valued trait (related to the heritability). All three problems turn out to be closely related to the little-studied problem of performing inference on the l2-norm of the signal in high dimensional linear regression. We derive a novel procedure for this, which is asymptotically correct when the covariates are multivariate Gaussian and produces valid confidence intervals in finite samples as well. The procedure, called EigenPrism, is computationally fast and makes no assumptions on coefficient sparsity or knowledge of the noise level. We investigate the width of the EigenPrism confidence intervals, including a comparison with a Bayesian setting in which our interval is just 5% wider than the Bayes credible interval. We are then able to unify the three aforementioned problems by showing that EigenPrism with only minor modifications can make important contributions to all three. We also investigate the robustness of coverage and find that the method applies in practice and in finite samples much more widely than just the case of multivariate Gaussian covariates. Finally, we apply EigenPrism to a genetic data set to estimate the genetic signal-to-noise ratio for a number of continuous phenotypes.",
"",
"In many fields of science, we observe a response variable together with a large number of potential explanatory variables, and would like to be able to discover which variables are truly associated with the response. At the same time, we need to know that the false discovery rate (FDR) - the expected fraction of false discoveries among all discoveries - is not too high, in order to assure the scientist that most of the discoveries are indeed true and replicable. This paper introduces the knockoff filter, a new variable selection procedure controlling the FDR in the statistical linear model whenever there are at least as many observations as variables. This method achieves exact FDR control in finite sample settings no matter the design or covariates, the number of variables in the model, or the amplitudes of the unknown regression coefficients, and does not require any knowledge of the noise level. As the name suggests, the method operates by manufacturing knockoff variables that are cheap - their construction does not require any new data - and are designed to mimic the correlation structure found within the existing variables, in a way that allows for accurate FDR control, beyond what is possible with permutation-based methods. The method of knockoffs is very general and flexible, and can work with a broad class of test statistics. We test the method in combination with statistics from the Lasso for sparse regression, and obtain empirical results showing that the resulting method has far more power than existing selection rules when the proportion of null variables is high.",
"We study the fundamental problems of variance and risk estimation in high dimensional statistical modeling. In particular, we consider the problem of learning a coefficient vector Theta 0 is an element of Rp from noisy linear observations y = X Theta 0 + w is an element of Rn (p > n) and the popular estimation procedure of solving the '1-penalized least squares objective known as the LASSO or Basis Pursuit DeNoising (BPDN). In this context, we develop new estimators for the '2 estimation risk k Theta b- Theta 0k2 and the variance of the noise when distributions of Theta 0 and w are unknown. These can be used to select the regularization parameter optimally. Our approach combines Stein's unbiased risk estimate [Ste81] and the recent results of [BM12a] [BM12b] on the analysis of approximate message passing and the risk of LASSO. We establish high-dimensional consistency of our estimators for sequences of matrices X of increasing dimensions, with independent Gaussian entries. We establish validity for a broader class of Gaussian designs, conditional on a certain conjecture from statistical physics. To the best of our knowledge, this result is the first that provides an asymptotically consistent risk estimator for the LASSO solely based on data. In addition, we demonstrate through simulations that our variance estimation outperforms several existing methods in the literature."
]
} |
1508.02757 | 2346798022 | Performing statistical inference in high-dimension is an outstanding challenge. A major source of difficulty is the absence of precise information on the distribution of high-dimensional estimators. Here, we consider linear regression in the high-dimensional regime @math . In this context, we would like to perform inference on a high-dimensional parameters vector @math . Important progress has been achieved in computing confidence intervals for single coordinates @math . A key role in these new methods is played by a certain debiased estimator @math that is constructed from the Lasso. Earlier work establishes that, under suitable assumptions on the design matrix, the coordinates of @math are asymptotically Gaussian provided @math is @math -sparse with @math . The condition @math is stronger than the one for consistent estimation, namely @math . We study Gaussian designs with known or unknown population covariance. When the covariance is known, we prove that the debiased estimator is asymptotically Gaussian under the nearly optimal condition @math . Note that earlier work was limited to @math even for perfectly known covariance. The same conclusion holds if the population covariance is unknown but can be estimated sufficiently well, e.g. under the same sparsity conditions on the inverse covariance as assumed by earlier work. For intermediate regimes, we describe the trade-off between sparsity in the coefficients and in the inverse covariance of the design. We further discuss several applications of our results to high-dimensional inference. In particular, we propose a new estimator that is minimax optimal up to a factor @math for i.i.d. Gaussian designs. | The debiasing method was developed independently from several points of view @cite_37 @cite_0 @cite_43 @cite_29 @cite_9 . The present authors were motivated by the AMP analysis of the Lasso @cite_20 @cite_59 @cite_61 @cite_49 , and by the Gaussian limits that this analysis implies. 
In particular @cite_43 used those techniques to analyze standard Gaussian designs (i.e. the case @math ) in the asymptotic limit @math with @math , @math constant. In this limit, the debiased estimator was proven to be asymptotically Gaussian provided @math (for a universal constant @math ). This sparsity condition is even weaker than the one of Theorem (or Theorem ), but the result of @cite_43 only holds asymptotically. Also @cite_43 proved Gaussian convergence in a weaker sense than the one established here, implying coverage of the constructed confidence intervals only 'on average' over the coordinates @math . | {
"cite_N": [
"@cite_61",
"@cite_37",
"@cite_29",
"@cite_9",
"@cite_0",
"@cite_43",
"@cite_59",
"@cite_49",
"@cite_20"
],
"mid": [
"2123202508",
"",
"",
"2128235479",
"",
"2963278901",
"2610971674",
"2067038358",
"2082029531"
],
"abstract": [
"We consider the problem of learning a coefficient vector x0 ∈ RN from noisy linear observation y = A x0 + w ∈ Rn. In many contexts (ranging from model selection to image processing), it is desirable to construct a sparse estimator x. In this case, a popular approach consists in solving an l1-penalized least-squares problem known as the LASSO or basis pursuit denoising. For sequences of matrices A of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of AMP, a recently developed efficient algorithm, that is inspired from graphical model ideas. Simulations on real data matrices suggest that our results can be relevant in a broad array of practical applications.",
"",
"",
"Fitting high-dimensional statistical models often requires the use of non-linear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely challenging to quantify the uncertainty associated with a certain parameter estimate. Concretely, no commonly accepted procedure exists for computing classical measures of uncertainty and statistical significance as confidence intervals or p- values for these models. We consider here high-dimensional linear regression problem, and propose an efficient algorithm for constructing confidence intervals and p-values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that a certain parameter is vanishing, our method has nearly optimal power. Our approach is based on constructing a 'de-biased' version of regularized M-estimators. The new construction improves over recent work in the field in that it does not assume a special structure on the design matrix. We test our method on synthetic data and a high-throughput genomic data set about riboflavin production rate, made publicly available by (2014).",
"",
"We consider linear regression in the high-dimensional regime in which the number of observations n is smaller than the number of parameters p. A very successful approach in this setting uses 1-penalized least squares (a.k.a. the Lasso) to search for a subset of s0 < n parameters that best explain the data, while setting the other parameters to zero. A considerable amount of work has been devoted to characterizing the estimation and model selection problems within this approach. In this paper we consider instead the fundamental, but far less understood, question of statistical significance. We study this problem under the random design model in which the rows of the design matrix are i.i.d. and drawn from a high-dimensional Gaussian distribution. This situation arises, for instance, in learning high-dimensional Gaussian graphical models. Leveraging on an asymptotic distributional characterization of regularized least squares estimators, we develop a procedure for computing p-values and hence assessing statistical significance for hypothesis testing. We characterize the statistical power of this procedure, and evaluate it on synthetic and real data, comparing it with earlier proposals. Finally, we provide an upper bound on the minimax power of tests with a given significance level and show that our proposed procedure achieves this bound in case of design matrices with i.i.d. Gaussian entries.",
"“Approximate message passing” (AMP) algorithms have proved to be effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper, we provide rigorous foundation to state evolution. We prove that indeed it holds asymptotically in the large system limit for sensing matrices with independent and identically distributed Gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs. The proof technique is fundamentally different from the standard approach to density evolution, in that it copes with a large number of short cycles in the underlying factor graph. It relies instead on a conditioning technique recently developed by Erwin Bolthausen in the context of spin glass theory.",
"We consider a class of nonlinear mappings FA,N in R N indexed by symmetric random matrices A ∈ R N×N with independent entries. Within spin glass theory, special cases of these mappings correspond to iterating the TAP equations and were studied by Erwin Bolthausen. Within information theory, they are known as ‘approximate message passing’ algorithms. We study the high-dimensional (large N) behavior of the iterates of F for polynomial functions F, and prove that it is universal, i.e. it depends only on the first two moments of the entries of A, under a subgaussian tail condition. As an application, we prove the universality of a certain phase transition arising in polytope geometry and compressed sensing. This solves –for a broad class of random projections– a conjecture by David Donoho and Jared Tanner.",
"Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism."
]
} |
1508.02757 | 2346798022 | Performing statistical inference in high-dimension is an outstanding challenge. A major source of difficulty is the absence of precise information on the distribution of high-dimensional estimators. Here, we consider linear regression in the high-dimensional regime @math . In this context, we would like to perform inference on a high-dimensional parameters vector @math . Important progress has been achieved in computing confidence intervals for single coordinates @math . A key role in these new methods is played by a certain debiased estimator @math that is constructed from the Lasso. Earlier work establishes that, under suitable assumptions on the design matrix, the coordinates of @math are asymptotically Gaussian provided @math is @math -sparse with @math . The condition @math is stronger than the one for consistent estimation, namely @math . We study Gaussian designs with known or unknown population covariance. When the covariance is known, we prove that the debiased estimator is asymptotically Gaussian under the nearly optimal condition @math . Note that earlier work was limited to @math even for perfectly known covariance. The same conclusion holds if the population covariance is unknown but can be estimated sufficiently well, e.g. under the same sparsity conditions on the inverse covariance as assumed by earlier work. For intermediate regimes, we describe the trade-off between sparsity in the coefficients and in the inverse covariance of the design. We further discuss several applications of our results to high-dimensional inference. In particular, we propose a new estimator that is minimax optimal up to a factor @math for i.i.d. Gaussian designs. | A non-asymptotic result under weaker sparsity conditions, and for designs with dependent columns, was proved in @cite_22 . However, this only establishes gaussianity of @math for most of the coordinates @math . Here we prove a significantly stronger result holding uniformly over @math . 
| {
"cite_N": [
"@cite_22"
],
"mid": [
"2042542290"
],
"abstract": [
"We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters @math is comparable to or exceeds the sample size @math , a successful approach uses an @math -penalized least squares estimator, known as Lasso. Unfortunately, unlike for linear estimators (e.g., ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Very recently, a line of work javanmard2013hypothesis, confidenceJM, GBR-hypothesis has addressed this problem by constructing a debiased version of the Lasso estimator. In this paper, we study this approach for random design model, under the assumption that a good estimator exists for the precision matrix of the design. Our analysis improves over the state of the art in that it establishes nearly optimal testing power if the sample size @math asymptotically dominates @math , with @math being the sparsity level (number of non-zero coefficients). Earlier work obtains provable guarantees only for much larger sample size, namely it requires @math to asymptotically dominate @math . In particular, for random designs with a sparse precision matrix we show that an estimator thereof having the required properties can be computed efficiently. Finally, we evaluate this approach on synthetic data and compare it with earlier proposals."
]
} |
1508.02757 | 2346798022 | Performing statistical inference in high-dimension is an outstanding challenge. A major source of difficulty is the absence of precise information on the distribution of high-dimensional estimators. Here, we consider linear regression in the high-dimensional regime @math . In this context, we would like to perform inference on a high-dimensional parameters vector @math . Important progress has been achieved in computing confidence intervals for single coordinates @math . A key role in these new methods is played by a certain debiased estimator @math that is constructed from the Lasso. Earlier work establishes that, under suitable assumptions on the design matrix, the coordinates of @math are asymptotically Gaussian provided @math is @math -sparse with @math . The condition @math is stronger than the one for consistent estimation, namely @math . We study Gaussian designs with known or unknown population covariance. When the covariance is known, we prove that the debiased estimator is asymptotically Gaussian under the nearly optimal condition @math . Note that earlier work was limited to @math even for perfectly known covariance. The same conclusion holds if the population covariance is unknown but can be estimated sufficiently well, e.g. under the same sparsity conditions on the inverse covariance as assumed by earlier work. For intermediate regimes, we describe the trade-off between sparsity in the coefficients and in the inverse covariance of the design. We further discuss several applications of our results to high-dimensional inference. In particular, we propose a new estimator that is minimax optimal up to a factor @math for i.i.d. Gaussian designs. | Most of the work on statistical inference in high-dimensional models has been focused so far on linear regression. The debiasing method admits a natural extension to generalized linear models that was analyzed in @cite_29 . Robustness to model misspecification was studied in @cite_36 . 
An R-package for inference in high-dimension that uses the node-wise Lasso is available @cite_46 . An R implementation of the method @cite_9 (which does not make sparsity assumptions on @math ) is also available (see http://web.stanford.edu/~montanar/sslasso). | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_29",
"@cite_46"
],
"mid": [
"1531316514",
"2128235479",
"",
"1741702827"
],
"abstract": [
"We consider high-dimensional inference when the assumed linear model is misspecified. We describe some correct interpretations and corresponding sufficient assumptions for valid asymptotic inference of the model parameters, which still have a useful meaning when the model is misspecified. We largely focus on the de-sparsified Lasso procedure but we also indicate some implications for (multiple) sample splitting techniques. In view of available methods and software, our results contribute to robustness considerations with respect to model misspecification.",
"Fitting high-dimensional statistical models often requires the use of non-linear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely challenging to quantify the uncertainty associated with a certain parameter estimate. Concretely, no commonly accepted procedure exists for computing classical measures of uncertainty and statistical significance as confidence intervals or p- values for these models. We consider here high-dimensional linear regression problem, and propose an efficient algorithm for constructing confidence intervals and p-values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that a certain parameter is vanishing, our method has nearly optimal power. Our approach is based on constructing a 'de-biased' version of regularized M-estimators. The new construction improves over recent work in the field in that it does not assume a special structure on the design matrix. We test our method on synthetic data and a high-throughput genomic data set about riboflavin production rate, made publicly available by (2014).",
"",
"We present a (selective) review of recent frequentist highdimensional inference methods for constructing p-values and confidence intervals in linear and generalized linear models. We include a broad, comparative empirical study which complements the viewpoint from statistical methodology and theory. Furthermore, we introduce and illustrate the Rpackage hdi which easily allows the use of different methods and supports reproducibility."
]
} |
1508.02677 | 2951124472 | The design, implementation and testing of Multi Agent Systems is typically a very complex task. While a number of specialist agent programming languages and toolkits have been created to aid in the development of such systems, the provision of associated development tools still lags behind those available for other programming paradigms. This includes tools such as debuggers and profilers to help analyse system behaviour, performance and efficiency. AgentSpotter is a profiling tool designed specifically to operate on the concepts of agent-oriented programming. This paper extends previous work on AgentSpotter by discussing its Call Graph View, which presents system performance information, with reference to the communication between the agents in the system. This is aimed at aiding developers in examining the effect that agent communication has on the processing requirements of the system. | Besides performance analysis, most agent frameworks provide a debugging tool similar to the Agent Factory Debugger @cite_13 , which provides information about the mental state and communication from the viewpoint of individual agents. A different type of debugging tool is the Agent Viewer that is provided in the Brahms toolkit @cite_1 , which displays agent timelines so as to understand when agents' actions are taken. | {
"cite_N": [
"@cite_1",
"@cite_13"
],
"mid": [
"827822956",
"1544271477"
],
"abstract": [
"INTRODUCTION A space mission operations system is a complex network of human organizations, information and deepspace network systems and spacecraft hardware. As in other organizations, one of the problems in mission operations is managing the relationship of the mission information systems related to how people actually work (practices). Brahms, a multi-agent modeling and simulation tool, was used to model and simulate NASA’s Mars Exploration Rover (MER) mission work practice. The objective was to investigate the value of work practice modeling for mission operations design. From spring 2002 until winter 2003, a Brahms modeler participated in mission systems design sessions and operations testing for the MER mission held at Jet Propulsion Laboratory (JPL). He observed how designers interacted with the Brahms tool. This paper discusses mission system designers’ reactions to the simulation output during model validation and the presentation of generated work procedures. This project spurred JPL’s interest in the Brahms model, but it was never included as part of the formal mission design process. We discuss why this occurred. Subsequently, we used the MER model to develop a future mission operations concept. Team members were reluctant to use the MER model, even though it appeared to be highly relevant to their effort. We describe some of the tool issues we encountered.",
"The ability to effectively debug agent-oriented applications is vital if agent technologies are to become adopted as a viable alternative for complex systems development. Recent advances in the area have focussed on the provision of support for debugging agent interaction where tools have been provided that allow developers to analyse and debug the messages that are passed between agents. One potential approach for constructing agent-oriented applications is through the use of agent programming languages. Such languages employ mental notions such as beliefs, goals, commitments, and intentions to facilitate the construction of agent programs that specify the high-level behaviour of the agent. This paper describes how debugging has been supported for one such language, namely the Agent Factory Agent Programming Language (AFAPL)."
]
} |
1508.02677 | 2951124472 | The design, implementation and testing of Multi Agent Systems is typically a very complex task. While a number of specialist agent programming languages and toolkits have been created to aid in the development of such systems, the provision of associated development tools still lags behind those available for other programming paradigms. This includes tools such as debuggers and profilers to help analyse system behaviour, performance and efficiency. AgentSpotter is a profiling tool designed specifically to operate on the concepts of agent-oriented programming. This paper extends previous work on AgentSpotter by discussing its Call Graph View, which presents system performance information, with reference to the communication between the agents in the system. This is aimed at aiding developers in examining the effect that agent communication has on the processing requirements of the system. | In the JADE agent development framework, a Sniffer Agent is a FIPA-compliant agent that monitors messages created with an Agent Communication Language (ACL) passed between agents and presents these in a simple graphical interface @cite_6 . A more sophisticated tool, called , provides more detailed information on agent communication @cite_4 . Again, the principal aim of this is to aid in debugging errors in MASs that relate to coordination or cooperation. | {
"cite_N": [
"@cite_4",
"@cite_6"
],
"mid": [
"1568975357",
"1904065432"
],
"abstract": [
"Multi-agent systems (MAS) are a special kind of distributed systems in which the main entities are autonomous in a proactive sense. These systems are special distributed systems because of their complexity and hence their unpredictability. Agents can spontaneously engage in complex interactions, guided by their own goals and intentions. When developing such kinds of system, there are many problems the developer has to face. All these problems make it virtually impossible to totally debug a quite complex multi-agent system (i.e. a MAS in which hundreds or even thousands of agents are involved). In this article we present a debugging tool we have developed for the JADE agents platform. Hence, it is a FIPA technology based tool and seeks to alleviate typical debugging problems derived from distribution and unpredictability.",
"JADE (Java Agent Development Framework) is a software framework to make easy the development of multi-agent applications in compliance with the FIPA specifications. JADE can then be considered a middle-ware that implements an efficient agent platform and supports the development of multi agent systems. JADE agent platform tries to keep high the performance of a distributed agent system implemented with the Java language. In particular, its communication architecture tries to offer flexible and efficient messaging, transparently choosing the best transport available and leveraging state-of-the-art distributed object technology embedded within Java runtime environment. JADE uses an agent model and Java implementation that allow good runtime efficiency, software reuse, agent mobility and the realization of different agent architectures."
]
} |
1508.02845 | 2951769687 | The purpose of this paper is to study the dynamical behavior of the sequence produced by a forward-backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining a mean monotone operator as an Aumann integral, and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudo trajectory in the sense of Benaïm and Hirsch of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates towards a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization or variational inequality problems in random environments. | The problem of minimizing an objective function in a noisy environment has brought forth a very rich body of literature in the field of stochastic approximation @cite_26 @cite_21 . In the framework of this paper, most of this literature examines the evolution of the projected stochastic gradient or subgradient algorithm, where the projection is made on a fixed constraining set. | {
"cite_N": [
"@cite_26",
"@cite_21"
],
"mid": [
"1491706803",
"1499021337"
],
"abstract": [
"These notes were written for a D.E.A. course given at Ecole Normale Superieure de Cachan during the 1996–97 and 1997–98 academic years and at University Toulouse III during the 1997–98 academic year. Their aim is to introduce the reader to the dynamical system aspects of the theory of stochastic approximations.",
"Introduction 1 Review of Continuous Time Models 1.1 Martingales and Martingale Inequalities 1.2 Stochastic Integration 1.3 Stochastic Differential Equations: Diffusions 1.4 Reflected Diffusions 1.5 Processes with Jumps 2 Controlled Markov Chains 2.1 Recursive Equations for the Cost 2.2 Optimal Stopping Problems 2.3 Discounted Cost 2.4 Control to a Target Set and Contraction Mappings 2.5 Finite Time Control Problems 3 Dynamic Programming Equations 3.1 Functionals of Uncontrolled Processes 3.2 The Optimal Stopping Problem 3.3 Control Until a Target Set Is Reached 3.4 A Discounted Problem with a Target Set and Reflection 3.5 Average Cost Per Unit Time 4 Markov Chain Approximation Method: Introduction 4.1 Markov Chain Approximation 4.2 Continuous Time Interpolation 4.3 A Markov Chain Interpolation 4.4 A Random Walk Approximation 4.5 A Deterministic Discounted Problem 4.6 Deterministic Relaxed Controls 5 Construction of the Approximating Markov Chains 5.1 One Dimensional Examples 5.2 Numerical Simplifications 5.3 The General Finite Difference Method 5.4 A Direct Construction 5.5 Variable Grids 5.6 Jump Diffusion Processes 5.7 Reflecting Boundaries 5.8 Dynamic Programming Equations 5.9 Controlled and State Dependent Variance 6 Computational Methods for Controlled Markov Chains 6.1 The Problem Formulation 6.2 Classical Iterative Methods 6.3 Error Bounds 6.4 Accelerated Jacobi and Gauss-Seidel Methods 6.5 Domain Decomposition 6.6 Coarse Grid-Fine Grid Solutions 6.7 A Multigrid Method 6.8 Linear Programming 7 The Ergodic Cost Problem: Formulation and Algorithms 7.1 Formulation of the Control Problem 7.2 A Jacobi Type Iteration 7.3 Approximation in Policy Space 7.4 Numerical Methods 7.5 The Control Problem 7.6 The Interpolated Process 7.7 Computations 7.8 Boundary Costs and Controls 8 Heavy Traffic and Singular Control 8.1 Motivating Examples"
]
} |
1508.02845 | 2951769687 | The purpose of this paper is to study the dynamical behavior of the sequence produced by a forward-backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining a mean monotone operator as an Aumann integral, and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudo trajectory in the sense of Benaïm and Hirsch of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates towards a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization or variational inequality problems in random environments. | In the case where the constraining set has a complicated structure, an incremental minimization algorithm with random constraint updates has been proposed in @cite_1 , where a deterministic convex function @math is minimized on a finite intersection of closed and convex constraining sets. The algorithm developed in @cite_1 consists of a subgradient step over the objective @math followed by an update step towards a randomly chosen constraining set. Using the same principle, a distributed algorithm involving an additional consensus step has been proposed in @cite_22 . Random iterations involving proximal and subgradient operators were considered in @cite_10 and in @cite_20 . In @cite_20 , the functions @math are supposed to have a full domain, to satisfy @math for some constant @math which does not depend on @math and, finally, are such that @math . In the present paper, such conditions are not needed. | {
"cite_N": [
"@cite_10",
"@cite_1",
"@cite_22",
"@cite_20"
],
"mid": [
"2073750241",
"2012180727",
"2092507976",
"2128038973"
],
"abstract": [
"We consider the minimization of a sum @math consisting of a large number of convex component functions f i . For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference machine learning.",
"This paper deals with iterative gradient and subgradient methods with random feasibility steps for solving constrained convex minimization problems, where the constraint set is specified as the intersection of possibly infinitely many constraint sets. Each constraint set is assumed to be given as a level set of a convex but not necessarily differentiable function. The proposed algorithms are applicable to the situation where the whole constraint set of the problem is not known in advance, but it is rather learned in time through observations. Also, the algorithms are of interest for constrained optimization problems where the constraints are known but the number of constraints is either large or not finite. We analyze the proposed algorithm for the case when the objective function is differentiable with Lipschitz gradients and the case when the objective function is not necessarily differentiable. The behavior of the algorithm is investigated both for diminishing and non-diminishing stepsize values. The almost sure convergence to an optimal solution is established for diminishing stepsize. For non-diminishing stepsize, the error bounds are established for the expected distances of the weighted averages of the iterates from the constraint set, as well as for the expected sub-optimality of the function values along the weighted averages.",
"Random projection algorithm is of interest for constrained optimization when the constraint set is not known in advance or the projection operation on the whole constraint set is computationally prohibitive. This paper presents a distributed random projection algorithm for constrained convex optimization problems that can be used by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constrained set. We prove that the iterates of all agents converge to the same point in the optimal set almost surely. Experiments on distributed support vector machines demonstrate good performance of the algorithm.",
"We consider convex optimization problems with structures that are suitable for stochastic sampling. In particular, we focus on problems where the objective function is an expected value or is a sum of a large number of component functions, and the constraint set is the intersection of a large number of simpler sets. We propose an algorithmic framework for projection-proximal methods using random subgradient/function updates and random constraint updates, which contain as special cases several known algorithms as well as new algorithms. To analyze the convergence of these algorithms in a unified manner, we prove a general coupled convergence theorem. It states that the convergence is obtained from an interplay between two coupled processes: progress towards feasibility and progress towards optimality. Moreover, we consider a number of typical sampling/randomization schemes for the subgradients/component functions and the constraints, and analyze their performance using our unified convergence framework."
]
} |
1508.02845 | 2951769687 | The purpose of this paper is to study the dynamical behavior of the sequence produced by a forward-backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining a mean monotone operator as an Aumann integral, and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudo trajectory in the sense of Benaïm and Hirsch of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates towards a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization or variational inequality problems in random environments. | The algorithm can also be used to solve a variational inequality problem. Let @math where @math are closed and convex sets in @math . Consider the problem of finding @math that solves the variational inequality [ \forall x \in C, \quad \langle F(x_\star), x - x_\star \rangle \geq 0, ] where @math is a monotone single-valued operator on @math @cite_2 @cite_9 . Since the projection on @math is difficult, one can use the simple stochastic algorithm @math , where the random variables @math are distributed on the set @math . The variant where @math is itself an expectation can also be considered, @math . The work @cite_2 addresses this context. In @cite_2 , it is assumed that @math is strongly monotone and that the stochastic Lipschitz property @math holds, where @math is a positive constant. In our work, the strong monotonicity of @math is not needed, and the Lipschitz property is essentially replaced with the condition @math , where @math is a subgradient of @math at @math (for instance, the least norm one), and @math satisfies a moment condition. | {
"cite_N": [
"@cite_9",
"@cite_2"
],
"mid": [
"1516681311",
"2039696315"
],
"abstract": [
"Preface to the SIAM edition Preface Glossary of notations Introduction Part I. Variational Inequalities in Rn Part II. Variational Inequalities in Hilbert Space Part III. Variational Inequalities for Monotone Operators Part IV. Problems of Regularity Part V. Free Boundary Problems and the Coincidence Set of the Solution Part VI. Free Boundary Problems Governed by Elliptic Equations and Systems Part VII. Applications of Variational Inequalities Part VIII. A One Phase Stefan Problem Bibliography Index.",
"We consider the solution of strongly monotone variational inequalities of the form F(x*)'(x - x*) >= 0, for all x in X. We focus on special structures that lend themselves to sampling, such as when X is the intersection of a large number of sets, and/or F is an expected value or is the sum of a large number of component functions. We propose new methods that combine elements of incremental constraint projection and stochastic gradient. These methods are suitable for problems involving large-scale data, as well as problems with certain online or distributed structures. We analyze the convergence and the rate of convergence of these methods with various types of sampling schemes, and we establish a substantial rate of convergence advantage for random sampling over cyclic sampling."
]
} |
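The random-constraint projection scheme described in the row above — an operator step followed by projection onto a single randomly drawn component set — can be sketched as follows. Everything concrete here (the strongly monotone choice F(x) = x - b, the box constraints, the step-size schedule, and the iteration count) is an illustrative assumption, not a value taken from the cited works.

```python
import numpy as np

# Sketch of a random-projection scheme for the variational inequality
# <F(x*), x - x*> >= 0 on C = C_1 ∩ C_2 (illustrative problem data).
# F(x) = x - b is strongly monotone; each C_i is a box, so projecting on a
# single randomly drawn C_i is cheap even when projecting on C is not.
rng = np.random.default_rng(0)
b = np.array([2.0, 2.0])
boxes = [(np.array([0.0, 0.0]), np.array([1.0, 3.0])),   # C_1
         (np.array([0.0, 0.0]), np.array([3.0, 1.0]))]   # C_2

x = np.zeros(2)
for n in range(20000):
    gamma = 2.0 / (n + 2)                      # decreasing step sizes
    lo, hi = boxes[rng.integers(len(boxes))]   # draw one constraint set
    x = np.clip(x - gamma * (x - b), lo, hi)   # F-step, then projection

# The VI solution is the projection of b on C = [0,1]^2, namely (1, 1).
print(x)
```

The point of the sketch is the per-iteration cost: one evaluation of F and one projection onto a single simple set, matching the motivation given in the cited incremental-projection literature.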
1508.02845 | 2951769687 | The purpose of this paper is to study the dynamical behavior of the sequence produced by a forward-backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining a mean monotone operator as an Aumann integral, and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudo trajectory in the sense of Benaïm and Hirsch of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates towards a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization or variational inequality problems in random environments. | In the same vein as our paper, @cite_28 considered a collection @math of @math maximal monotone operators, and studied the iterations [ y_{n+1} \in A(\xi_{n+1}(1), x_n), \quad x_{n+1} = \prod_{i=2}^{N} ( I + \gamma_{n+1} A(\xi_{n+1}(i), \cdot) )^{-1} (x_n - \gamma_{n+1} y_{n+1}), ] where @math , and where @math is a sequence of permutations of the set @math . The convergence of @math to a zero of @math is established in @cite_28 . In the recent paper @cite_12 , a relaxed version of Algorithm is considered, where @math is cocoercive and where its output, as well as the output of the resolvent of @math , are subjected to random errors. The convergence of the iterates to a zero of @math is established under summability assumptions on these errors. | {
"cite_N": [
"@cite_28",
"@cite_12"
],
"mid": [
"1998576973",
"2963692017"
],
"abstract": [
"In a method of treating a textile material in a shrinking bath containing a swelling agent comprising the step of introducing into said bath a driving-off agent which substantially reduces the solubility of the swelling agent in the shrinking bath.",
"We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in Hilbert spaces. Our general setting features stochastic approximations of the cocoercive operator and stochastic perturbations in the evaluation of the resolvents of the set-valued operator. In addition, relaxations and not necessarily vanishing proximal parameters are allowed. Weak and strong almost sure convergence properties of the iterates is established under mild conditions on the underlying stochastic processes. Leveraging these results, we also establish the almost sure convergence of the iterates of a stochastic variant of a primal-dual proximal splitting method for composite minimization problems."
]
} |
1508.02845 | 2951769687 | The purpose of this paper is to study the dynamical behavior of the sequence produced by a forward-backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining a mean monotone operator as an Aumann integral, and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudo trajectory in the sense of Benaïm and Hirsch of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates towards a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization or variational inequality problems in random environments. | Regarding the convergence rate analysis, let us mention @cite_36 @cite_29 which investigate the performance of the algorithm @math , where @math is a noisy estimate of the gradient @math . The same algorithm is addressed in @cite_37 , where the proximity operator is replaced by the resolvent of a fixed maximal monotone operator, and @math is replaced by a noisy version of a (single-valued) cocoercive operator evaluated at @math . The paper @cite_11 addresses the statistical analysis of the empirical means of the estimates obtained from the random proximal point algorithm. | {
"cite_N": [
"@cite_36",
"@cite_29",
"@cite_37",
"@cite_11"
],
"mid": [
"1513883278",
"1658113598",
"2949090393",
"2271805634"
],
"abstract": [
"We study a perturbed version of the proximal gradient algorithm for which the gradient is not known in closed form and should be approximated. We address the convergence and derive a non-asymptotic bound on the convergence rate for the perturbed proximal gradient, a perturbed averaged version of the proximal gradient algorithm and a perturbed version of the fast iterative shrinkage-thresholding (FISTA) of becketteboulle09 . When the approximation is achieved by using Monte Carlo methods, we derive conditions involving the Monte Carlo batch-size and the step-size of the algorithm under which convergence is guaranteed. In particular, we show that the Monte Carlo approximations of some averaged proximal gradient algorithms and a Monte Carlo approximation of FISTA achieve the same convergence rates as their deterministic counterparts. To illustrate, we apply the algorithms to high-dimensional generalized linear mixed models using @math -penalization.",
"We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the sum of a smooth and a possibly non-smooth component. We consider the iterates convergence and derive @math non asymptotic bounds in expectation in the strongly convex case, as well as almost sure convergence results under weaker assumptions. Our approach allows to avoid averaging and weaken boundedness assumptions which are often considered in theoretical studies and might not be satisfied in practice.",
"We propose an inertial forward-backward splitting algorithm to compute the zero of a sum of two monotone operators allowing for stochastic errors in the computation of the operators. More precisely, we establish almost sure convergence in real Hilbert spaces of the sequence of iterates to an optimal solution. Then, based on this analysis, we introduce two new classes of stochastic inertial primal-dual splitting methods for solving structured systems of composite monotone inclusions and prove their convergence. Our results extend to the stochastic and inertial setting various types of structured monotone inclusion problems and corresponding algorithmic solutions. Application to minimization problems is discussed.",
""
]
} |
1508.02845 | 2951769687 | The purpose of this paper is to study the dynamical behavior of the sequence produced by a forward-backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining a mean monotone operator as an Aumann integral, and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that with probability one, the interpolated process obtained from the iterates is an asymptotic pseudo trajectory in the sense of Benaïm and Hirsch of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates towards a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization or variational inequality problems in random environments. | This paper follows the line of thought of the recent paper @cite_16 , which studies the behavior of the random iterates @math in a Hilbert space, and establishes the convergence of the empirical means @math towards a zero of the mean operator @math . In the present paper, the proximal point algorithm is replaced with the more general forward-backward algorithm. Thanks to the dynamic approach developed here, the convergences of both @math and @math are studied. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2963005234"
],
"abstract": [
"The purpose of this paper is to establish the almost sure weak ergodic convergence of a sequence of iterates @math given by @math where @math is a collection of maximal monotone operators on a separable Hilbert space, @math is an independent identically distributed sequence of random variables on @math and @math is a positive sequence in @math . The weighted averaged sequence of iterates is shown to converge weakly to a zero (assumed to exist) of the Aumann expectation @math under the assumption that the latter is maximal. We consider applications to stochastic optimization problems of the form @math , where @math is a normal convex integrand and @math is a collection of closed convex sets. In this case, the iterations are closely related to a stochastic proximal algorithm recently proposed by Wang and Bertsekas [Increment..."
]
} |
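The forward-backward iteration discussed in the row above — a gradient step on a randomly selected smooth term followed by the resolvent of a second operator — can be sketched for the special case where the second operator is the subdifferential of an ℓ1 penalty, whose resolvent is the soft-thresholding map. The problem data, penalty weight, and step-size schedule below are illustrative assumptions, not the setting of any single cited work.

```python
import numpy as np

# Stochastic forward-backward sketch:
#   x_{n+1} = prox_{gamma_n * g}( x_n - gamma_n * grad f(xi_n, x_n) ),
# with f(xi, x) a randomly drawn least-squares term and g = lam * ||x||_1.
rng = np.random.default_rng(2)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_sparse = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = A @ x_sparse
lam = 0.1

def soft_threshold(v, t):
    # Resolvent (proximal operator) of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.mean((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(d)
for k in range(50000):
    gamma = 0.3 / (k + 10) ** 0.75   # decreasing steps: sum = inf, sum of squares < inf
    i = rng.integers(n)              # forward step on one random smooth term
    x = soft_threshold(x - gamma * A[i] * (A[i] @ x - y[i]), gamma * lam)

print(objective(x))  # objective value reached by the stochastic iterates
```

With decreasing steps the iterates approach a minimizer of the mean objective, which is the finite-sum instance of the zero-finding problem studied in this line of work.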
1508.02810 | 1915957564 | We consider the problem of minimizing a sum of @math functions over a convex parameter set @math where @math . In this regime, algorithms which utilize sub-sampling techniques are known to be effective. In this paper, we use sub-sampling techniques together with low-rank approximation to design a new randomized batch algorithm which possesses comparable convergence rate to Newton's method, yet has much smaller per-iteration cost. The proposed algorithm is robust in terms of starting point and step size, and enjoys a composite convergence rate, namely, quadratic convergence at start and linear convergence when the iterate is close to the minimizer. We develop its theoretical analysis which also allows us to select near-optimal algorithm parameters. Our theoretical results can be used to obtain convergence rates of previously proposed sub-sampling based algorithms as well. We demonstrate how our results apply to well-known machine learning problems. Lastly, we evaluate the performance of our algorithm on several datasets under various scenarios. | In contrast, online algorithms are the option of choice for very large @math since the computation per update is independent of @math . In the case of stochastic gradient descent (SGD), the descent direction is formed by a randomly selected gradient @cite_8 . Improvements to SGD have been developed by incorporating the previous gradient directions in the current update @cite_45 @cite_40 @cite_22 @cite_33 . | {
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_8",
"@cite_40",
"@cite_45"
],
"mid": [
"114517082",
"2146502635",
"",
"2000200144",
"1791038712"
],
"abstract": [
"During the last decade, the data sizes have grown faster than the speed of processors. In this context, the capabilities of statistical machine learning methods is limited by the computing time rather than the sample size. A more precise analysis uncovers qualitatively different tradeoffs for the case of small-scale and large-scale learning problems. The large-scale case involves the computational complexity of the underlying optimization algorithm in non-trivial ways. Unlikely optimization algorithms such as stochastic gradient descent show amazing performance for large-scale problems. In particular, second order stochastic gradient and averaged stochastic gradient are asymptotically efficient after a single pass on the training set.",
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.",
"",
"Recent deep neural network systems for large vocabulary speech recognition are trained with minibatch stochastic gradient descent but use a variety of learning rate scheduling schemes. We investigate several of these schemes, particularly AdaGrad. Based on our analysis of its limitations, we propose a new variant 'AdaDec' that decouples long-term learning-rate scheduling from per-parameter learning rate variation. AdaDec was found to result in higher frame accuracies than other methods. Overall, careful choice of learning rate schemes leads to faster convergence and lower word error rates.",
"We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in general, and when the sum is strongly-convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for p < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies."
]
} |
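The single-gradient update that defines SGD in the row above can be sketched on a finite-sum least-squares problem. The data, step size, and iteration count are arbitrary illustrative choices; the point is that each update touches one randomly selected term, so its cost does not depend on the number of summands.

```python
import numpy as np

# Minimal SGD sketch for min_w (1/n) * sum_i 0.5 * (a_i . w - y_i)^2.
rng = np.random.default_rng(1)
n, d = 200, 2
A = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0])
y = A @ w_true                      # noiseless targets, so SGD can converge exactly

w = np.zeros(d)
step = 0.05
for _ in range(20000):
    i = rng.integers(n)             # one sample drawn uniformly at random
    w -= step * A[i] * (A[i] @ w - y[i])   # gradient of the i-th term only

print(w)  # close to w_true
```

Variance-reduced variants such as SAG keep a memory of past per-term gradients to accelerate this basic recursion, as described in the abstracts above.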
1508.02810 | 1915957564 | We consider the problem of minimizing a sum of @math functions over a convex parameter set @math where @math . In this regime, algorithms which utilize sub-sampling techniques are known to be effective. In this paper, we use sub-sampling techniques together with low-rank approximation to design a new randomized batch algorithm which possesses comparable convergence rate to Newton's method, yet has much smaller per-iteration cost. The proposed algorithm is robust in terms of starting point and step size, and enjoys a composite convergence rate, namely, quadratic convergence at start and linear convergence when the iterate is close to the minimizer. We develop its theoretical analysis which also allows us to select near-optimal algorithm parameters. Our theoretical results can be used to obtain convergence rates of previously proposed sub-sampling based algorithms as well. We demonstrate how our results apply to well-known machine learning problems. Lastly, we evaluate the performance of our algorithm on several datasets under various scenarios. | Batch algorithms, on the other hand, can achieve faster convergence and exploit second order information. They are competitive for intermediate @math . Several methods in this category aim at quadratic, or at least super-linear convergence rates. In particular, Quasi-Newton methods have proven effective @cite_29 @cite_4 . Another approach towards the same goal is to utilize sub-sampling to form an approximate Hessian @cite_20 @cite_30 @cite_50 @cite_49 @cite_25 @cite_14 . If the sub-sampled Hessian is close to the true Hessian, these methods can approach NM in terms of convergence rate; nevertheless, they enjoy much smaller complexity per update. No convergence rate analysis is available for these methods: this analysis is the main contribution of our paper.
To the best of our knowledge, the best result in this direction is proven in @cite_30 , which establishes asymptotic convergence without quantitative bounds (exploiting general theory from @cite_1 ). | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_4",
"@cite_29",
"@cite_1",
"@cite_50",
"@cite_49",
"@cite_25",
"@cite_20"
],
"mid": [
"1991083751",
"",
"2124541940",
"1554663460",
"592715486",
"",
"2951301236",
"",
"196761320"
],
"abstract": [
"This paper describes how to incorporate sampled curvature information in a Newton-CG method and in a limited memory quasi-Newton method for statistical learning. The motivation for this work stems from supervised machine learning applications involving a very large number of training points. We follow a batch approach, also known in the stochastic optimization literature as a sample average approximation approach. Curvature information is incorporated in two subsampled Hessian algorithms, one based on a matrix-free inexact Newton iteration and one on a preconditioned limited memory BFGS iteration. A crucial feature of our technique is that Hessian-vector multiplications are carried out with a significantly smaller sample size than is used for the function and gradient. The efficiency of the proposed methods is illustrated using a machine learning application involving speech recognition.",
"",
"It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name \"polynomial-time interior-point methods\", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].",
"From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimalization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.",
"Preface Part I. Basics: 1. Optimization models 2. Fundamentals of optimization 3. Representation of linear constraints Part II. Linear Programming: 4. Geometry of linear programming 5. The simplex method 6. Duality and sensitivity 7. Enhancements of the simplex method 8. Network problems 9. Computational complexity of linear programming 10. Interior-point methods of linear programming Part III. Unconstrained Optimization: 11. Basics of unconstrained optimization 12. Methods for unconstrained optimization 13. Low-storage methods for unconstrained problems Part IV. Nonlinear Optimization: 14. Optimality conditions for constrained problems 15. Feasible-point methods 16. Penalty and barrier methods Part V. Appendices: Appendix A. Topics from linear algebra Appendix B. Other fundamentals Appendix C. Software Bibliography Index.",
"",
"We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the dual variables. However, unlike existing methods such as stochastic dual coordinate ascent, SDNA is capable of utilizing all curvature information contained in the examples, which leads to striking improvements in both theory and practice - sometimes by orders of magnitude. In the special case when an L2-regularizer is used in the primal, the dual problem is a concave quadratic maximization problem plus a separable term. In this regime, SDNA in each step solves a proximal subproblem involving a random principal submatrix of the Hessian of the quadratic function; whence the name of the method. If, in addition, the loss functions are quadratic, our method can be interpreted as a novel variant of the recently introduced Iterative Hessian Sketch.",
"",
"We develop a 2nd-order optimization method based on the \"Hessian-free\" approach, and apply it to training deep auto-encoders. Without using pre-training, we obtain results superior to those reported by Hinton & Salakhutdinov (2006) on the same tasks they considered. Our method is practical, easy to use, scales nicely to very large datasets, and isn't limited in applicability to auto-encoders, or any specific model class. We also discuss the issue of \"pathological curvature\" as a possible explanation for the difficulty of deep-learning and how 2nd-order optimization, and our method in particular, effectively deals with it."
]
} |
1508.02810 | 1915957564 | We consider the problem of minimizing a sum of @math functions over a convex parameter set @math where @math . In this regime, algorithms which utilize sub-sampling techniques are known to be effective. In this paper, we use sub-sampling techniques together with low-rank approximation to design a new randomized batch algorithm which possesses comparable convergence rate to Newton's method, yet has much smaller per-iteration cost. The proposed algorithm is robust in terms of starting point and step size, and enjoys a composite convergence rate, namely, quadratic convergence at start and linear convergence when the iterate is close to the minimizer. We develop its theoretical analysis which also allows us to select near-optimal algorithm parameters. Our theoretical results can be used to obtain convergence rates of previously proposed sub-sampling based algorithms as well. We demonstrate how our results apply to well-known machine learning problems. Lastly, we evaluate the performance of our algorithm on several datasets under various scenarios. | Further improvements have been suggested either by utilizing conjugate gradient (CG) methods or by using Krylov sub-spaces @cite_20 @cite_30 @cite_50 . Sub-sampling can also be used to obtain an approximate solution, if an exact solution is not required @cite_38 . Lastly, there are various hybrid algorithms that combine two or more techniques to gain improvement. Examples include sub-sampling and Quasi-Newton @cite_6 @cite_11 @cite_31 , SGD and GD @cite_42 , NGD and NM @cite_34 , NGD and low-rank approximation @cite_44 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_42",
"@cite_6",
"@cite_44",
"@cite_50",
"@cite_31",
"@cite_34",
"@cite_20",
"@cite_11"
],
"mid": [
"1991083751",
"2170582628",
"1751687266",
"1491622225",
"",
"",
"1707676469",
"1598497354",
"196761320",
""
],
"abstract": [
"This paper describes how to incorporate sampled curvature information in a Newton-CG method and in a limited memory quasi-Newton method for statistical learning. The motivation for this work stems from supervised machine learning applications involving a very large number of training points. We follow a batch approach, also known in the stochastic optimization literature as a sample average approximation approach. Curvature information is incorporated in two subsampled Hessian algorithms, one based on a matrix-free inexact Newton iteration and one on a preconditioned limited memory BFGS iteration. A crucial feature of our technique is that Hessian-vector multiplications are carried out with a significantly smaller sample size than is used for the function and gradient. The efficiency of the proposed methods is illustrated using a machine learning application involving speech recognition.",
"We address the problem of fast estimation of ordinary least squares (OLS) from large amounts of data (n ≫ p). We propose three methods which solve the big data problem by subsampling the covariance matrix using either a single or two stage estimation. All three run in the order of size of input i.e. O(np) and our best method, Uluru, gives an error bound of O(√p n) which is independent of the amount of subsampling as long as it is above a threshold. We provide theoretical bounds for our algorithms in the fixed design (with Randomized Hadamard preconditioning) as well as sub-Gaussian random design setting. We also compare the performance of our methods on synthetic and real-world datasets and show that if observations are i.i.d., sub-Gaussian then one can directly subsample without the expensive Randomized Hadamard preconditioning without loss of accuracy.",
"Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum; these methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental-gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits.",
"We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperforms previous stochastic gradient methods for parameter estimation in conditional random fields. We are working on analyzing the convergence of online (L)BFGS, and extending it to nonconvex optimization problems.",
"",
"",
"The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi- Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust and scalable. It employs the classical BFGS update formula in its limited memory form, and is based on the observation that it is beneficial to collect curvature information pointwise, and at regular intervals, through (sub-sampled) Hessian-vector products. This technique differs from the classical approach that would compute differences of gradients, and where controlling the quality of the curvature estimates can be difficult. We present numerical results on problems arising in machine learning that suggest that the proposed method shows much promise.",
"Nowadays, for many tasks such as object recognition or language modeling, data is plentiful. As such, an important challenge has become to find learning algorithms which can make use of all the available data. In this setting, called \"large-scale learning\" by Bottou & Bousquet (2008), learning and optimization become different and powerful optimization algorithms are suboptimal learning algorithms. While most efforts are focused on adapting optimization algorithms for learning by efficiently using the information contained in the Hessian, Le (2008) exploited the special structure of the learning problem to achieve faster convergence. In this paper, we investigate a natural way of combining these two directions to yield fast and robust learning algorithms.",
"We develop a 2nd-order optimization method based on the \"Hessian-free\" approach, and apply it to training deep auto-encoders. Without using pre-training, we obtain results superior to those reported by Hinton & Salakhutdinov (2006) on the same tasks they considered. Our method is practical, easy to use, scales nicely to very large datasets, and isn't limited in applicability to auto-encoders, or any specific model class. We also discuss the issue of \"pathological curvature\" as a possible explanation for the difficulty of deep-learning and how 2nd-order optimization, and our method in particular, effectively deals with it.",
""
]
} |
1508.02674 | 1933280806 | Advances in Agent Oriented Software Engineering have focused on the provision of frameworks and toolkits to aid in the creation of Multi Agent Systems (MASs). However, despite the need to address the inherent complexity of such systems, little progress has been made in the development of tools to allow for the debugging and understanding of their inner workings. This paper introduces a novel performance analysis system, named AgentSpotter, which facilitates such analysis. AgentSpotter was developed by mapping conventional profiling concepts to the domain of MASs. We outline its integration into the Agent Factory multi agent framework. | In the traditional software engineering community, historical profilers such as gprof @cite_10 or performance analysis APIs like ATOM @cite_13 and the Java Virtual Machine Tool Interface (JVMTI) @cite_12 have made performance analysis more accessible for researchers and software engineers. However, the MAS community does not yet have general access to these types of tools. | {
"cite_N": [
"@cite_13",
"@cite_10",
"@cite_12"
],
"mid": [
"2047226031",
"2144433126",
"2197000251"
],
"abstract": [
"ATOM (Analysis Tools with OM) is a single framework for building a wide range of customized program analysis tools. It provides the common infrastructure present in all code-instrumenting tools; this is the difficult and time-consuming part. The user simply defines the tool-specific details in instrumentation and analysis routines. Building a basic block counting tool like Pixie with ATOM requires only a page of code. ATOM, using OM link-time technology, organizes the final executable such that the application program and user's analysis routines run in the same address space. Information is directly passed from the application program to the analysis routines through simple procedure calls instead of inter-process communication or files on disk. ATOM takes care that analysis routines do not interfere with the program's execution, and precise information about the program is presented to the analysis routines at all times. ATOM uses no simulation or interpretation. ATOM has been implemented on the Alpha AXP under OSF/1. It is efficient and has been used to build a diverse set of tools for basic block counting, profiling, dynamic memory recording, instruction and data cache simulation, pipeline simulation, evaluating branch prediction, and instruction scheduling.",
"Large complex programs are composed of many small routines that implement abstractions for the routines that call them. To be useful, an execution profiler must attribute execution time in a way that is significant for the logical structure of a program as well as for its textual decomposition. This data must then be displayed to the user in a convenient and informative way. The gprof profiler accounts for the running time of called routines in the running time of the routines that call them. The design and use of this profiler is described.",
""
]
} |
1508.02674 | 1933280806 | Advances in Agent Oriented Software Engineering have focused on the provision of frameworks and toolkits to aid in the creation of Multi Agent Systems (MASs). However, despite the need to address the inherent complexity of such systems, little progress has been made in the development of tools to allow for the debugging and understanding of their inner workings. This paper introduces a novel performance analysis system, named AgentSpotter, which facilitates such analysis. AgentSpotter was developed by mapping conventional profiling concepts to the domain of MASs. We outline its integration into the Agent Factory multi agent framework. | Unique amongst all of the mainstream MAS development platforms, Cougaar is alone in integrating a performance measurement infrastructure directly into the system architecture @cite_4 . Although this is not applicable to other platforms, it does provide a good insight into the features that MAS developers could reasonably expect from any performance measurement application. The principal characteristics of this structure are as follows: Primary data channels consist of raw polling sensors at the heart of the system execution engine that gather simple low-impact data elements such as counters and event sensors. Secondary channels provide more elaborate information, such as summaries of the state of individual components and history analysis that stores performance data over lengthy running times. Computer-level metrics provide data on such items as CPU load, network load and memory usage. The message transport service gathers data on messages flowing through it. An extension mechanism based on servlets allows the addition of visualisation plugins that bind to the performance metrics data source. The service that is charged with gathering these metrics is designed so as to have no impact on system performance when not in use. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2132811232"
],
"abstract": [
"Distributed multi-agent systems have become more mature in recent years, with the growing potential to handle large volumes of data and coordinate the operations of many organizations. However, widespread adoption by industry and government has been blocked in part by concerns about scalability and survivability, especially in unpredictable environments of attacks and system failures. In this paper, we present Cougaar, an open-source Java-based agent architecture that provides a survivable base on which to deploy large-scale, robust distributed applications. We define the challenging problem of the UltraLog project; a distributed logistics application comprised of more than 1000 agents distributed over 100 hosts, which guided the design of the Cougaar architecture to ensure scalability, robustness, and security. We conclude with a survey of Cougaar uses as the preferred agent platform for a variety of applications."
]
} |
1508.02674 | 1933280806 | Advances in Agent Oriented Software Engineering have focused on the provision of frameworks and toolkits to aid in the creation of Multi Agent Systems (MASs). However, despite the need to address the inherent complexity of such systems, little progress has been made in the development of tools to allow for the debugging and understanding of their inner workings. This paper introduces a novel performance analysis system, named AgentSpotter, which facilitates such analysis. AgentSpotter was developed by mapping conventional profiling concepts to the domain of MASs. We outline its integration into the Agent Factory multi agent framework. | Other analysis tools exist for aiding the development of MASs. However, these tend to be narrower in their focus, concentrating only on specific aspects of debugging MASs and typically being only applicable to a specific agent platform. The Agent Factory Debugger @cite_11 is an example of a tool that is typical of most multi agent frameworks. Its principal function is inspecting the status and mental state of an individual agent: its goals, beliefs, commitments and the messages it has exchanged with other agents. Tools such as this give limited information about the interaction between agents and the consequences of these interactions. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1544271477"
],
"abstract": [
"The ability to effectively debug agent-oriented applications is vital if agent technologies are to become adopted as a viable alternative for complex systems development. Recent advances in the area have focussed on the provision of support for debugging agent interaction where tools have been provided that allow developers to analyse and debug the messages that are passed between agents. One potential approach for constructing agent-oriented applications is through the use of agent programming languages. Such languages employ mental notions such as beliefs, goals, commitments, and intentions to facilitate the construction of agent programs that specify the high-level behaviour of the agent. This paper describes how debugging has been supported for one such language, namely the Agent Factory Agent Programming Language (AFAPL)."
]
} |
1508.02674 | 1933280806 | Advances in Agent Oriented Software Engineering have focused on the provision of frameworks and toolkits to aid in the creation of Multi Agent Systems (MASs). However, despite the need to address the inherent complexity of such systems, little progress has been made in the development of tools to allow for the debugging and understanding of their inner workings. This paper introduces a novel performance analysis system, named AgentSpotter, which facilitates such analysis. AgentSpotter was developed by mapping conventional profiling concepts to the domain of MASs. We outline its integration into the Agent Factory multi agent framework. | The Brahms toolkit features an AgentViewer that allows developers to view, along a timeline, the actions that particular agents have taken, so as to enable them to verify that the conceptual model of the MAS is reflected in reality @cite_1 . An administrator tool for the LS/TS agent platform provides some high-level system monitoring information, such as overall memory consumption and data on the size of the agent population @cite_0 . Another type of agent debugging tool is the ACLAnalyzer that has been developed for the JADE platform @cite_5 . Rather than concentrating on individual agents, it is intended to analyse agent interaction in order to see how the community of agents interacts and is organised. In addition to visualising the number and size of messages sent between specific agents, it also employs clustering in order to identify cliques in the agent community. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_1"
],
"mid": [
"376136620",
"1568975357",
"827822956"
],
"abstract": [
"The JADE Platform and Experiences with Mobile MAS Applications.- A-globe: Agent Development Platform with Inaccessibility and Mobility Support.- Supporting Agent Development in Erlang through the eXAT Platform.- Living Systems(R) Technology Suite.- Multi Agent System Development Kit.- An Integrated Development Environment for Electronic Institutions.- Jadex: A BDI-Agent System Combining Middleware and Reasoning.- Component Agent Framework for Non-Experts (CAFnE) Toolkit.- The WSDL2Agent Tool.- WS2JADE: A Tool for Run-time Deployment and Control of Web Services as JADE Agent Services.- A System for Analysis of Multi-Issue Negotiation.- FuzzyMAN: An Agent-based E-Marketplace with a Voice and Mobile User Interface.- Efficient Agent Communication in Wireless Environments.- AMETAS - the Asynchronous MEssage Transfer Agent System.- Tracy: An Extensible Plugin-Oriented Software Architecture for Mobile Agent Toolkits.- The Packet-World: A Test Bed for Investigating Situated Multi-Agent Systems.- Decommitment in a Competitive Multi-Agent Transportation Setting.- TeamWorker: An Agent-Based Support System for Mobile Task Execution.",
"Multi-agent systems (MAS) are a special kind of distributed systems in which the main entities are autonomous in a proactive sense. These systems are special distributed systems because of their complexity and hence their unpredictability. Agents can spontaneously engage in complex interactions, guided by their own goals and intentions. When developing such kinds of system, there are many problems the developer has to face. All these problems make it virtually impossible to totally debug a quite complex multi-agent system (i.e. a MAS in which hundreds or even thousands of agents are involved). In this article we present a debugging tool we have developed for the JADE agents platform. Hence, it is a FIPA technology based tool and seeks to alleviate typical debugging problems derived from distribution and unpredictability.",
"INTRODUCTION A space mission operations system is a complex network of human organizations, information and deepspace network systems and spacecraft hardware. As in other organizations, one of the problems in mission operations is managing the relationship of the mission information systems related to how people actually work (practices). Brahms, a multi-agent modeling and simulation tool, was used to model and simulate NASA’s Mars Exploration Rover (MER) mission work practice. The objective was to investigate the value of work practice modeling for mission operations design. From spring 2002 until winter 2003, a Brahms modeler participated in mission systems design sessions and operations testing for the MER mission held at Jet Propulsion Laboratory (JPL). He observed how designers interacted with the Brahms tool. This paper discusses mission system designers’ reactions to the simulation output during model validation and the presentation of generated work procedures. This project spurred JPL’s interest in the Brahms model, but it was never included as part of the formal mission design process. We discuss why this occurred. Subsequently, we used the MER model to develop a future mission operations concept. Team members were reluctant to use the MER model, even though it appeared to be highly relevant to their effort. We describe some of the tool issues we encountered."
]
} |
1508.02407 | 2196704028 | We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph. | The random graph model @math considered here is also known as the general random intersection graph in the literature; e.g., see @cite_16 @cite_14 @cite_20 . To the best of our knowledge, this model was first considered by Godehardt and Jaworski @cite_20 and by @cite_3 . Results for both the existence of isolated nodes and graph connectivity have been established; see below for a comparison of these results with those established here. Later, @cite_14 analyzed the component evolution problem in the general random intersection graph and provided scaling conditions for the existence of a giant component. There, they also established that under certain conditions @math behaves very similarly to a standard Erdős-Rényi graph @cite_9 .
Taking advantage of this similarity, @cite_16 established various results for the @math -connectivity and @math -robustness of the general random intersection graph by means of a coupling argument. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_3",
"@cite_16",
"@cite_20"
],
"mid": [
"2135291682",
"",
"1519188191",
"2060271434",
"188357332"
],
"abstract": [
"We study a connectivity property of a secure wireless network that uses random pre-distribution of keys. A network is composed of n sensors. Each sensor is assigned a collection of d different keys drawn uniformly at random from a given set of m keys. Two sensors are joined by a communication link if they share a common key. We show that for large n with high probability the connected component of size Ω(n) emerges in the network when the probability of a link exceeds the threshold 1/n. Similar component evolution is shown for networks where sensors communicate if they share at least s common keys. © 2008 Wiley Periodicals, Inc. NETWORKS, 2009",
"",
"We study properties of random intersection graphs generated by a random bipartite graph. We focus on the connectedness of these random intersection graphs and give threshold functions for this property and results for the size of the largest components in such graphs. The application of intersection graphs to find clusters and to test their randomness in sets of non-metric data is shortly discussed.",
"Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature.",
"Graph concepts generally are useful for defining and detecting clusters. We consider basic properties of random intersection graphs generated by a random bipartite graph BG n, m on n+m vertices. In particular, we focus on the distribution of the number of isolated vertices, and on the distribution of the vertex degrees. These results are applied to study the asymptotic properties of such random intersection graphs for the special case that the distribution P (m) is degenerated. The application of this model to find clusters and to test their randomness especially for non-metric data is discussed."
]
} |
1508.02407 | 2196704028 | We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph. | We now compare our results with those established in the literature. Our main argument is that previous results for the connectivity of inhomogeneous random key graphs are constrained to very narrow parameter ranges that are impractical for wireless sensor network applications. In particular, we will argue below that the result by @cite_16 is restricted to very large key ring sizes, rendering them impractical for resource-constrained sensor networks. On the other hand, the results by @cite_20 @cite_4 focus on fixed key ring sizes that do not grow with the network size @math . As a consequence, in order to ensure connectivity, their result requires a key pool size @math that is much smaller than typically prescribed for security and resiliency purposes. | {
"cite_N": [
"@cite_16",
"@cite_4",
"@cite_20"
],
"mid": [
"2060271434",
"",
"188357332"
],
"abstract": [
"Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature.",
"",
"Graph concepts generally are usefulfor defining and detecting clusters. We consider basic properties of random intersection graphs generated by a random bipartite graph BG n, m on n+m vertices. In particular, we focus on the distr ibution of the number of isolated vertices, and on the distribution of the vertex degrees. These results are applied to study the asymptotic properties of such random intersection graphs for the special case that the distribution P (m) is degenerated. The application of this model to find clusters and to test their randomness especially for non-metric data is discussed."
]
} |
1508.02407 | 2196704028 | We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph. | In comparing Theorems , and , it is worth noting that @math -connectivity is a stronger property than connectivity, which in turn is stronger than the absence of isolated nodes. However, although Theorems and consider strong graph properties, we now argue why the established results are not likely to be applicable to real-world sensor networks. First, Theorem focuses on the case where all possible key rings have a finite size that does not scale with @math . In addition, with @math fixed, it is clear that the scaling conditions ) and ) both require Unfortunately, the key pool size @math often needs to be much larger than the network size @math @cite_15 @cite_10 as otherwise the network will be extremely vulnerable to node capture attacks.
In fact, one can see that with ) in effect, an adversary can compromise a significant portion of the key pool (and, hence, network communication) by capturing @math nodes. | {
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"2116269350",
"2089421240"
],
"abstract": [
"Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs. The scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities. It relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment, and for key revocation, re-keying, and incremental addition of nodes. The security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented.",
"We give, for the first time, a precise mathematical analysis of the connectivity and security properties of sensor networks that make use of the random predistribution of keys. We also show how to set the parameters---pool and key ring size---in such a way that the network is not only connected with high probability via secure links but also provably resilient, in the following sense: We formally show that any adversary that captures sensors at random with the aim of compromising a constant fraction of the secure links must capture at least a constant fraction of the nodes of the network. In the context of wireless sensor networks where random predistribution of keys is employed, we are the first to provide a mathematically precise proof, with a clear indication of parameter choice, that two crucial properties---connectivity via secure links and resilience against malicious attacks---can be obtained simultaneously. We also show in a mathematically rigorous way that the network enjoys another strong security property. The adversary cannot partition the network into two linear size components, compromising all the links between them, unless it captures linearly many nodes. This implies that the network is also fault tolerant with respect to node failures. Our theoretical results are complemented by extensive simulations that reinforce our main conclusions."
]
} |
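The node-capture concern in the row above can be illustrated numerically: each captured sensor reveals its whole key ring, so a small key pool is exhausted after only a handful of captures. A minimal sketch of this effect, assuming the Eschenauer–Gligor setup described in the abstracts; the function name and parameter values are illustrative, not taken from the cited papers:

```python
import random

def compromised_fraction(n_captured, ring_size, pool_size, seed=0):
    """Fraction of the key pool an adversary learns by capturing
    n_captured sensors, each holding ring_size keys drawn uniformly
    at random (without replacement) from a pool of pool_size keys."""
    rng = random.Random(seed)
    learned = set()
    for _ in range(n_captured):
        # Each captured sensor exposes its entire key ring.
        learned.update(rng.sample(range(pool_size), ring_size))
    return len(learned) / pool_size

# The expected fraction is 1 - (1 - ring_size/pool_size)**n_captured,
# so the same number of captures is far more damaging against a pool
# that is small relative to the network size.
small_pool = compromised_fraction(30, 20, 200)    # pool comparable to key rings
large_pool = compromised_fraction(30, 20, 20000)  # pool much larger than key rings
```

With the illustrative parameters above, capturing 30 sensors exposes most of the 200-key pool but only a sliver of the 20000-key pool, which is the intuition behind requiring the pool size to grow well beyond the network size.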
1508.02407 | 2196704028 | We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph. | We now focus on Theorem , where the major problem arises from one of its assumptions. For the model to qualify as an inhomogeneous random key graph, the variance of the key ring size should be non-zero. In fact, given that key ring sizes are integer-valued, the simplest possible case would be that @math and @math for some @math and positive integer @math . This would amount to assigning either @math or @math keys to each node with probabilities @math and @math , respectively. In this case, we can easily see that @math as long as @math . Therefore, for an inhomogeneous random key graph, the condition ) implies that @math ; put differently, Theorem enforces the mean key ring size to be much larger than @math .
However, a typical wireless sensor network consists of a very large number of sensors, each with very limited memory and computational capability @cite_15 @cite_10 . As a result, key rings of size @math are unlikely to be implementable in most practical network deployments. In fact, Di Pietro et al. @cite_10 suggested that key rings of size @math are acceptable for sensor networks. | {
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"2116269350",
"2089421240"
],
"abstract": [
"Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs. The scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities. It relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment, and for key revocation, re-keying, and incremental addition of nodes. The security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented.",
"We give, for the first time, a precise mathematical analysis of the connectivity and security properties of sensor networks that make use of the random predistribution of keys. We also show how to set the parameters---pool and key ring size---in such a way that the network is not only connected with high probability via secure links but also provably resilient, in the following sense: We formally show that any adversary that captures sensors at random with the aim of compromising a constant fraction of the secure links must capture at least a constant fraction of the nodes of the network. In the context of wireless sensor networks where random predistribution of keys is employed, we are the first to provide a mathematically precise proof, with a clear indication of parameter choice, that two crucial properties---connectivity via secure links and resilience against malicious attacks---can be obtained simultaneously. We also show in a mathematically rigorous way that the network enjoys another strong security property. The adversary cannot partition the network into two linear size components, compromising all the links between them, unless it captures linearly many nodes. This implies that the network is also fault tolerant with respect to node failures. Our theoretical results are complemented by extensive simulations that reinforce our main conclusions."
]
} |
1508.02407 | 2196704028 | We introduce a new random key predistribution scheme for securing heterogeneous wireless sensor networks. Each of the @math sensors in the network is classified into @math classes according to some probability distribution @math . Before deployment, a class- @math sensor is assigned @math cryptographic keys that are selected uniformly at random from a common pool of @math keys. Once deployed, a pair of sensors can communicate securely if and only if they have a key in common. We model the communication topology of this network by a newly defined inhomogeneous random key graph. We establish scaling conditions on the parameters @math and @math so that this graph: 1) has no isolated nodes and 2) is connected, both with high probability. The results are given in the form of zero-one laws with the number of sensors @math growing unboundedly large; critical scalings are identified and shown to coincide for both graph properties. Our results are shown to complement and improve those given by and for the same model, therein referred to as the general random intersection graph. | In conclusion, we showed that our results enable parameter choices that are widely regarded as practical in real-world sensor networks, while previous results given in @cite_16 and @cite_3 do not. | {
"cite_N": [
"@cite_16",
"@cite_3"
],
"mid": [
"2060271434",
"1519188191"
],
"abstract": [
"Random intersection graphs have received much attention for nearly two decades, and currently have a wide range of applications ranging from key predistribution in wireless sensor networks to modeling social networks. In this paper, we investigate the strengths of connectivity and robustness in a general random intersection graph model. Specifically, we establish sharp asymptotic zero-one laws for k-connectivity and k-robustness, as well as the asymptotically exact probability of k-connectivity, for any positive integer k. The k-connectivity property quantifies how resilient is the connectivity of a graph against node or edge failures. On the other hand, k-robustness measures the effectiveness of local diffusion strategies (that do not use global graph topology information) in spreading information over the graph in the presence of misbehaving nodes. In addition to presenting the results under the general random intersection graph model, we consider two special cases of the general model, a binomial random intersection graph and a uniform random intersection graph, which both have numerous applications as well. For these two specialized graphs, our results on asymptotically exact probabilities of k-connectivity and asymptotic zero-one laws for k-robustness are also novel in the literature.",
"We study properties of random intersection graphs generated by a random bipartite graph. We focus on the connectedness of these random intersection graphs and give threshold functions for this property and results for the size of the largest components in such graphs. The application of intersection graphs to find clusters and to test their randomness in sets of non-metric data is shortly discussed."
]
} |